langchain: Adding a new section aware splitter to langchain #16526

Merged · 28 commits · Apr 1, 2024
169 changes: 169 additions & 0 deletions docs/docs/modules/data_connection/HTML_section_aware_splitter.ipynb
@@ -0,0 +1,169 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c95fcd15cd52c944",
"metadata": {
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"# HTMLSectionSplitter\n",
"## Description and motivation\n",
"Similar in concept to the <a href=\"https://python.langchain.com/docs/modules/data_connection/document_transformers/HTML_header_metadata\">`HTMLHeaderTextSplitter`</a>, the `HTMLSectionSplitter` is a \"structure-aware\" chunker that splits text at the element level and adds metadata for each header \"relevant\" to any given chunk. It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures. It can be used with other text splitters as part of a chunking pipeline. Internally, it uses the `RecursiveCharacterTextSplitter` when the section size is larger than the chunk size. It also considers the font size of the text to determine whether it is a section or not based on the determined font size threshold. Use `xslt_path` to provide an absolute path to transform the HTML so that it can detect sections based on provided tags. The default is to use the `converting_to_header.xslt` file in the `data_connection/document_transformers` directory. This is for converting the html to a format/layout that is easier to detect sections. For example, `span` based on their font size can be converted to header tags to be detected as a section.\n",
"\n",
"## Usage examples\n",
"#### 1) With an HTML string:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "initial_id",
"metadata": {
"ExecuteTime": {
"end_time": "2023-10-02T18:57:49.208965400Z",
"start_time": "2023-10-02T18:57:48.899756Z"
},
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain.text_splitter import HTMLSectionSplitter\n",
"\n",
"html_string = \"\"\"\n",
" <!DOCTYPE html>\n",
" <html>\n",
" <body>\n",
" <div>\n",
" <h1>Foo</h1>\n",
" <p>Some intro text about Foo.</p>\n",
" <div>\n",
" <h2>Bar main section</h2>\n",
" <p>Some intro text about Bar.</p>\n",
" <h3>Bar subsection 1</h3>\n",
" <p>Some text about the first subtopic of Bar.</p>\n",
" <h3>Bar subsection 2</h3>\n",
" <p>Some text about the second subtopic of Bar.</p>\n",
" </div>\n",
" <div>\n",
" <h2>Baz</h2>\n",
" <p>Some text about Baz</p>\n",
" </div>\n",
" <br>\n",
" <p>Some concluding text about Foo</p>\n",
" </div>\n",
" </body>\n",
" </html>\n",
"\"\"\"\n",
"\n",
"headers_to_split_on = [(\"h1\", \"Header 1\"), (\"h2\", \"Header 2\")]\n",
"\n",
"html_splitter = HTMLSectionSplitter(headers_to_split_on=headers_to_split_on)\n",
"html_header_splits = html_splitter.split_text(html_string)\n",
"html_header_splits"
]
},
{
"cell_type": "markdown",
"id": "e29b4aade2a0070c",
"metadata": {
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"#### 2) Pipelined to another splitter, with html loaded from a html string content:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6ada8ea093ea0475",
"metadata": {
"ExecuteTime": {
"end_time": "2023-10-02T18:57:51.016141300Z",
"start_time": "2023-10-02T18:57:50.647495400Z"
},
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"html_string = \"\"\"\n",
" <!DOCTYPE html>\n",
" <html>\n",
" <body>\n",
" <div>\n",
" <h1>Foo</h1>\n",
" <p>Some intro text about Foo.</p>\n",
" <div>\n",
" <h2>Bar main section</h2>\n",
" <p>Some intro text about Bar.</p>\n",
" <h3>Bar subsection 1</h3>\n",
" <p>Some text about the first subtopic of Bar.</p>\n",
" <h3>Bar subsection 2</h3>\n",
" <p>Some text about the second subtopic of Bar.</p>\n",
" </div>\n",
" <div>\n",
" <h2>Baz</h2>\n",
" <p>Some text about Baz</p>\n",
" </div>\n",
" <br>\n",
" <p>Some concluding text about Foo</p>\n",
" </div>\n",
" </body>\n",
" </html>\n",
"\"\"\"\n",
"\n",
"headers_to_split_on = [\n",
" (\"h1\", \"Header 1\"),\n",
" (\"h2\", \"Header 2\"),\n",
" (\"h3\", \"Header 3\"),\n",
" (\"h4\", \"Header 4\"),\n",
"]\n",
"\n",
"html_splitter = HTMLSectionSplitter(headers_to_split_on=headers_to_split_on)\n",
"\n",
"html_header_splits = html_splitter.split_text(html_string)\n",
"\n",
"chunk_size = 500\n",
"chunk_overlap = 30\n",
"text_splitter = RecursiveCharacterTextSplitter(\n",
" chunk_size=chunk_size, chunk_overlap=chunk_overlap\n",
")\n",
"\n",
"# Split\n",
"splits = text_splitter.split_documents(html_header_splits)\n",
"splits"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
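One behavior the notebook does not exercise is the font-size handling described above: with the default stylesheet, elements styled with an inline font size above the threshold are rewritten as `h1` headers before sections are detected. A minimal sketch (not part of this PR's notebook, and assuming the default `converting_to_header.xslt` ships with the installed package):

```python
from langchain.text_splitter import HTMLSectionSplitter

html_string = """
    <html><body>
        <span style="font-size:22px">Introduction</span>
        <p>This paragraph should land in the "Introduction" section,
        because the large span above is converted to an h1 header
        by the default XSLT before sections are detected.</p>
    </body></html>
"""

html_splitter = HTMLSectionSplitter(headers_to_split_on=[("h1", "Header 1")])
html_splitter.split_text(html_string)
```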
29 changes: 29 additions & 0 deletions libs/langchain/langchain/document_transformers/xsl/converting_to_header.xslt
@@ -0,0 +1,29 @@
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<!-- Copy all nodes and attributes by default -->
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>

<!-- Match any element that has a font-size attribute larger than 20px -->
<xsl:template match="*[@style[contains(., 'font-size')]]">
<!-- Extract the font size value from the style attribute -->
<xsl:variable name="font-size" select="substring-before(substring-after(@style, 'font-size:'), 'px')" />
<!-- Check if the font size is larger than 20 -->
<xsl:choose>
<xsl:when test="$font-size > 20">
<!-- Replace the element with a header tag -->
<h1>
<xsl:apply-templates select="@*|node()"/>
</h1>
</xsl:when>
<xsl:otherwise>
<!-- Keep the original element -->
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>
163 changes: 160 additions & 3 deletions libs/langchain/langchain/text_splitter.py
@@ -23,6 +23,7 @@

import copy
import logging
import os
import pathlib
import re
from abc import ABC, abstractmethod
@@ -598,9 +599,7 @@ def split_text_from_file(self, file: Any) -> List[Document]:
"Unable to import lxml, please install with `pip install lxml`."
) from e
# use lxml library to parse html document and return xml ElementTree
- # Explicitly encoding in utf-8 allows non-English
- # html files to be processed without garbled characters
- parser = etree.HTMLParser(encoding="utf-8")
+ parser = etree.HTMLParser()
tree = etree.parse(file, parser)

# document transformation for "structure-aware" chunking is handled with xsl.
@@ -1486,3 +1485,161 @@ def __init__(self, **kwargs: Any) -> None:
"""Initialize a LatexTextSplitter."""
separators = self.get_separators_for_language(Language.LATEX)
super().__init__(separators=separators, **kwargs)


class HTMLSectionSplitter:
"""
Split HTML files based on specified tags and font sizes.
Requires the lxml package.
"""

def __init__(
self,
headers_to_split_on: List[Tuple[str, str]],
xslt_path: str = "document_transformers/xsl/converting_to_header.xslt",
**kwargs: Any,
) -> None:
"""Create a new HTMLSectionSplitter.

Args:
headers_to_split_on: list of tuples of headers we want to track, mapped to
(arbitrary) keys for metadata. Allowed header values: h1, h2, h3, h4,
h5, h6, e.g. [("h1", "Header 1"), ("h2", "Header 2")].
xslt_path: path to an XSLT file for document transformation.
Needed for HTML content that uses a different format or layout.
"""
self.headers_to_split_on = dict(headers_to_split_on)
self.xslt_path = xslt_path
self.kwargs = kwargs

def split_documents(self, documents: Iterable[Document]) -> List[Document]:
"""Split documents."""
texts, metadatas = [], []
for doc in documents:
texts.append(doc.page_content)
metadatas.append(doc.metadata)
results = self.create_documents(texts, metadatas=metadatas)

text_splitter = RecursiveCharacterTextSplitter(**self.kwargs)

return text_splitter.split_documents(results)

def split_text(self, text: str) -> List[Document]:
"""Split HTML text string

Args:
text: HTML text
"""
return self.split_text_from_file(StringIO(text))

def create_documents(
self, texts: List[str], metadatas: Optional[List[dict]] = None
) -> List[Document]:
"""Create documents from a list of texts."""
_metadatas = metadatas or [{}] * len(texts)
documents = []
for i, text in enumerate(texts):
for chunk in self.split_text(text):
metadata = copy.deepcopy(_metadatas[i])

for key in chunk.metadata.keys():
if chunk.metadata[key] == "#TITLE#":
chunk.metadata[key] = metadata["Title"]
metadata = {**metadata, **chunk.metadata}
new_doc = Document(page_content=chunk.page_content, metadata=metadata)
documents.append(new_doc)
return documents

def split_html_by_headers(
self, html_doc: str
) -> Dict[str, Dict[str, Optional[str]]]:
try:
from bs4 import BeautifulSoup, PageElement
except ImportError as e:
raise ImportError(
"Unable to import BeautifulSoup/PageElement, "
"please install with `pip install bs4`."
) from e

soup = BeautifulSoup(html_doc, "html.parser")
headers = list(self.headers_to_split_on.keys())
sections: Dict[str, Dict[str, Optional[str]]] = {}
section_content: List[str] = []
current_header: str = ""
current_header_tag = None

headers = soup.find_all(["body"] + headers)
# The first match is the <body> tag itself: its text up to the first real
# header becomes a pseudo-section keyed "#TITLE#", later resolved in
# create_documents.

for i, header in enumerate(headers):
header_element: PageElement = header
if i == 0:
current_header = "#TITLE#"
current_header_tag = "h1"
section_content = []
else:
current_header = header_element.text.strip()
current_header_tag = header_element.name
section_content = []
for element in header_element.next_elements:
if i + 1 < len(headers) and element == headers[i + 1]:
break
if isinstance(element, str):
section_content.append(element)
content = " ".join(section_content).strip()

if content != "":
sections[current_header] = {
"content": content,
"tag_name": current_header_tag,
}

return sections

def convert_possible_tags_to_header(self, html_content: str) -> str:
if self.xslt_path is None:
return html_content

try:
from lxml import etree
except ImportError as e:
raise ImportError(
"Unable to import lxml, please install with `pip install lxml`."
) from e
# use lxml library to parse html document and return xml ElementTree
parser = etree.HTMLParser()
tree = etree.parse(StringIO(html_content), parser)

# document transformation for "structure-aware" chunking is handled with xsl.
# this is needed for HTML files that use different font sizes and layouts
# check whether self.xslt_path is a relative or an absolute path
if not os.path.isabs(self.xslt_path):
xslt_path = pathlib.Path(__file__).parent / self.xslt_path
else:
xslt_path = pathlib.Path(self.xslt_path)

xslt_tree = etree.parse(xslt_path)
transform = etree.XSLT(xslt_tree)
result = transform(tree)
return str(result)

def split_text_from_file(self, file: Any) -> List[Document]:
"""Split HTML file

Args:
file: HTML file
"""
file_content = file.getvalue()
file_content = self.convert_possible_tags_to_header(file_content)
sections = self.split_html_by_headers(file_content)

return [
Document(
page_content=sections[section_key]["content"],
metadata={
self.headers_to_split_on[
str(sections[section_key]["tag_name"])
]: section_key
},
)
for section_key in sections.keys()
]