How to Efficiently Extract Text from HTML in Python
Python中文社区
4,772 words · ~10 minute read · 2021-01-15 18:39
The get_text method from the BeautifulSoup package, which uses lxml internally, is a well-tested solution, but it can be very slow when you are processing tens of thousands of HTML documents. Replace BeautifulSoup with selectolax and you get a 5-30x speedup almost for free!

Here is a simple benchmark that parses 10,000 HTML pages from commoncrawl (https://commoncrawl.org/):

# coding: utf-8
from time import time

import warc
from bs4 import BeautifulSoup
from selectolax.parser import HTMLParser


def get_text_bs(html):
    tree = BeautifulSoup(html, 'lxml')

    body = tree.body
    if body is None:
        return None

    # Drop <script> and <style> so their contents don't leak into the text.
    for tag in body.select('script'):
        tag.decompose()
    for tag in body.select('style'):
        tag.decompose()

    text = body.get_text(separator='\n')
    return text


def get_text_selectolax(html):
    tree = HTMLParser(html)

    if tree.body is None:
        return None

    for tag in tree.css('script'):
        tag.decompose()
    for tag in tree.css('style'):
        tag.decompose()

    text = tree.body.text(separator='\n')
    return text


def read_doc(record, parser=get_text_selectolax):
    url = record.url
    text = None

    if url:
        payload = record.payload.read()
        # WARC payload = HTTP headers, a blank line, then the HTML body.
        header, html = payload.split(b'\r\n\r\n', maxsplit=1)
        html = html.strip()
        if len(html) > 0:
            text = parser(html)

    return url, text


def process_warc(file_name, parser, limit=10000):
    warc_file = warc.open(file_name, 'rb')
    t0 = time()
    n_documents = 0
    for i, record in enumerate(warc_file):
        url, doc = read_doc(record, parser)
        if not doc or not url:
            continue
        n_documents += 1
        if i > limit:
            break
    warc_file.close()
    print('Parser: %s' % parser.__name__)
    print('Parsing took %s seconds and produced %s documents\n'
          % (time() - t0, n_documents))
>>> ! wget https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2018-05/segments/1516084886237.6/warc/CC-MAIN-20180116070444-20180116090444-00000.warc.gz
>>> file_name = "CC-MAIN-20180116070444-20180116090444-00000.warc.gz"
>>> process_warc(file_name, get_text_selectolax, 10000)
Parser: get_text_selectolax
Parsing took 16.170367002487183 seconds and produced 3317 documents
>>> process_warc(file_name, get_text_bs, 10000)
Parser: get_text_bs
Parsing took 432.6902508735657 seconds and produced 3283 documents
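A quick sanity check on the numbers above: dividing the two wall-clock times gives the actual speedup for this particular WARC file.

```python
# Timings reported by the two benchmark runs above.
bs_seconds = 432.6902508735657          # BeautifulSoup + lxml
selectolax_seconds = 16.170367002487183

speedup = bs_seconds / selectolax_seconds
print('selectolax is %.1fx faster' % speedup)  # prints: selectolax is 26.8x faster
```

So on this corpus the speedup sits near the top of the claimed 5-30x range.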
selectolax is sometimes up to 30x faster than lxml, and it is best suited for stripping HTML down to plain text.

Suppose I have more than 10,000 HTML snippets that need to be indexed into Elasticsearch as plain text. (Elasticsearch has an html_strip character filter, but it is not what I want/need to use in this context.) It turns out that stripping HTML to plain text at this scale is actually quite inefficient. So, what is the most efficient way?

PyQuery
from pyquery import PyQuery as pq
text = pq(html).text()
selectolax
from selectolax.parser import HTMLParser
text = HTMLParser(html).text()
Regex

import re

regex = re.compile(r'<.*?>')
text = regex.sub('', html)
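For reference, the SUM / MEAN / MEDIAN timings reported below can be produced with a harness along these lines. The article does not show its own benchmark code, so this is only a sketch, and the `fragments` list here is a hypothetical stand-in for the 10,000 real snippets.

```python
import re
import statistics
import time

regex = re.compile(r'<.*?>')

def strip_with_regex(html):
    return regex.sub('', html)

def benchmark(strip, fragments):
    """Time `strip` on each fragment; return (total s, mean ms, median ms)."""
    timings_ms = []
    for html in fragments:
        t0 = time.perf_counter()
        strip(html)
        timings_ms.append((time.perf_counter() - t0) * 1000)
    return (sum(timings_ms) / 1000,
            statistics.mean(timings_ms),
            statistics.median(timings_ms))

# Hypothetical stand-in corpus; the article uses 10,000 real HTML fragments.
fragments = ['<p>fragment <b>%d</b></p>' % i for i in range(1000)]
total_s, mean_ms, median_ms = benchmark(strip_with_regex, fragments)
```

The same harness takes any of the three strippers above as the `strip` argument.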
The test data is not complete HTML documents (with <html>, <head>, and so on), just small fragments of HTML. The average size is 10,314 bytes (median: 5,138 bytes). The results:

pyquery
SUM: 18.61 seconds
MEAN: 1.8633 ms
MEDIAN: 1.0554 ms
selectolax
SUM: 3.08 seconds
MEAN: 0.3149 ms
MEDIAN: 0.1621 ms
regex
SUM: 1.64 seconds
MEAN: 0.1613 ms
MEDIAN: 0.0881 ms
selectolax is roughly 7x faster than PyQuery here.

The regex option is fast, but weak. If your HTML is a well-formed blob, it may work just fine. But really, if the HTML is <p>Foo &amp; Bar</p>, I expect the plain-text conversion to be Foo & Bar, not Foo &amp; Bar.
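This entity behavior is easy to demonstrate with just the standard library: a bare regex strip leaves &amp; encoded, and an extra html.unescape pass is needed to decode it (a sketch of the failure mode, not a recommended pipeline):

```python
import re
from html import unescape

tag_regex = re.compile(r'<.*?>')

html = '<p>Foo &amp; Bar</p>'
stripped = tag_regex.sub('', html)
print(stripped)            # Foo &amp; Bar  <- entities survive a regex strip
print(unescape(stripped))  # Foo & Bar
```

Real HTML parsers decode entities for you as part of extracting text.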
Beyond entities, both real parsers make it easy to exclude certain elements entirely. Suppose I need to strip markup like this:

<h4 class="warning">This should get stripped.</h4>
<p>Please keep.</p>
<div style="display: none">This should also get stripped.</div>

That is, elements carrying the warning or hidden classes, plus any div whose inline style contains display: none. So let's implement that:

PyQuery
import re

from pyquery import PyQuery as pq

_display_none_regex = re.compile(r'display:\s*none')

doc = pq(html)
doc.remove('div.warning, div.hidden')
for div in doc('div[style]').items():
    style_value = div.attr('style')
    if style_value and _display_none_regex.search(style_value):
        div.remove()
text = doc.text()
selectolax
import re

from selectolax.parser import HTMLParser

_display_none_regex = re.compile(r'display:\s*none')

tree = HTMLParser(html)
for tag in tree.css('div.warning, div.hidden'):
    tag.decompose()
for tag in tree.css('div[style]'):
    style_value = tag.attributes['style']
    if style_value and _display_none_regex.search(style_value):
        tag.decompose()
text = tree.body.text()
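For comparison only, the same selective stripping can be sketched with nothing but the standard library's html.parser. This StrippingExtractor class is a hypothetical illustration, not what the article benchmarks (and a compiled parser like selectolax will be much faster); it skips any subtree whose opening tag matches the rules above.

```python
import re
from html.parser import HTMLParser

_display_none_regex = re.compile(r'display:\s*none')

class StrippingExtractor(HTMLParser):
    """Collect text, skipping subtrees that match the strip rules."""
    VOID = {'br', 'hr', 'img', 'input', 'link', 'meta'}  # tags with no close

    def __init__(self):
        super().__init__(convert_charrefs=True)  # decodes &amp; etc. for us
        self.parts = []
        self._skip_stack = []  # open tags inside the subtree being stripped

    def _should_strip(self, tag, attrs):
        attrs = dict(attrs)
        classes = (attrs.get('class') or '').split()
        style = attrs.get('style') or ''
        return (tag in ('script', 'style')
                or 'warning' in classes or 'hidden' in classes
                or bool(_display_none_regex.search(style)))

    def handle_starttag(self, tag, attrs):
        if tag in self.VOID:
            return
        # Push if we are already skipping, or if this tag starts a strip.
        if self._skip_stack or self._should_strip(tag, attrs):
            self._skip_stack.append(tag)

    def handle_endtag(self, tag):
        if self._skip_stack and self._skip_stack[-1] == tag:
            self._skip_stack.pop()

    def handle_data(self, data):
        if not self._skip_stack:
            self.parts.append(data)

def strip_html(html):
    parser = StrippingExtractor()
    parser.feed(html)
    # Collapse runs of whitespace left behind by removed tags.
    return ' '.join(''.join(parser.parts).split())
```

On the three-line example above, strip_html keeps only "Please keep." (note it matches the warning class on any tag, so it also handles the h4 in that example).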
Both implementations actually work. When I now run the same benchmark on the 10,000 fragments, the new results are:

pyquery
SUM: 21.70 seconds
MEAN: 2.1701 ms
MEDIAN: 1.3989 ms
selectolax
SUM: 3.59 seconds
MEAN: 0.3589 ms
MEDIAN: 0.2184 ms
regex
Skipped (a regex cannot selectively remove elements by class or style)
Again, selectolax beats PyQuery, this time by about 6x.

Conclusion

Regular expressions are fast but weak. selectolax's efficiency is impressive.