Basic usage of BeautifulSoup in Python

  Official documentation: https://www.crummy.com/software/BeautifulSoup/bs4/doc/

  Reference: https://www.cnblogs.com/yupeng/p/3362031.html

  

  What is BeautifulSoup?

    BeautifulSoup is an HTML/XML parser written in Python. It copes well with malformed markup and still builds a parse tree from it, and it provides simple, commonly used operations for navigating, searching, and modifying that tree.
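As a quick illustration of the "copes well with malformed markup" claim, here is a minimal, made-up example. It uses Python's built-in html.parser (so only bs4 itself is required); the broken input string is hypothetical. Both tags are left unclosed, yet BeautifulSoup still closes them and produces a navigable tree:

```python
from bs4 import BeautifulSoup

# Deliberately broken markup: neither <p> nor <b> is ever closed.
messy = "<p>One <b>two"
soup = BeautifulSoup(messy, "html.parser")

# BeautifulSoup closes the dangling tags, so navigation works
# exactly as it would on valid HTML.
print(soup.p.get_text())   # One two
print(soup.b.get_text())   # two
```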

  The following test example briefly demonstrates how BeautifulSoup is used.

  

from bs4 import BeautifulSoup

def beautifulsoup_test():
    html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
<div class="text" id="div1">Test</div>
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
    # soup is the parse tree BeautifulSoup builds from the string
    soup = BeautifulSoup(html_doc, 'lxml')

    print(soup.title)
    # Output: <title>The Dormouse's story</title>

    # soup.p gives only the first p tag in the document; to get all of them,
    # use find_all, which returns a sequence you can loop over.
    print(soup.p)
    print(soup.find_all('p'))
    print(soup.find(id='link3'))

    # get_text() returns the text content; it works on any tag in the tree
    print(soup.get_text())

    # Get the href and id of every a tag
    aitems = soup.find_all('a')
    for item in aitems:
        print(item["href"], item["id"])

    # 1. Search by CSS class
    print(soup.find_all("a", class_="sister"))
    # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
    # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
    # <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
    print(soup.select("p.title"))
    # Output: [<p class="title"><b>The Dormouse's story</b></p>]

    # 2. Search by attribute
    print(soup.find_all("a", attrs={"class": "sister"}))
    # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
    # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
    # <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

    # 3. Search by text (newer Beautiful Soup versions also accept string=)
    print(soup.find_all(text="Elsie"))
    # Output: ['Elsie']
    print(soup.find_all(text=["Tillie", "Elsie", "Lacie"]))
    # Output: ['Elsie', 'Lacie', 'Tillie']

    # 4. Limit the number of results
    print(soup.find_all("a", limit=2))
    # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
    # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
    print(soup.find_all(id="link2"))
    # Output: [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
    print(soup.find_all(id=True))  # every tag that has an id attribute
    # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
    # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
    # <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>,
    # <div class="text" id="div1">Test</div>]
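The introduction above also mentions modifying the parse tree, which the example does not cover. Here is a minimal sketch of that, using a small made-up fragment and the built-in html.parser; the new href value and the added <b> tag are purely illustrative:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(
    '<p class="story">Once upon a time <a id="link1">Elsie</a></p>',
    "html.parser")

# Change a tag's text and attributes in place.
link = soup.find(id="link1")
link.string = "Alice"
link["href"] = "http://example.com/alice"

# Create a brand-new tag and attach it to the tree.
new_b = soup.new_tag("b")
new_b.string = "The End"
soup.p.append(new_b)

print(soup)
```

Serializing the modified soup (str(soup) or soup.prettify()) then reflects all of these edits: the link text reads "Alice", the new href attribute is present, and the <b>The End</b> tag appears at the end of the paragraph.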