Notes on 《python网络爬虫入门实践》: Chapter 3, Static Web Page Scraping (Part 2). Example: scraping the Douban Movie Top 250.

import requests
from bs4 import BeautifulSoup


def get_movies():
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)'
                      ' Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.44',
        'Host': 'movie.douban.com'}
    movie_list = []
    for i in range(0, 10):
        # Build the URL of each of the 10 pages; every page lists 25 movies,
        # so the start offset is i * 25.
        link = "https://movie.douban.com/top250?start=" + str(i * 25) + "&filter="
        r = requests.get(link, headers=headers)
        print("Page", i + 1, "response status code:", r.status_code)
        soup = BeautifulSoup(r.text, "html.parser")
        # Each title sits in the first <span> inside the <a> of a div with class "hd".
        div_list = soup.find_all("div", class_="hd")
        for each in div_list:
            movie = each.a.span.get_text()
            movie_list.append(movie)
    return movie_list


movies = get_movies()
print(movies)
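
The example above only prints the list. As a small follow-up sketch (not from the book), the snippet below reuses the get_movies() defined above and writes the titles to a plain-text file, one per line; the file name top250.txt and the rank formatting are arbitrary choices for illustration. When scraping many pages it is also common to add a short time.sleep() between requests so the crawler does not hit the server too quickly.

# Sketch: persist the scraped titles (reuses get_movies() from above).
# "top250.txt" is just an illustrative file name.
movies = get_movies()
with open("top250.txt", "w", encoding="utf-8") as f:
    for rank, title in enumerate(movies, start=1):
        f.write(f"{rank:>3}  {title}\n")
print("Saved", len(movies), "titles to top250.txt")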