Data Collection and Fusion Technology - Lab 3

  • Assignment ①:

    1) Crawling images from the China Weather website

– Requirement: pick a website and crawl all of the images on it, for example the China Weather site (http://www.weather.com.cn).
– Crawl the images with both a single-threaded and a multi-threaded approach. (The number of images to crawl is limited to the last three digits of the student ID.)
– Output: print the URL of every downloaded image to the console, save the downloaded images in an images subfolder, and provide screenshots.

Process (single-threaded):
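The snippets below rely on some module-level setup (imports, the request headers, and the shared counter) that is not shown in the post. A minimal sketch of what they assume, with the header value being my own guess, looks like this:

import urllib.request
import threading
from bs4 import BeautifulSoup, UnicodeDammit

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"  # any browser-like UA string
}
count = 0  # number of images downloaded so far, shared by the functions below
start_url = "http://www.weather.com.cn"
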
1. Send a request to the start page and collect the links of the pages that contain images:

def get_url(start_url):
    # request the start page and walk through every <a> link on it
    req = urllib.request.Request(start_url, headers=headers)
    data = urllib.request.urlopen(req)
    data = data.read()
    dammit = UnicodeDammit(data, ["utf-8", "gbk"])
    data = dammit.unicode_markup
    soup = BeautifulSoup(data, "lxml")
    urls = soup.select("a")
    i = 0
    for a in urls:
        href = a.get("href")
        if not href:  # skip anchors that have no href attribute
            continue
        imageSpider(href, i + 1)
        i = i + 1
        if count > 110:  # stop after 110 images (last three digits of the student ID)
            break

2. Collect the download links of every image on that page and download them locally:

def imageSpider(start_url, cous):
    try:
        urls = []
        req = urllib.request.Request(start_url, headers=headers)
        data = urllib.request.urlopen(req)
        data = data.read()
        dammit = UnicodeDammit(data, ["utf-8", "gbk"])
        data = dammit.unicode_markup
        soup = BeautifulSoup(data, "lxml")
        images = soup.select("img")
        for image in images:
            try:
                if count > 110:  # global download counter, incremented in download()
                    break
                src = image["src"]
                url = urllib.request.urljoin(start_url, src)  # resolve relative image paths
                if url not in urls:  # de-duplicate within this page
                    urls.append(url)
                    print(url)
                    download(url, cous)
            except Exception as err:
                print(err)
    except Exception as err:
        print(err)

3. Function that downloads one image to the target path:

def download(url, cous):
    global count
    try:
        count = count + 1
        # extract the file extension (only if the URL ends with a 3-letter extension)
        if url[len(url) - 4] == ".":
            ext = url[len(url) - 4:]
        else:
            ext = ""
        req = urllib.request.Request(url, headers=headers)
        data = urllib.request.urlopen(req, timeout=100)
        data = data.read()
        path = r"C:\Users\黄杜恩\PycharmProjects\pythonProject3\images\\" + "第" + str(count) + "张" + ".jpg"  # target download path
        with open(path, 'wb') as f:
            f.write(data)
        print("downloaded " + str(cous) + "页" + str(count) + ext)
    except Exception as err:
        print(err)
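
For completeness, a minimal way to drive the single-threaded version (its main block is not shown in the post) is simply:

get_url(start_url)
print("finished, total images downloaded:", count)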

4. Console output:
(screenshot)

5. Downloaded images:
(screenshot)

6. Code: https://gitee.com/huang-dunn/crawl_project/blob/master/实验三作业1/project_three_test1_1.py

Process (multi-threaded):
1. Modify the relevant part of the single-threaded code:

def imageSpider(start_url, cous):
    global threads
    global count
    try:
        urls = []
        req = urllib.request.Request(start_url, headers=headers)
        data = urllib.request.urlopen(req)
        data = data.read()
        dammit = UnicodeDammit(data, ["utf-8", "gbk"])
        data = dammit.unicode_markup
        soup = BeautifulSoup(data, "lxml")
        images = soup.select("img")
        for image in images:
            try:
                if count >= 110:
                    break
                src = image["src"]
                url = urllib.request.urljoin(start_url, src)
                if url not in urls:
                    urls.append(url)
                    count = count+1

                    T = threading.Thread(target=download, args=(url, cous, count))
                    T.daemon = False  # non-daemon thread, so the main program can join() it
                    T.start()
                    threads.append(T)
            except Exception as err:
                print(err)
    except Exception as err:
        print(err)
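
Because the thread target now receives three arguments (url, cous, count), the download function from the single-threaded version needs a matching adjustment. A sketch, keeping the original naming and writing to a relative images folder, could look like this:

def download(url, cous, cou):
    # cou is the image number assigned by imageSpider before the thread started,
    # so the global counter is no longer touched here
    try:
        # extract the file extension (only if the URL ends with a 3-letter extension)
        if url[len(url) - 4] == ".":
            ext = url[len(url) - 4:]
        else:
            ext = ""
        req = urllib.request.Request(url, headers=headers)
        data = urllib.request.urlopen(req, timeout=100)
        data = data.read()
        path = "images/" + "第" + str(cou) + "张" + ".jpg"  # target download path, adjust as needed
        with open(path, 'wb') as f:
            f.write(data)
        print("downloaded " + str(cous) + "页" + str(cou) + ext)
    except Exception as err:
        print(err)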

Add the following to the main block (note that the threads list has to be created before get_url is called, because imageSpider appends every started thread to it):

threads = []
get_url(start_url)
for t in threads:
    t.join()

2. Run output:
(screenshot)

3. Code: https://gitee.com/huang-dunn/crawl_project/blob/master/实验三作业1/project_three_test1_2.py

2) Takeaways: this assignment deepened my understanding of how to write multi-threaded image crawlers.

  • Assignment ②

    1) Reproducing Assignment ① with the Scrapy framework

– Requirement: reproduce Assignment ① using the Scrapy framework.
– Output: same as Assignment ①.

Process:
1. Write the item class:

class Pro3Test2Item(scrapy.Item):
    data = scrapy.Field()  # raw image bytes
    count = scrapy.Field()  # running image number
    ext = scrapy.Field()  # file extension
    url = scrapy.Field()  # image URL
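
For the pipeline below to receive these items it has to be enabled in the project's settings.py. A minimal sketch, assuming the project is named pro3_test2, is:

# settings.py (sketch)
BOT_NAME = "pro3_test2"
ROBOTSTXT_OBEY = False  # often relaxed in exercises like this one
ITEM_PIPELINES = {
    "pro3_test2.pipelines.Pro3Test2Pipeline": 300,
}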

2. Write the spider class:

class Test2Spider(scrapy.Spider):
    name = 'pic_test'
    global count
    count = 1

    # allowed_domains = ['XXX.com']
    # start_urls = ['http://www.weather.com.cn/']
    def start_requests(self):
        yield scrapy.Request(url='http://www.weather.com.cn', callback=self.parse)

    def parse(self, response):
        href_list = response.xpath("//a/@href")  # collect the links of sub-pages from the start page
        for href in href_list:
            # print(href.extract())
            H = str(href.extract())
            if count > PIC_LIMIT:
                return
            if len(H) > 0 and H[0] == 'h':
                yield scrapy.Request(url=href.extract(), callback=self.parse1)

    def parse1(self, response):
        a_list = response.xpath("//img/@src")  # collect the image download links
        for a in a_list:
            if count > PIC_LIMIT:
                return
            # print(a.extract())
            url = urllib.request.urljoin(response.url, a.extract())
            # print(url)
            yield scrapy.Request(url=url, callback=self.parse2)

    def parse2(self, response):
        global count
        count += 1
        if count > PIC_LIMIT:
            return
        item = Pro3Test2Item()
        item["ext"] = response.url[-4:]
        item["data"] = response.body
        item["count"] = count
        item["url"] = response.url
        return item
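
PIC_LIMIT is not defined inside the class; the snippet presumably relies on module-level definitions along these lines (the item import is a hypothetical relative import that depends on the actual project layout):

import urllib.request  # used by urljoin in parse1
import scrapy
from ..items import Pro3Test2Item  # hypothetical relative import

PIC_LIMIT = 110  # cap on the number of images (last three digits of the student ID)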

3. Write the pipeline class:

class Pro3Test2Pipeline:
    def process_item(self, item, spider):
        path = "D:/py_download/" + "第" + str(item["count"]) + "张" + item["ext"]  # target download path
        with open(path, 'wb') as f:
            f.write(item["data"])
        print("downloaded " + str(item["count"]) + "张" + item["ext"] + " 图片链接:" + item["url"])
        return item
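
With the spider and pipeline in place, the crawl is started from the project directory with scrapy crawl pic_test (the value of the spider's name attribute).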

4. Console output:
(screenshot)

5. Downloaded images:
(screenshot)

6. Code: https://gitee.com/huang-dunn/crawl_project/tree/master/实验三作业2

2) Takeaways: I became more fluent with the Scrapy framework and gained a deeper understanding of XPath matching.

  • Assignment ③

    1) Crawling Douban movie data

– Requirement: use Scrapy and XPath to crawl Douban movie data, store the results in a database, and save the cover images under the imgs directory.
– Candidate site: https://movie.douban.com/top250
– Output:

No.  Title         Director         Actors        Synopsis   Rating  Cover
1    肖申克的救赎   弗兰克·德拉邦特   蒂姆·罗宾斯    希望让人*   9.7     ./imgs/xsk.jpg
2    ...

Process:
1. Write the item class:

class Pro3Test3Item(scrapy.Item):
    no = scrapy.Field()  # serial number
    name = scrapy.Field()  # movie title
    director = scrapy.Field()  # director
    actor = scrapy.Field()  # actors
    grade = scrapy.Field()  # rating
    url = scrapy.Field()  # cover image URL
    inf = scrapy.Field()  # synopsis

2. Write the spider class:

class MovieSpider(scrapy.Spider):
    name = 'movie'

    # allowed_domains = ['XXX.com']
    # start_urls = ['http://XXX.com/']

    def start_requests(self):
        for i in range(0, 10):  # Top250 is paginated into 10 pages of 25 movies each
            yield scrapy.Request(url='https://movie.douban.com/top250?start=' + str(i * 25) + '&filter=',
                                 callback=self.parse)

    def parse(self, response):
        li_list = response.xpath('//*[@id="content"]/div/div[1]/ol/li')  # one <li> per movie entry
        for li in li_list:
            item = Pro3Test3Item()  # create a fresh item for every movie
            item["no"] = li.xpath('./div/div[1]/em/text()').extract_first().strip()
            item["name"] = li.xpath('./div/div[2]/div[1]/a/span[1]/text()').extract_first()
            # the director/actor line, e.g. "导演: ...   主演: ..."
            temp_ = li.xpath('./div/div[2]/div[2]/p[1]/text()[1]').extract_first().split("   ")[9]
            temp = temp_.split(":")
            item["director"] = temp[1].split("   ")[0]
            if len(temp) > 2:
                item["actor"] = temp[2]
            else:
                item["actor"] = 'None'
            item["grade"] = li.xpath('./div/div[2]/div[2]/div/span[2]//text()').extract_first()
            item["inf"] = li.xpath('./div/div[2]/div[2]/p[2]/span/text()').extract_first()
            if item["inf"] == '':
                item["inf"] = 'None'
            item["url"] = li.xpath('./div/div[1]/a/img/@src').extract_first()
            print(item["no"], item["name"], item["director"], item["grade"], item["inf"])
            yield item
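
Douban generally refuses requests that carry Scrapy's default user agent, so the project settings usually need at least a browser-like UA in addition to the pipeline registration. A sketch, with the project name pro3_test3 and the UA string as assumptions:

# settings.py (sketch)
BOT_NAME = "pro3_test3"
ROBOTSTXT_OBEY = False
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"  # browser-like UA
ITEM_PIPELINES = {
    "pro3_test3.pipelines.Pro3Test3Pipeline": 300,
}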

3. Write the database class:

class MovieDB:
    def __init__(self):
        self.con = sqlite3.connect("movies.db")
        self.cursor = self.con.cursor()

    def openDB(self):
        try:
            self.cursor.execute(
                "create table movies (序号 int(128),电影名称 varchar(128),导演 varchar(128),"
                "演员 varchar(128),简介 varchar(128),电影评分 varchar(128),电影封面 varchar(128),"
                "constraint pk_movies primary key (序号))")

        except:
            self.cursor.execute("delete from movies")  # table already exists, so just clear it

    def closeDB(self):
        self.con.commit()
        self.con.close()

    def insert(self, no, name, director, actor, grade, inf, image):
        try:
            self.cursor.execute("insert into movies (序号,电影名称,导演,演员,简介,电影评分,电影封面) "
                                "values (?,?,?,?,?,?,?)",
                                (int(no), name, director, actor, inf, grade, image))
        except Exception as err:
            print(err)
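
Used on its own, the class works like this (the values are taken from the expected output above; this is only an illustration):

db = MovieDB()
db.openDB()
db.insert(1, "肖申克的救赎", "弗兰克·德拉邦特", "蒂姆·罗宾斯", "9.7", "希望让人*", "./imgs/xsk.jpg")
db.closeDB()  # commit and close; the row is now persisted in movies.db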

4. Write the pipeline class:

class Pro3Test3Pipeline:
    def __init__(self):
        self.db = MovieDB()

    def open_spider(self, spider):
        self.db.openDB()

    def process_item(self, item, spider):
        data = requests.get(item['url']).content  # fetch the cover image
        path = r"D:/example/pro3_test3/pro3_test3/images/" + "第" + str(item["no"]) + "张" + ".jpg"  # target download path
        with open(path, 'wb') as f:
            f.write(data)
        print("downloaded " + str(item["no"]) + "张" + "jpg" + " 图片链接:" + item["url"])

        self.db.insert(int(item["no"]), item["name"], item["director"], item["actor"], item["grade"], item["inf"],
                       item["url"])
        return item

    def close_spider(self, spider):
        self.db.closeDB()
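
The whole pipeline is exercised by running scrapy crawl movie from the project directory: every yielded item is passed to process_item, which saves the cover image and inserts one row into movies.db.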

5. Console output:
(screenshot)

6. Downloaded cover images:
(screenshot)

7. Database contents:
(screenshot)

8. Code: https://gitee.com/huang-dunn/crawl_project/tree/master/实验3作业3

2) Takeaways: I became more proficient with Scrapy and more familiar with database operations.
