Crawling dushu.com with CrawlSpider

A brief introduction to CrawlSpider

CrawlSpider extends the basic spider with a set of rules: each rule uses a LinkExtractor to pull every link matching a pattern out of a response, hands the linked pages to a callback, and keeps following further links, so pagination is crawled without hand-written follow-up requests.

1. Create the dushu project

scrapy startproject dushu
cd dushu/dushu/spiders
scrapy genspider -t crawl ds www.dushu.com

The -t crawl flag generates the spider from the CrawlSpider template rather than the default Spider one.

2. Link extraction rules

Each Rule pairs a LinkExtractor with a callback and a follow flag; the allow regex below matches the paginated list URLs (with the dot escaped so it only matches a literal "."):

Rule(LinkExtractor(allow=r'/book/1163_\d+\.html'), callback='parse_item', follow=True)

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class DsSpider(CrawlSpider):
    name = 'ds'
    allowed_domains = ['www.dushu.com']
    start_urls = ['https://www.dushu.com/book/1163_1.html']
    rules = (
        # Follow every /book/1163_<n>.html pagination link and parse
        # each page it leads to with parse_item.
        Rule(LinkExtractor(allow=r'/book/1163_\d+\.html'), callback='parse_item', follow=True),
    )
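
One thing to keep in mind: a Rule's callback only runs on links the LinkExtractor finds, not on the start URL itself; page 1 still gets scraped here because later pages link back to it. You can also sanity-check the allow pattern before running the spider. A minimal sketch in scrapy shell, using the start page above:

scrapy shell https://www.dushu.com/book/1163_1.html
>>> from scrapy.linkextractors import LinkExtractor
>>> links = LinkExtractor(allow=r'/book/1163_\d+\.html').extract_links(response)
>>> [link.url for link in links][:3]   # first few matching pagination URLs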

3. Modify the parse_item method to parse the data

def parse_item(self, response):
    div_list = response.xpath('//div[@class="bookslist"]/ul/li/div')
    for div in div_list:
        # Build a fresh dict per book; reusing one dict across iterations
        # would make every yielded item point at the same object.
        item = {}
        # The covers are lazy-loaded, so the real image URL is in
        # data-original rather than src.
        item['src'] = div.xpath('./div/a/img/@data-original').extract_first()
        item['alt'] = div.xpath('./div/a/img/@alt').extract_first()
        # The author is sometimes a link and sometimes bare text, so the
        # XPath union covers both layouts.
        item['author'] = div.xpath('./p[1]/a[1]/text()|./p[1]/text()').extract_first()
        yield item
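
The spider yields plain dicts, which works fine. If you prefer scrapy Item objects, a minimal sketch of items.py (the DushuItem name and fields are chosen here to match the keys above):

import scrapy

class DushuItem(scrapy.Item):
    src = scrapy.Field()     # cover image URL
    alt = scrapy.Field()     # book title (the img alt text)
    author = scrapy.Field()  # author line

dict(item) works on both dicts and Items, which keeps the pipeline below unchanged.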

4. Modify pipelines.py to write the data

import json

class DushuPipeline(object):
    def open_spider(self, spider):
        self.fp = open('dushu.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # str(item) would write Python repr, not JSON, so serialize properly:
        # one JSON object per line, with Chinese text kept readable.
        self.fp.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.fp.close()
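
A pipeline does nothing until it is enabled in settings.py. Scrapy's generated settings file already contains this block commented out; uncomment it (300 is the pipeline's priority, lower numbers run first):

ITEM_PIPELINES = {
    'dushu.pipelines.DushuPipeline': 300,
}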

5. Set the User-Agent and whether to obey robots.txt

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/4.0 (compatible; MSIE 6.0; AOL 9.0; Windows NT 5.0)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
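
With robots.txt ignored, it is polite to throttle the crawl yourself. One option, sketched here with an example value, is Scrapy's built-in DOWNLOAD_DELAY setting:

# Seconds to wait between consecutive requests to the same site.
DOWNLOAD_DELAY = 1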

6. Run the Scrapy project

scrapy crawl ds

Run this from inside the project (any directory under the one containing scrapy.cfg); when the crawl finishes, dushu.json contains one JSON object per book.