Scrapy project 5: crawling AJAX-loaded data and saving the images with ImagesPipeline

1. Goal analysis:

  We want to extract three things for each book:

    1). the title of each book

    2). the price of each book

    3). a short description of each book


2. Page analysis:

  Site URL: http://e.dangdang.com/list-WY1-dd_sale-0-1.html

  Whenever we scroll to the bottom of the page, more items are loaded automatically while the URL stays the same. Loading behavior of this kind is the hallmark of data fetched via AJAX.


  Step 1: capture the traffic during loading with Fiddler and look for a pattern:

  Figure 1: scroll so that the page loads data three times. The capture below shows the Fiddler traffic for those three loads; note the three points of interest marked in it, which we compare across the loads.

  [Figure 1: Fiddler capture of the three successive loads]

  Focus point 1: comparing the two captures below shows that only start and end change between loads, with end - start = 20, and each new start equal to the previous start plus 21 (the short sketch after the two captures makes the arithmetic concrete).

  First load:

  [Fiddler capture: QueryString of the first request]

  Second load:

  [Fiddler capture: QueryString of the second request]
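
  To make the arithmetic concrete, here is a tiny standalone sketch (plain Python, not part of the project) that reproduces the (start, end) pairs seen in the captures:

 start = 21                  # the first captured request uses start=21
 for _ in range(3):
     end = start + 20        # end - start == 20 in every capture
     print(start, end)       # 21 41, then 42 62, then 63 83
     start += 21             # each new start = previous start + 21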

  Focus point 2 (first load): the parameters after the first question mark of the URL boxed in red in the capture are exactly the QueryString values from focus point 1.


  The captured URL is copied below:

   http://e.dangdang.com/media/api.go?action=mediaCategoryLeaf&promotionType=1&deviceSerialNo=html5&macAddr=html5&channelType=html5&permanentId=20180603095934721824754968589520886&returnType=json&channelId=70000&clientVersionNo=5.8.4&platformSource=DDDS-P&fromPlatform=106&deviceType=pconline&token=&start=21&end=41&category=WY1&dimension=dd_sale&order=0

  Paste this URL into Chrome (with an XPath plugin installed) and the browser renders the JSON below — the same data seen at focus point 3 in the capture. This confirms that sending requests to the URLs taken from the capture returns the JSON we need; the fields we want to extract are boxed in red.

  [screenshot: the JSON response with the target fields boxed in red]
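
  Before writing the spider, it is worth checking the endpoint outside Scrapy. The sketch below uses the third-party requests library (my choice here; any HTTP client would do) with a trimmed-down parameter set copied from the captured URL above — if the server rejects the trimmed version, send the full QueryString from the capture instead. The key names (data, saleList, mediaList, title, salePrice) are the ones the spider's parse method relies on later in this post:

 import requests

 url = 'http://e.dangdang.com/media/api.go'
 # A subset of the captured QueryString, assumed sufficient for a quick test
 params = {
     'action': 'mediaCategoryLeaf',
     'returnType': 'json',
     'start': '21',
     'end': '41',
     'category': 'WY1',
     'dimension': 'dd_sale',
     'order': '0',
 }
 resp = requests.get(url, params=params)
 data = resp.json()
 # Walk down to the first book, following the structure parse() uses below
 first = data['data']['saleList'][0]['mediaList'][0]
 print(first['title'], first['salePrice'])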

3. Scrapy code:

  1). Project layout:

  Ebook/
  ├── scrapy.cfg
  └── Ebook/
      ├── items.py
      ├── pipelines.py
      ├── settings.py
      └── spiders/
          └── book.py

  (Layout reconstructed from the module paths used in the code below.)

  2). book.py:

 # -*- coding: utf-8 -*-
import scrapy
from Ebook.items import EbookItem
import json

class BookSpider(scrapy.Spider):
    name = "book"
    allowed_domains = ["dangdang.com"]
    # start_urls = ['http://e.dangdang.com/media/api.go?']

    # Override start_requests to build the paginated requests ourselves
    def start_requests(self):
        # The bare URL (without parameters) taken from the Fiddler capture
        url = 'http://e.dangdang.com/media/api.go?'
        # Request headers (User-Agent)
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0',
        }
        # The start parameter in the QueryString begins at 21
        start_num = 21
        while True:
            # The end parameter is always start + 20
            end_num = start_num + 20
            # Build the QueryString
            formdata = {
                'action': 'mediaCategoryLeaf',
                'promotionType': '',
                'deviceSerialNo': 'html5',
                'macAddr': 'html5',
                'channelType': 'html5',
                'permanentId': '',
                'returnType': 'json',
                'channelId': '',
                'clientVersionNo': '5.8.4',
                'platformSource': 'DDDS-P',
                'fromPlatform': '',
                'deviceType': 'pconline',
                'token': '',
                'start': str(start_num),
                'end': str(end_num),
                'category': 'SK',
                'dimension': 'dd_sale',
                'order': '',
            }
            # Send the request
            yield scrapy.FormRequest(url=url, formdata=formdata, headers=headers, callback=self.parse)
            # Add 21 to start to request the next page, following the observed pattern
            start_num += 21

    # parse handles the response, i.e. the JSON data we crawled
    def parse(self, response):
        data = json.loads(response.text)['data']['saleList']
        for each in data:
            item = EbookItem()
            info_dict = each['mediaList'][0]
            # Book title
            item['book_title'] = info_dict['title']
            # Book price
            item['book_price'] = info_dict['salePrice']
            # Book description
            item['book_desc'] = info_dict['descs']
            # Cover image URL
            item['book_coverPic'] = info_dict['coverPic']
            yield item
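
  The spider is run from the project root with: scrapy crawl book. One caveat: the while True loop in start_requests has no stop condition, so as written the crawl keeps requesting ever-larger start values until it is stopped manually; in practice you would break out of the loop once a response comes back with an empty saleList, or cap start_num at a known maximum.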


  3). items.py:

 # -*- coding: utf-8 -*-

 # Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy

class EbookItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # Title
    book_title = scrapy.Field()
    # Price
    book_price = scrapy.Field()
    # Description
    book_desc = scrapy.Field()
    # Cover image URL
    book_coverPic = scrapy.Field()


  4). pipelines.py. The ImagesPipeline class is documented at http://scrapy-chs.readthedocs.io/zh_CN/1.0/topics/media-pipeline.html#scrapy.pipeline.images.ImagesPipeline.item_completed

 # -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json
import os
import scrapy
from scrapy.pipelines.images import ImagesPipeline
from scrapy.utils.project import get_project_settings

# EbookImagesPipeline downloads the book cover images; it subclasses ImagesPipeline
class EbookImagesPipeline(ImagesPipeline):
    # The image storage path configured in settings.py
    IMAGES_STORE = get_project_settings().get('IMAGES_STORE')

    # Issues the download requests for the images
    def get_media_requests(self, item, info):
        # Take the image URL from the item
        image_url = item['book_coverPic']
        headers = {
            'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Mobile Safari/537.36'
        }
        # Request the image download
        yield scrapy.Request(image_url, headers=headers)

    # item_completed is called once all image requests for an item have finished
    def item_completed(self, results, item, info):
        # results is a list of tuples (success, file_info_or_error):
        # success: boolean, whether the download succeeded
        # file_info_or_error: on success, a dict with these keys:
        #   url: the URL the image was downloaded from
        #   path: the storage path of the image, relative to IMAGES_STORE
        #   checksum: the MD5 hash of the image content
        image_path = [x['path'] for ok, x in results if ok]
        # Rename the downloaded image after the book title
        os.rename(self.IMAGES_STORE + '/' + image_path[0],
                  self.IMAGES_STORE + '/' + item['book_title'] + '.jpg')
        return item

# Saves the item data to a local file
class EbookPipeline(object):
    def __init__(self):
        # Open the output file
        self.filename = open('bookinfo.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # Serialize the item to JSON
        text = json.dumps(dict(item), ensure_ascii=False) + '\n'
        # Write it to the file
        self.filename.write(text)
        return item

    def close_spider(self, spider):
        # Close the file
        self.filename.close()
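
  For reference, the results argument that item_completed receives has roughly the following shape (a hand-written illustration, not captured output; the URL, path, and checksum values are made up):

 # One (success, info) tuple per request yielded by get_media_requests
 results = [
     (True, {
         'url': 'http://img60.ddimg.cn/sample/cover.jpg',              # hypothetical image URL
         'path': 'full/0a79c461a4062ac383dc4fade7bc09f1384d3e3e.jpg',  # relative to IMAGES_STORE (made up)
         'checksum': '2b00042f7481c7b056c4b410d28f33cf',               # MD5 of the image body (made up)
     }),
 ]

  Renaming the file after download works, but a cleaner alternative in newer Scrapy versions is to override the pipeline's file_path() method so the image is saved under the book title to begin with.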


  5). settings.py: configure according to the project's needs:

 # -*- coding: utf-8 -*-

 # Scrapy settings for Ebook project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# http://doc.scrapy.org/en/latest/topics/settings.html
# http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'Ebook'

SPIDER_MODULES = ['Ebook.spiders']
NEWSPIDER_MODULE = 'Ebook.spiders'

# Image storage path -- added for this project
IMAGES_STORE = '/home/python/Desktop/01-爬虫/01-爬虫0530/ajax形式加载/Ebook/Ebook/spiders/bookimage'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0'

# Log file
LOG_FILE = '123.log'
LOG_LEVEL = 'INFO'

# Obey robots.txt rules
# ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# Download delay
DOWNLOAD_DELAY = 2.5
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'Ebook.middlewares.MyCustomSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'Ebook.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'Ebook.pipelines.EbookPipeline': 300,
    'Ebook.pipelines.EbookImagesPipeline': 350,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
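
  One dependency note: Scrapy's ImagesPipeline relies on the Pillow library for image processing, so install it (pip install Pillow) before starting the crawl.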


4. Results:

  1). The bookinfo.json file:

  [screenshot: contents of bookinfo.json]

  2). The book cover images:

  [screenshot: the downloaded cover images]

