How to correctly configure and validate XPath in Scrapy?

1. Introduction to the Scrapy shell

The Scrapy shell is an interactive console that lets you try out and debug your scraping code without starting a spider.

It is intended for testing data-extraction code, but you can also treat it as a regular Python console and run any Python code in it.

The shell is used to test XPath or CSS expressions, to see how they work and what data they extract from the pages you are crawling. While writing your spider, it lets you test your expressions interactively, saving you the trouble of re-running the spider after every change.

Once you are familiar with the Scrapy shell, you will find it an invaluable tool for developing and debugging spiders.


2. Installing the Scrapy shell

If you have IPython installed, the Scrapy shell will use it in place of the standard Python console. IPython is more powerful than the alternatives, offering smart auto-completion, highlighted output, and other features. Installing IPython is strongly recommended, especially if you are on a Unix system, where IPython works particularly well.

Installation reference:

http://scrapy-chs.readthedocs.io/zh_CN/0.24/topics/shell.html


Installing IPython:

pip install ipython (Python must already be installed)


3. Launching the shell: scrapy shell

Command format: scrapy shell <URL to scrape>

For example:


[root@yang testing]# scrapy shell http://www.11467.com/tianjin/dir/a05-p4.htm

2016-10-13 14:10:27 [scrapy] INFO: Scrapy 1.1.2 started (bot: scrapybot)

2016-10-13 14:10:27 [scrapy] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter'}

2016-10-13 14:10:27 [py.warnings] WARNING: :0: UserWarning: You do not have a working installation of the service_identity module: 'No module named pyasn1_modules.rfc2459'. P.


2016-10-13 14:10:27 [scrapy] INFO: Enabled extensions:

['scrapy.extensions.telnet.TelnetConsole',

'scrapy.extensions.corestats.CoreStats']

2016-10-13 14:10:27 [scrapy] INFO: Enabled downloader middlewares:

['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',

'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',

'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',

'scrapy.downloadermiddlewares.retry.RetryMiddleware',

'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',

'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',

'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',

'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',

'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',

'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',

'scrapy.downloadermiddlewares.stats.DownloaderStats']

2016-10-13 14:10:27 [scrapy] INFO: Enabled spider middlewares:

['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',

'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',

'scrapy.spidermiddlewares.referer.RefererMiddleware',

'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',

'scrapy.spidermiddlewares.depth.DepthMiddleware']

2016-10-13 14:10:27 [scrapy] INFO: Enabled item pipelines:

[]

2016-10-13 14:10:27 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6031

2016-10-13 14:10:27 [scrapy] INFO: Spider opened

2016-10-13 14:10:27 [scrapy] DEBUG: Crawled (200) <GET http://www.11467.com/tianjin/dir/a05-p4.htm> (referer: None)

2016-10-13 14:10:28 [traitlets] DEBUG: Using default logger

2016-10-13 14:10:28 [traitlets] DEBUG: Using default logger

[s] Available Scrapy objects:

[s] crawler <scrapy.crawler.Crawler object at 0x2e8d9d0>

[s] item {}

[s] request <GET http://www.11467.com/tianjin/dir/a05-p4.htm>

[s] response <200 http://www.11467.com/tianjin/dir/a05-p4.htm>

[s] settings <scrapy.settings.Settings object at 0x2e8d950>

[s] spider <DefaultSpider 'default' at 0x331ed90>

[s] Useful shortcuts:

[s] shelp() Shell help (print this help)

[s] fetch(req_or_url) Fetch request (or URL) and update local objects

[s] view(response) View response in a browser

In [1]:


Validate the XPath: if the expression is correct, it produces output like the following; otherwise there is no output. (Note the ScrapyDeprecationWarning in the log below: the sel shortcut is deprecated, and newer Scrapy versions should use response.xpath() instead.)


In [1]: sel.xpath('//ul[@id="dirlist"]/li/dl/dt/h4/a/text()')

2016-10-13 14:12:46 [py.warnings] WARNING: shell:1: ScrapyDeprecationWarning: "sel" shortcut is deprecated. Use "response.xpath()", "response.css()" or "response.selector" instead


Out[1]:

[<Selector xpath='//ul[@id="dirlist"]/li/dl/dt/h4/a/text()' data=u'\u5929\u6d25\u5e02\u4e1c\u4e3d\u533a\u755c\u7267\u6c34\u4ea7\u5c40\u755c\u79bd\u826f\u79cd\u573a'>,

<Selector xpath='//ul[@id="dirlist"]/li/dl/dt/h4/a/text()' data=u'\u5929\u6d25\u5e02\u4e1c\u4e3d\u533a\u4e61\u9547\u755c\u7267\u517d\u533b\u7ba1\u7406\u7ad9'>,

<Selector xpath='//ul[@id="dirlist"]/li/dl/dt/h4/a/text()' data=u'\u5929\u6d25\u5e02\u4e1c\u4e3d\u533a\u8352\u8349\u5768\u4e61\u517d\u533b\u7ad9'>,

<Selector xpath='//ul[@id="dirlist"]/li/dl/dt/h4/a/text()' data=u'\u5929\u6d25\u5e02\u4e1c\u4e3d\u533a\u4e48\u516d\u6865\u56de\u65cf\u4e61\u519c\u4e1a\u516c\u53f8\u66d9\u5149\u79cd\u7f8a\u573a'>,

<Selector xpath='//ul[@id="dirlist"]/li/dl/dt/h4/a/text()' data=u'\u5929\u6d25\u5e02\u5927\u6e2f\u533a\u755c\u79bd\u826f\u79cd\u573a'>,

<Selector xpath='//ul[@id="dirlist"]/li/dl/dt/h4/a/text()' data=u'\u5929\u6d25\u5e02\u5927\u6e2f\u533a\u519c\u6797\u755c\u7267\u5c40\u755c\u7267\u517d\u533b\u7ad9\uff08\u5929\u6d25\u5e02\u5927\u6e2f\u533a\u52a8\u7269\u68c0\u75ab\u7ad9\uff09'>,
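The data fields above are Python 2 unicode-escape representations of Chinese company names, which is how the shell of that era printed non-ASCII text. A quick sketch (plain Python, no Scrapy required) confirming that the first selector's escape sequence decodes to readable text:

```python
# The \uXXXX escapes from the first Selector's data, copied verbatim.
# In Python 3, a str literal containing \u escapes is simply the decoded string.
escaped = "\u5929\u6d25\u5e02\u4e1c\u4e3d\u533a\u755c\u7267\u6c34\u4ea7\u5c40\u755c\u79bd\u826f\u79cd\u573a"
print(escaped)  # 天津市东丽区畜牧水产局畜禽良种场
```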


4. Deriving the XPath expression

sel.xpath('//ul[@id="dirlist"]/li/dl/dt/h4/a/text()') breaks down as follows (the original post illustrated this with a screenshot of the page):

//ul[@id="dirlist"]: selects the <ul> covering the entire left-hand directory listing;

/li/dl/dt/h4/a/text(): drills down to the text of each company-name link.
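The structure of this expression can also be sanity-checked offline against a small HTML fragment. The sketch below uses the standard library's xml.etree.ElementTree, whose limited XPath subset happens to cover this path (Scrapy itself uses lxml-based selectors; the fragment and company names here are simplified stand-ins for the real page):

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical stand-in for the page's left-hand directory listing.
html = """
<html><body>
  <ul id="dirlist">
    <li><dl><dt><h4><a href="/co/1">Company A</a></h4></dt></dl></li>
    <li><dl><dt><h4><a href="/co/2">Company B</a></h4></dt></dl></li>
  </ul>
</body></html>
"""

root = ET.fromstring(html.strip())
# ElementTree's XPath subset matches the <a> elements with the same path;
# reading .text then plays the role of the trailing text() step.
names = [a.text for a in root.findall(".//ul[@id='dirlist']/li/dl/dt/h4/a")]
print(names)  # ['Company A', 'Company B']
```

If the path is wrong at any step (for example, a typo in the id), the list comes back empty, which mirrors the "no output" behavior described above for the Scrapy shell.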

5. Verifying the scraped results

Scrape targets:

"http://www.11467.com/tianjin/dir/a.htm",


.........

"http://www.11467.com/tianjin/dir/s.htm"


Number of valid companies collected: 168,510.

[root@yan scrapy_tianjin02]# cat tianjin1013.txt | wc -l

168510

[root@yan scrapy_tianjin02]# tail -f tianjin1013.txt


足千里(天津)科技有限公司

最美花开(天津)网络科技有限公司

醉潮尚(天津)餐饮管理有限公司

遵化市珍珠甘栗食品有限公司天津分公司

遵化燕山塔陵有限公司天津办事处

遵义神龙医药保健有限责任公司驻天津办事处
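The wc -l check above can be reproduced in plain Python as well. A minimal sketch, using a small temporary file of sample names in place of the real tianjin1013.txt:

```python
import os
import tempfile

# Build a tiny stand-in results file (one company name per line),
# playing the role of tianjin1013.txt from the session above.
names = [
    "足千里(天津)科技有限公司",
    "最美花开(天津)网络科技有限公司",
    "醉潮尚(天津)餐饮管理有限公司",
]
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False, encoding="utf-8") as f:
    f.write("\n".join(names) + "\n")
    path = f.name

# Equivalent of `cat file | wc -l`: count the lines in the file.
line_count = sum(1 for _ in open(path, encoding="utf-8"))
print(line_count)  # 3

os.unlink(path)
```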


6. Tools used

1) Browser: Firefox

2) Add-ons: FirePath, Firebug

3) Add-on usage guide and download reference:

http://blog.csdn.net/qiyueqinglian/article/details/49280221
