The overall approach to simulated login is covered in an earlier blog post; this article only implements that login in Scrapy.
I previously introduced simulated login through a requests Session. There it has to be a Session, because the captcha and the xsrf token are validated through cookies written during the exchange. In Scrapy there is no need to worry about this: Request keeps everything within one session and passes cookies along automatically.
The principle is the same. Because captcha recognition is a separate problem, we start here by logging in with cookies copied from the browser.
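For comparison, this is what the requests approach relies on: a Session object persists cookies across requests, so tokens set by one response are sent back with the next request automatically. A minimal sketch (the `_xsrf` cookie name matches Zhihu's; the value here is a dummy placeholder):

```python
import requests

# A Session keeps one cookie jar for the whole conversation: cookies set
# by an earlier response (e.g. _xsrf from the login page) are attached
# automatically to every later request in the same session.
session = requests.Session()
session.cookies.set("_xsrf", "dummy-token", domain="www.zhihu.com")

# The cookie now lives in the session's jar and would be sent with any
# request to www.zhihu.com:
print(session.cookies.get("_xsrf"))  # dummy-token
```

Scrapy gives you the same behavior for free: its cookies middleware carries cookies between a Request and the Requests issued from its callbacks.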
# -*- coding: utf-8 -*-
import scrapy
import json
import re


class ZhihuSpider(scrapy.Spider):
    name = "zhihu"
    allowed_domains = ["zhihu.com"]
    start_urls = ['http://www.zhihu.com/']

    # Request headers
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36",
        "Host": "www.zhihu.com",
        "Referer": "https://www.zhihu.com/",
    }
    # Cookie values copied from an already logged-in browser (left blank here)
    cookies = {
        "d_c0": "",
        "l_cap_id": "",
        "r_cap_id": "",
        "cap_id": "",
        "_zap": "",
        "__utmc": "",
        "__utmb": "",
        "__utmv": "",
        "__utma": "",
        "__utmz": "",
        "q_c1": "",
    }
    # Entry-point request, called automatically by Scrapy; the first response
    # (which contains the xsrf token) is handed to the login callback.
    def start_requests(self):
        # The cookies must be attached. We return a plain list instead of a
        # generator: the login page is fetched only once, and start_requests
        # must return an iterable of Requests.
        return [scrapy.Request(url="https://www.zhihu.com/#signin",
                               cookies=self.cookies,
                               headers=self.headers,
                               callback=self.login)]
    # Zhihu login
    def login(self, response):
        # Extract the xsrf token with a regex
        response_text = response.text
        match_obj = re.match('.*name="_xsrf" value="(.*?)"', response_text, re.DOTALL)
        if match_obj:
            xsrf = match_obj.group(1)
            url = "https://www.zhihu.com/login/phone_num"
            data = {
                "_xsrf": xsrf,
                "remember_me": "true",
                "password": "",
                "phone_num": ""
            }
            # Store the extracted xsrf in the cookies as well
            self.cookies["_xsrf"] = xsrf
            # Submit the form with FormRequest. Like the requests Session,
            # this request stays within the same conversation; the response
            # is handed to the callback that checks the login status.
            return [scrapy.FormRequest(url=url,
                                       headers=self.headers,
                                       formdata=data,
                                       callback=self.check_login)]
    # Check the login status; on success, requests to start_urls are issued
    # and parse is called on their responses by default.
    def check_login(self, response):
        # json.loads parses a string; json.load (no s) expects a file object
        text_json = json.loads(response.text)
        # "\u767b\u5f55\u6210\u529f" is the escaped form of 登录成功 ("login succeeded")
        if "msg" in text_json and text_json["msg"] == "\u767b\u5f55\u6210\u529f":
            for urls in self.start_urls:
                yield scrapy.Request(url=urls, dont_filter=True, headers=self.headers)

    def parse(self, response):
        pass
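The xsrf extraction in login can be checked in isolation. A small sketch with a made-up HTML fragment (the attribute order matches what the regex expects on the real page):

```python
import re

# Hypothetical fragment of the login page carrying the CSRF token.
html = '<html><body><input type="hidden" name="_xsrf" value="abc123"></body></html>'

# re.match anchors at the start of the string, so a leading .* together
# with re.DOTALL (making . match newlines too) is needed to skip
# everything before the input tag; re.search would not need the .* .
match_obj = re.match('.*name="_xsrf" value="(.*?)"', html, re.DOTALL)
print(match_obj.group(1))  # abc123
```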
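The comparison in check_login works because json.loads decodes \uXXXX escapes into real characters, so the escaped literal and the plain Chinese string denote the same value. A sketch with a made-up response body in the shape Zhihu's login endpoint returned at the time:

```python
import json

# Hypothetical server response with the message in escaped form.
raw = '{"r": 0, "msg": "\\u767b\\u5f55\\u6210\\u529f"}'

text_json = json.loads(raw)
# The \u escapes are decoded during parsing, so both spellings of the
# string compare equal:
print(text_json["msg"] == "\u767b\u5f55\u6210\u529f")  # True
print(text_json["msg"])  # 登录成功
```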