First, a simple page-scraping program:
[python]
import sys, urllib2

req = urllib2.Request("http://blog.csdn.net/nevasun")
fd = urllib2.urlopen(req)
while True:
    data = fd.read(1024)    # read the response in 1 KB chunks
    if not len(data):       # an empty read means end of stream
        break
    sys.stdout.write(data)
Running this in a terminal raises urllib2.HTTPError: HTTP Error 403: Forbidden. Why?
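To inspect the status code without letting the traceback end the script, you can catch urllib2.HTTPError first; a minimal sketch (the try/except wrapper is my addition, not part of the original post):
[python]
import urllib2

try:
    urllib2.urlopen("http://blog.csdn.net/nevasun")
except urllib2.HTTPError, e:
    print "Server returned status:", e.code  # prints 403 for this site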
The site rejects requests that look like crawlers. You can add header information to the request so it masquerades as a browser. Add and modify:
[python]
headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib2.Request("http://blog.csdn.net/nevasun", headers=headers)
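As an aside, the same header can also be attached after constructing the Request, via its add_header method; a small equivalent sketch:
[python]
import urllib2

# Equivalent: attach the User-Agent header after constructing the Request.
req = urllib2.Request("http://blog.csdn.net/nevasun")
req.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6')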
Try again: the HTTP Error 403 is gone, but all the Chinese text comes out garbled. What now?
This is because the site is UTF-8 encoded, so the content needs to be converted to the local system's encoding:
[python]
import sys, urllib2

headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib2.Request("http://blog.csdn.net/nevasun", headers=headers)
content = urllib2.urlopen(req).read()          # raw bytes, UTF-8 encoded
local_encoding = sys.getfilesystemencoding()   # local system encoding (renamed to avoid shadowing the built-in 'type')
print content.decode("UTF-8").encode(local_encoding)  # decode UTF-8, re-encode for the terminal
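Hardcoding "UTF-8" works for this page, but servers usually declare the charset in the Content-Type response header; a sketch that reads it dynamically (the fallback to UTF-8 when the header omits it is my assumption):
[python]
import sys, urllib2

headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib2.Request("http://blog.csdn.net/nevasun", headers=headers)
fd = urllib2.urlopen(req)
charset = fd.info().getparam('charset') or 'UTF-8'  # server-declared encoding, UTF-8 as fallback
content = fd.read()
print content.decode(charset).encode(sys.getfilesystemencoding())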
With that, Chinese pages can be scraped correctly.
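One final note: urllib2 exists only on Python 2. On Python 3 the same flow uses urllib.request, and since str is already Unicode there is no need to re-encode before printing; a sketch for reference:
[python]
import urllib.request

headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib.request.Request("http://blog.csdn.net/nevasun", headers=headers)
content = urllib.request.urlopen(req).read()  # bytes
print(content.decode("UTF-8"))                # str is Unicode; print handles the console encoding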