While scraping Weibo, I hit a decoding error when parsing the returned page.
Source code
import urllib.request
import urllib.error
from bs4 import BeautifulSoup
#1. Fetch the page
url = 'https://s.weibo.com/top/summary'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.44',
}
request = urllib.request.Request(url, headers=headers)
response = urllib.request.urlopen(request)
html = response.read().decode('UTF-8')  # UnicodeDecodeError is raised here
#2. Parse the page
soup = BeautifulSoup(html, "html.parser")
The error
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xca in position 339: invalid continuation byte
Solution
"ANSI" is what Windows tools such as Notepad call the local system code page; on a Simplified-Chinese system that is GBK (code page 936). Python has no codec actually named 'ANSI', so decode with 'gbk', or with its superset 'gb18030' to be safe:

html = response.read().decode('gb18030')
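If you are not sure which code page a server used, a defensive approach is to try a short list of candidate codecs in order and fall back to a lossy decode. `decode_with_fallback` below is a hypothetical helper sketched for this situation, not part of any library:

```python
def decode_with_fallback(raw: bytes, candidates=("utf-8", "gb18030")) -> str:
    """Try each candidate codec in order; last resort is a lossy UTF-8 decode.

    gb18030 is a superset of GBK/GB2312, so it covers the Windows
    "ANSI" code page used on Simplified-Chinese systems.
    """
    for name in candidates:
        try:
            return raw.decode(name)
        except UnicodeDecodeError:
            continue
    # Nothing matched cleanly: keep going, replacing bad bytes with U+FFFD
    return raw.decode("utf-8", errors="replace")

# GBK-encoded bytes fail as UTF-8 but decode cleanly via gb18030
print(decode_with_fallback("微博热搜".encode("gbk")))  # 微博热搜
```

Ordering matters: UTF-8 is tried first because random GBK text almost never forms valid UTF-8, so a successful UTF-8 decode is a strong signal, while the reverse is not true.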
Cause
The page is not UTF-8-encoded; it is served in the local "ANSI" code page (GBK on Simplified-Chinese Windows), so decoding its bytes as UTF-8 fails partway through.
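To confirm what a page is actually encoded in before decoding, you can check the `Content-Type` response header (`response.headers.get_content_charset()` on a urllib response) or sniff the `charset` declared in the HTML `<meta>` tag from the raw bytes. `sniff_meta_charset` below is a minimal stdlib-only sketch of the latter, not a library function:

```python
import re

def sniff_meta_charset(raw: bytes):
    """Look for charset=... in the first 1024 bytes, where <meta> tags live.

    Matches both <meta charset="gbk"> and the older
    <meta http-equiv="Content-Type" content="...; charset=gb2312"> form.
    Returns the lowercased codec name, or None if nothing is declared.
    """
    match = re.search(rb'charset=["\']?([\w-]+)', raw[:1024], re.IGNORECASE)
    return match.group(1).decode("ascii").lower() if match else None

print(sniff_meta_charset(b'<html><head><meta charset="GBK"></head>'))  # gbk
```

Note this is only a heuristic: servers sometimes declare one charset and send another, so the fallback-decoding approach above is still useful as a safety net.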
References
"Solution to the Python error 'UnicodeDecodeError: utf-8 codec can't decode byte'", 米兰小子SHC, CSDN blog