Target site: https://news.163.com/special/epidemic/
Task: scrape each region's basic epidemic figures for the day
Intended audience: readers who know basic Python and want a small hands-on project
The code is below.
First, import requests, the workhorse for HTTP requests, and pandas for handling the data.
import requests
import pandas as pd
The function below fetches the JSON data.
def get_page(url):
    # A browser-like User-Agent helps avoid being blocked; replace the placeholder with a real one
    headers = {'User-Agent': 'XXXXXXX'}
    r = requests.get(url, headers=headers)
    r.encoding = r.apparent_encoding
    return r.json()
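To get a feel for what the endpoint returns, you can call get_page once and peek at the structure. The data → areaTree → children nesting is what the parser below relies on; the exact field names are assumptions about the feed, so verify them against the live response.

# Quick sanity check of the response shape (field names are assumptions to be verified)
sample = get_page("https://c.m.163.com/ug/api/wuhan/app/data/list-total?t=316639086783")
print(sample.keys())                          # top-level keys; 'data' should be among them
print(sample['data']['areaTree'][0]['name'])  # first country-level node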
Next comes a bit of analysis of the returned JSON, since the goal is to pull out only the useful fields.
def parse_page(html):
    records = []
    # areaTree[0] is China; each child is a province, whose children are cities
    china = html['data']['areaTree'][0]['children']
    for province in china:
        provinceName = province['name']
        for city in province['children']:
            records.append({
                'province': provinceName,
                'city': city['name'],
                'confirm': city['today']['confirm'],
                'dead': city['today']['dead'],
                'heal': city['today']['heal'],
                'suspect': city['today']['suspect'],
                'lastUpdateTime': city['lastUpdateTime'],
            })
    return records
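Each element of the returned list is one flat dict per city, which drops straight into a DataFrame. A quick way to check the parsing before writing anything to disk (the printed values are whatever the feed returns at run time, not figures from this post):

# Peek at the first parsed record
dataJson = get_page("https://c.m.163.com/ug/api/wuhan/app/data/list-total?t=316639086783")
rows = parse_page(dataJson)
print(rows[0])  # e.g. {'province': ..., 'city': ..., 'confirm': ..., ...}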
Finally, save the extracted data to a file.
def save_file(records):
    df = pd.DataFrame(records)
    # Fix the column order so the CSV layout is predictable
    order = ['province', 'city', 'confirm', 'dead', 'heal', 'suspect', 'lastUpdateTime']
    df = df[order]
    df.to_csv('pachong.csv', index=True, header=True)
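One optional tweak, not part of the original script: if you open pachong.csv in Excel, the Chinese province and city names may show up garbled with plain UTF-8 output. Writing with a UTF-8 BOM usually fixes that, and index=False drops the surplus row-number column. The function name here is just for illustration.

def save_file_excel_friendly(records):
    # Same as save_file, but with a UTF-8 BOM so Excel displays Chinese text correctly
    df = pd.DataFrame(records)
    order = ['province', 'city', 'confirm', 'dead', 'heal', 'suspect', 'lastUpdateTime']
    df[order].to_csv('pachong.csv', index=False, header=True, encoding='utf-8-sig')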
Those are all the functions; now run them.
url = "https://c.m.163.com/ug/api/wuhan/app/data/list-total?t=316639086783"
dataJson = get_page(url)
allData = parse_page(dataJson)
save_file(allData)
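The t=316639086783 query parameter is presumably a cache-buster; the original post does not explain it, and the hard-coded value works as-is. If the response ever looks stale, one thing to try (an assumption, not confirmed behaviour of the API) is passing the current time in milliseconds instead:

import time

# Assumption: treat t as a millisecond timestamp used for cache busting
url = "https://c.m.163.com/ug/api/wuhan/app/data/list-total?t={}".format(int(time.time() * 1000))
save_file(parse_page(get_page(url)))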
As usual, message me privately if you run into problems.