Scraping Taobao Product Information

The Python course project I've been working on these past couple of days includes a web-crawler assignment that requires scraping Taobao, so today let's do some Taobao product-information scraping!

  • First, set a goal: search by a keyword and, from the returned page, extract the product title, price, shipping location, number of buyers, and shop name. All of this information sits directly in the page's HTML source.
  • OK, goal set, time to attack! Type any keyword into Taobao, look at the URL, then flip through a few pages and watch how the URL changes. To make the differences between page numbers easier to compare, the URLs for pages 1, 2, and 3 are listed together below:
  • https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20201220&ie=utf8,
    https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20201220&ie=utf8&bcoffset=3&ntoffset=3&p4ppushleft=1%2C48&s=44,
    https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20201220&ie=utf8&bcoffset=0&ntoffset=6&p4ppushleft=1%2C48&s=88
  • So long, so many parameters, totally unreadable — what to do? Don't panic! Bring in a URL-decoding tool: http://tool.chinaz.com/tools/urlencode.aspx. Paste a URL in and you'll discover that, oh, q= is followed by the URL-encoded search keyword. What about the rest of that string? Delete everything that is identical across all three URLs! Don't ask me how I know — if a parameter never changes, it's optional, and optional means useless here. The two offset parameters can generally be deleted without any problem as well. In the end only s is left: absent on page 1 (so it defaults to 0), 44 on page 2, 88 on page 3. So s controls pagination: s = 44 * (page - 1).
  • OK, now we can construct the URLs: take the base https://s.taobao.com/search? plus q=<encoded keyword> plus &s=(page - 1) × 44. URL-encoding is just urllib.parse.quote('string'). Let's do 20 pages:
    key = '手套'
    key = parse.quote(key)
    url = 'https://s.taobao.com/search?q={}&s={}'
    page = 20
    for i in range(page):
        url_page = url.format(key, i * 44)
        print(url_page)
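
Incidentally, you don't even need an online tool to decode these URLs — the standard library can unpack the query string locally. A small sketch, using the page-3 URL from above with the redundant parameters already trimmed:

```python
from urllib.parse import urlsplit, parse_qs, unquote

# Page-3 URL from above, with the redundant parameters already trimmed
url = ('https://s.taobao.com/search?q=%E7%BE%BD%E7%BB%92%E6%9C%8D'
       '&ie=utf8&bcoffset=0&ntoffset=6&s=88')

# Decode the keyword, then split the whole query string into a dict
print(unquote('%E7%BE%BD%E7%BB%92%E6%9C%8D'))  # 羽绒服 ("down jacket")
params = parse_qs(urlsplit(url).query)
print(params['s'])  # ['88'] -> page 3, since s = 44 * (page - 1)
```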

Then, once we build a request header the normal way and fetch the page with get(), you'll find — yikes — it blows up: the returned content is wrong. So what now? What about the homework? "Programming by CSDN" is not just a joke. It turns out that crawling Taobao needs a "fake login": you have to capture the full headers from the browser, because a User-Agent alone definitely won't cut it, and then pass them along with requests.get(url, headers=headers). To get those headers: right-click the page and open the browser's DevTools, switch to Network, hit Ctrl+R to refresh, find the request starting with search? under All, right-click it and choose Copy as cURL (bash), then open https://curl.trillworks.com/, paste, and copy the headers straight out of the Python requests code on the right. Request again and you're done!

  • The page source has been fetched; time to extract the information. Since the data lives inside a script tag, re to the rescue!
    And I've thoughtfully pasted the regular expressions here (whispers: these are adapted from ones my teacher wrote):
    title = re.findall(r'\"raw_title\"\:\"(.*?)\"', response)

    nick = re.findall(r'\"nick\"\:\"(.*?)\"', response)

    item_loc = re.findall(r'\"item_loc\"\:\"(.*?)\"', response)

    price = re.findall(r'\"view_price\"\:\"(.*?)\"', response)

    sales = re.findall(r'\"view_sales\"\:\"(.*?)\"', response)
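
To see how these patterns behave, here is a tiny self-contained run against a made-up fragment shaped like the JSON that Taobao embeds in its script block (note the backslashes before " and : in the patterns above are optional escapes; the plain characters match just the same):

```python
import re

# Made-up fragment shaped like the JSON embedded in the search page source
response = ('"raw_title":"保暖针织触屏手套","view_price":"29.90",'
            '"item_loc":"江苏 连云港","view_sales":"100人付款","nick":"某某旗舰店"')

title = re.findall(r'\"raw_title\"\:\"(.*?)\"', response)
price = re.findall(r'\"view_price\"\:\"(.*?)\"', response)
print(title)  # ['保暖针织触屏手套']
print(price)  # ['29.90']
```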

Each regex match returns a list:

  • title is the product title
  • nick is the shop name (this one comes back with one extra entry; I took a look, the last one is useless, so just slice it off)
  • item_loc is the location
  • price is the price
  • sales is the number of buyers
    Ahem — then write the corresponding fields out together and you're done. CSV is recommended here, dear! (in true Taobao customer-service style)
    Full source code:
from urllib import parse
import requests
import re
import time
import csv
import os


def get_response(url):
    headers = {
        'authority': 's.taobao.com',
        'cache-control': 'max-age=0',
        'upgrade-insecure-requests': '1',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36',
        'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'sec-fetch-site': 'same-origin',
        'sec-fetch-mode': 'navigate',
        'sec-fetch-user': '?1',
        'sec-fetch-dest': 'document',
        'accept-language': 'zh-CN,zh;q=0.9',
        'cookie': 'cna=jKMMGOupxlMCAWpbGwO3zyh4; tracknick=tb311115932; _cc_=URm48syIZQ%3D%3D; thw=cn; hng=CN%7Czh-CN%7CCNY%7C156; miid=935759921262504718; t=bd88fe30e6685a4312aa896a54838a7e; sgcookie=E100kQv1bRHxrwnulL8HT5z2wacaf40qkSLYMR8tOCmVIjE%2FxrR5nzhju3UySug2dFrigMAy3v%2FjkNElYj%2BDcqmgdA%3D%3D; uc3=nk2=F5RGNwnC%2FkUVLHU%3D&vt3=F8dCuf2OXoGHiuEl2D8%3D&id2=VyyUy7sStBYaoA%3D%3D&lg2=U%2BGCWk%2F75gdr5Q%3D%3D; lgc=tb311115932; uc4=nk4=0%40FY4NAq0PgYBeuIHFyHE%2F9QSZnG6juw%3D%3D&id4=0%40VXtbYhfspVba1o0MN1OuNaxcY%2BUP; enc=tJQ9f26IYMQmwsNzfEZi6fJNcflLvL6bdcU4yyus3rqfsM37Mpy1jvcSMZ%2BYSaE5vziMtC9svi%2B4JVMfCnIsWA%3D%3D; _samesite_flag_=true; cookie2=112f2a76112f88f183403c6a3c4b721f; _tb_token_=eeeb18eb59e1; tk_trace=oTRxOWSBNwn9dPyorMJE%2FoPdY8zfvmw%2Fq5v3iwJfzrr80CDMiLUbZX4jcwHeizGatsFqHolN1SmeHD692%2BvAq7YJ%2FbITqs68WMjdAhcxP7WLdArSe8thnE40E0eWE4GQTvQP9j5XSLFbjZAE7XgwagUcgW%2Fg6rXAuZaws1NrrZksnq%2BsYQUb%2FHT%2Fa1m%2Fctub0jBbjlmp8ZDJGSpGyPMgg561G3vjIRPVnkhRCyG9GgwteJUZAsyQIkeh7xtdyN%2BF50TIambWylXMZhQW7LQGZ48rHl3Q; lLtC1_=1; v=0; mt=ci=-1_0; _m_h5_tk=b0940eb947e1d7b861c7715aa847bfc7_1608386181566; _m_h5_tk_enc=6a732872976b4415231b3a5270e90d9c; xlly_s=1; alitrackid=www.taobao.com; lastalitrackid=www.taobao.com; JSESSIONID=136875559FEC7BCA3591450E7EE11104; uc1=cookie14=Uoe0ZebpXxPftA%3D%3D; tfstk=cgSFBiAIAkEUdZx7kHtrPz1rd-xdZBAkGcJ2-atXaR-zGpLhi7lJIRGJQLRYjef..; l=eBI8YSBIOXAWZRYCBOfaourza779sIRYSuPzaNbMiOCP9_fp5rvCWZJUVfT9CnGVh6SBR3-wPvUJBeYBqnY4n5U62j-la_Dmn; isg=BAsLX3b80AwyYAwAj8PO7RC0mq_1oB8iDqsYtX0I5sqhnCv-BXFHcGI-cpxyuXca',
    }
    response = requests.get(url, headers=headers).content.decode('utf-8')
    # "raw_title":"卡蒙手套女2020秋冬季新款运动保暖护手休闲针织触屏防寒羊皮手套"
    # "view_price":"208.00"
    # "nick":"intersport旗舰店"
    # "item_loc":"江苏 连云港"
    # "view_sales":"0人付款"
    title = re.findall(r'\"raw_title\"\:\"(.*?)\"', response)

    nick = re.findall(r'\"nick\"\:\"(.*?)\"', response)[:-1]  # last match is not a shop name; drop it

    item_loc = re.findall(r'\"item_loc\"\:\"(.*?)\"', response)

    price = re.findall(r'\"view_price\"\:\"(.*?)\"', response)

    sales = re.findall(r'\"view_sales\"\:\"(.*?)\"', response)
    return [title, nick, item_loc, price, sales]


def tocsv(file, filename):
    # newline='' keeps csv.writer from emitting blank rows between records on Windows
    with open(filename, 'a+', encoding='utf-8', newline='') as f:
        f.seek(0)
        write = csv.writer(f)
        if f.read() == '':  # file is empty: write the header row first
            # header order matches the data rows below: title, nick, item_loc, price, sales
            write.writerow(('title', 'shop', 'location', 'price', 'sales'))

        for i in range(len(file[0])):
            write.writerow((file[0][i], file[1][i], file[2][i], file[3][i], file[4][i]))


if __name__ == '__main__':
    filename = 'taobao.csv'
    key = '手套'
    key = parse.quote(key)
    url = 'https://s.taobao.com/search?q={}&s={}'
    page = 20
    if os.path.exists(filename):
        os.remove(filename)
    for i in range(page):
        url_page = url.format(key, i * 44)
        print(url_page)
        res = get_response(url_page)
        time.sleep(1)
        tocsv(res, filename=filename)
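
As an aside, the index loop in tocsv can also be written with zip(), which stitches the parallel lists into rows directly. A minimal sketch with made-up sample lists, writing to an in-memory buffer instead of a file:

```python
import csv
import io

# Hypothetical parallel lists, shaped like what get_response() returns
titles = ['warm gloves', 'ski gloves']
shops = ['shop A', 'shop B']
prices = ['29.90', '99.00']

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(('title', 'shop', 'price'))
writer.writerows(zip(titles, shops, prices))  # one row per product
print(buf.getvalue())
```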
