Jianshu Unofficial Big Data (Part 2)

PS: This point matters: the "big data" in my articles is not the big data that is such a hot topic right now. I read an article on big data a few days ago; put simply, once a dataset is too large for a single machine (or for whatever resources you currently have) to handle, you can call it big data. There is no fixed volume that qualifies.
The crawler ran all night and has collected 1.7 million+ records so far. This morning I decided the throughput was not good enough, and since I don't know how to write a distributed crawler, I had no choice but to stop and revise the code. Careful readers will guess that I now have to explain resumable crawling (does it have to start over from scratch after an interruption?). What I have today is only a pseudo-resumable crawl, but it should give you an idea of the approach.

Crawling the hot and city collection URLs

import requests
from lxml import etree
import pymongo

# MongoDB connection: collection (topic) URLs are stored in jianshu.topic_urls
client = pymongo.MongoClient('localhost', 27017)
jianshu = client['jianshu']
topic_urls = jianshu['topic_urls']

host_url = 'http://www.jianshu.com'
# Listing pages for the hot collections (39 pages) and the city collections (2 pages)
hot_urls = ['http://www.jianshu.com/recommendations/collections?page={}&order_by=hot'.format(str(i)) for i in range(1,40)]
city_urls = ['http://www.jianshu.com/recommendations/collections?page={}&order_by=city'.format(str(i)) for i in range(1,3)]

def get_channel_urls(url):
    # Parse one listing page and store each collection's URL, article count and follower count
    html = requests.get(url)
    selector = etree.HTML(html.text)
    infos = selector.xpath('//div[@class="count"]')
    for info in infos:
        part_url = info.xpath('a/@href')[0]
        article_amounts = info.xpath('a/text()')[0]
        focus_amounts = info.xpath('text()')[0].split('·')[1]
        # print(part_url, article_amounts, focus_amounts)
        topic_urls.insert_one({'topicurl': host_url + part_url,
                               'article_amounts': article_amounts,
                               'focus_amounts': focus_amounts})

# for hot_url in hot_urls:
#     get_channel_urls(hot_url)

for city_url in city_urls:
    get_channel_urls(city_url)

This code crawls the collection URLs and stores them in the topic_urls collection; the remaining scraping details are straightforward, so I won't go into them.
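As a quick sanity check, you can query the stored documents to confirm what got saved. This is only a minimal sketch, not part of the original scripts; the database and field names follow the code above:

import pymongo

# Connect to the same database the crawler writes to
client = pymongo.MongoClient('localhost', 27017)
topic_urls = client['jianshu']['topic_urls']

# How many collection URLs were saved (count_documents needs PyMongo 3.7+)
print(topic_urls.count_documents({}))
# What one stored document looks like, e.g.
# {'topicurl': 'http://www.jianshu.com/c/...', 'article_amounts': '...', 'focus_amounts': '...'}
print(topic_urls.find_one())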

Crawling article authors and their followers

import requests
from lxml import etree
import time
import pymongo

client = pymongo.MongoClient('localhost', 27017)
jianshu = client['jianshu']
author_urls = jianshu['author_urls']      # authors already processed, used for de-duplication
author_infos = jianshu['author_infos']    # all collected users (article authors and their followers)

headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
    'Connection':'keep-alive'
}

def get_article_url(url, page):
    # Crawl one page of a collection and extract each article's author
    link_view = '{}?order_by=added_at&page={}'.format(url, str(page))
    try:
        html = requests.get(link_view, headers=headers)
        selector = etree.HTML(html.text)
        infos = selector.xpath('//div[@class="name"]')
        for info in infos:
            author_name = info.xpath('a/text()')[0]
            authorurl = info.xpath('a/@href')[0]
            author_url = 'http://www.jianshu.com' + authorurl
            # Skip authors that have already been processed
            if author_urls.find_one({'author_url': author_url}):
                continue
            # print(author_url, author_name)
            # Record the author so later pages skip it, then store the data and crawl followers
            author_urls.insert_one({'author_url': author_url})
            author_infos.insert_one({'author_name': author_name, 'author_url': author_url})
            get_reader_url(authorurl)
        time.sleep(2)
    except requests.exceptions.ConnectionError:
        pass

# get_article_url('http://www.jianshu.com/c/bDHhpK',2)

def get_reader_url(url):
    # Crawl up to 99 follower pages of one author
    link_views = ['http://www.jianshu.com/users/{}/followers?page={}'.format(url.split('/')[-1], str(i)) for i in range(1, 100)]
    for link_view in link_views:
        try:
            html = requests.get(link_view, headers=headers)
            selector = etree.HTML(html.text)
            infos = selector.xpath('//li/div[@class="info"]')
            for info in infos:
                author_name = info.xpath('a/text()')[0]
                authorurl = info.xpath('a/@href')[0]
                # print(author_name, authorurl)
                author_infos.insert_one({'author_name': author_name, 'author_url': 'http://www.jianshu.com' + authorurl})
        except requests.exceptions.ConnectionError:
            pass

# get_reader_url('http://www.jianshu.com/u/7091a52ac9e5')

1 Jianshu is fairly friendly to crawlers; setting the browser request header (User-Agent) above was enough (but please don't crawl maliciously, and help keep the network healthy).
2 Two errors occurred along the way, and adding two try/except blocks took care of them. I had wondered beforehand whether errors would appear: if you page past the last page, Jianshu automatically redirects back to the second page (I checked this by hand), so I set a very large page limit and did not expect it to fail.
3 After an error I don't want to crawl duplicate data, and a single user can publish many articles, so I added a check in get_article_url. Roughly: if the crawled author URL is already in the user collection, I skip visiting it, storing it, and crawling its followers; see the sketch right after this list for a cheaper way to do that check.
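On that de-duplication check: scanning every stored URL for each author gets slow as the data grows. One alternative (a minimal sketch, not part of the scripts above; the save_author helper is my own, the collection name follows the code) is to put a unique index on author_url and let MongoDB reject duplicates:

import pymongo
from pymongo.errors import DuplicateKeyError

client = pymongo.MongoClient('localhost', 27017)
author_infos = client['jianshu']['author_infos']

# A unique index makes MongoDB refuse a second document with the same author_url
# (creating it fails if the collection already contains duplicate author_url values)
author_infos.create_index('author_url', unique=True)

def save_author(author_name, author_url):
    """Insert an author; returns True if it was new, False if it was already stored."""
    try:
        author_infos.insert_one({'author_name': author_name, 'author_url': author_url})
        return True
    except DuplicateKeyError:
        return False

With this, get_article_url could call save_author and only crawl followers when it returns True, instead of loading every stored URL on each page.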

Entry point

import sys
sys.path.append("..")
from multiprocessing import Pool
from channel_extract import topic_urls
from page_spider import get_article_url

# All collection URLs already stored in MongoDB, minus the front-page collection
db_topic_urls = [item['topicurl'] for item in topic_urls.find()]
shouye_url = ['http://www.jianshu.com/c/bDHhpK']
x = set(db_topic_urls)
y = set(shouye_url)
rest_urls = x - y

def get_all_links_from(channel):
    # Walk through up to 5000 pages of one collection
    for num in range(1, 5000):
        get_article_url(channel, num)

if __name__ == '__main__':
    pool = Pool(processes=4)
    pool.map(get_all_links_from, rest_urls)

1 The crawler was still working on the front-page collection today (because num used to go up to 17000 and the front page has far too many articles). On reflection, most front-page articles are pushed there from other collections, so I dropped it; to resume, I subtract one set from the other to remove the front-page link and then crawl the rest.
2 Why call it pseudo-resumable? Because if it crashes again, it still has to start from the beginning (unless the program is changed). But it gives you an idea: by subtracting sets, you can crawl only what remains. A sketch of a more genuinely resumable setup follows below.
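One way to make the crawl genuinely resumable (a minimal sketch, not from this series; the crawl_record collection and its fields are my own invention) is to record every finished page in MongoDB and skip it on restart:

import pymongo
from page_spider import get_article_url

client = pymongo.MongoClient('localhost', 27017)
crawl_record = client['jianshu']['crawl_record']   # hypothetical progress collection

def get_all_links_from(channel):
    for num in range(1, 5000):
        # Skip pages that a previous run already finished
        if crawl_record.find_one({'channel': channel, 'page': num}):
            continue
        get_article_url(channel, num)
        # Mark the page as done only after it was crawled without raising
        crawl_record.insert_one({'channel': channel, 'page': num})

After a crash you simply rerun the entry script; pages that were already recorded are skipped, so the pool picks up roughly where it stopped.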
