Tip: all of the code has been open-sourced on GitHub ("the world's largest developer dating site"); interested readers can give it a try: Git repo link
Do not repost without the author's permission.
Please credit the original author: https://editor.csdn.net/md?articleId=121915057
Table of Contents
- Project background
- 1. Installing the Scrapy framework
- 2. Scrapy usage steps
- 3. The data-storage `pipelines.py` file
- 4. Using downloader middleware
- 5. Converting to scrapy-redis for distributed crawling
- 6. Visual management with scrapyd + scrapydweb
- 7. Data analysis + dashboard visualization
- Final notes
Project background
Graduation is approaching, and most students heading out for internships or jobs face the pressure of finding a place to rent. Surveys have found that 75% of university students feel their pressure comes mainly from employment; 50% feel lost about their prospects after graduation and have no clear goal; 41.7% say they have not thought much about it yet; only 8.3% have a clear goal for their future and feel confident about it. Clearly, employment is something every student has to start thinking about the moment they enter university. China has an enormous population, the number of graduates rises year after year, yet the number of positions society can offer barely changes. Projections of China's newly added labor force suggest that for the next several years roughly 15 to 22 million young people will keep entering the labor market each year. With supply exceeding demand, the first-time employment rate of graduates keeps falling, which directly drives up employment pressure.
At the same time, academic pressure is a constant for students, and employment pressure creates new academic pressure on top of it. To meet society's demands, students try to turn themselves into all-rounders, which means taking many courses and collecting all kinds of certificates for job hunting. But a person's energy is limited; trying to do everything often means mastering nothing. Slowly updated textbooks add to the burden: students must study required courses that are long outdated, hunt for courses that will actually be useful later, and prepare for various mandatory certification exams, leaving them stretched thin. Employment pressure also pushes more students toward master's and doctoral degrees; many start preparing for the postgraduate entrance exam one or two years before graduation, and test-prep classes are everywhere, costing both time and money.
To help graduating students rent a place more easily, avoid the common pitfalls, and get a better picture of both new and second-hand housing, we crawl housing data from the web and build visual analyses on top of it to address the rental problem graduates face.
1. Installing the Scrapy framework
In cmd, run: pip install scrapy
If the command finishes without errors, the installation succeeded.
PS: installing Scrapy on Windows has quite a few pitfalls; if you run into problems, search for them, or refer to: Scrapy installation
2. Scrapy usage steps
For systematic learning, see the official documentation: scrapy official site
2.1 Create the crawler project
Open cmd and change to the directory where you want to create the project:
Run: scrapy startproject myspider
Run: cd myspider
Create the spider: scrapy genspider [spider name] [domain]
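For this project the spider written in section 2.4 is called pafangzi and it crawls fang.com, so the command would be: scrapy genspider pafangzi fang.com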
2.1.1 The project layout after creation
2.2 Configure the settings.py file
Then uncomment the following block:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en'
}
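Depending on the target site you will usually also want to tweak a couple of other settings in settings.py. The values below are common choices rather than anything from the original post:

# Do not filter requests through robots.txt
ROBOTSTXT_OBEY = False
# Add a small delay between requests to stay polite
DOWNLOAD_DELAY = 1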
2.3 Configure the items.py file
This file defines the item classes whose fields the spider will fill in:
import scrapy

class New_houseItem(scrapy.Item):
    # Province
    province = scrapy.Field()
    # City
    city = scrapy.Field()
    # Residential complex name
    name = scrapy.Field()
    # Price
    price = scrapy.Field()
    # Number of rooms
    rooms = scrapy.Field()
    # Floor area
    area = scrapy.Field()
    # District
    district = scrapy.Field()
    # Whether it is currently on sale
    sale = scrapy.Field()
    # Detail-page URL
    new_url = scrapy.Field()

class Old_houseItem(scrapy.Item):
    # Province
    province = scrapy.Field()
    # City
    city = scrapy.Field()
    # Complex name
    name = scrapy.Field()
    # Address
    address = scrapy.Field()
    # Total price
    price = scrapy.Field()
    # Price per square meter
    unit = scrapy.Field()
    # Original listing URL
    old_url = scrapy.Field()
    # Info string (rooms, area, floor, orientation, year)
    infos = scrapy.Field()
2.4 Write the spider file pafangzi.py
2.4.1 Spider setup
2.4.2 Inspecting the target site
Press F12 to open the browser's developer tools.
The Xpath Helper extension is recommended; it is simple and easy to use:
Locate the content you want to extract:
2.4.3 Writing the parse function
Now let's parse the page properly. Code time!
def parse(self, response):
    # Get every row of the city table
    trs = response.xpath("//div[@class = 'outCont']//tr")
    province = None
    # Iterate over the rows
    for tr in trs:
        # The two td tags holding the province and its cities
        tds = tr.xpath(".//td[not(@class)]")
        # Province cell
        province_text = tds[0]
        # Cell with the province's cities and their links
        city_info = tds[1]
        # Extract the province name
        province_text = province_text.xpath(".//text()").get()
        province_text = re.sub(r"\s", "", province_text)
        if province_text:
            province = province_text
        # Skip the "other" category
        if province == "其它":
            continue
        # Extract the city names and links
        city_links = city_info.xpath(".//a")
        for city_link in city_links:
            # City name
            city = city_link.xpath(".//text()").get()
            # City link
            city_url = city_link.xpath(".//@href").get()
            # Build the new-house listing URL
            url_split = city_url.split("fang")
            url_former = url_split[0]
            url_backer = url_split[1]
            newhouse_url = url_former + "newhouse.fang.com/house/s/"
            # Build the second-hand house listing URL
            old_url = url_former + "esf.fang.com/"
            # print("++" * 20)
            # print("Province:", province)
            # print("City:", city)
            # print("New-house URL:", newhouse_url)
            # print("Second-hand URL:", old_url)
            # print("++" * 20)
            # Hand the new-house page to its own parser
            yield scrapy.Request(url=newhouse_url, callback=self.parse_newhouse, meta={"info": (province, city)})
            # Hand the second-hand page to its own parser
            yield scrapy.Request(url=old_url, callback=self.parse_oldhouse, meta={"info": (province, city)})
2.4.4 Second-stage parsing in the callbacks
# Parse a new-house listing page
def parse_newhouse(self, response):
    province, city = response.meta.get("info")
    lis = response.xpath("//div[contains(@class, 'nl_con')]/ul/li[not(@style)]")
    for li in lis:
        # Property name
        name = li.xpath(".//div[@class='nlcd_name']/a/text()").get().strip()
        # Number of rooms
        rooms = li.xpath(".//div[contains(@class, 'house_type')]/a//text()").getall()
        # Floor area
        area = li.xpath(".//div[contains(@class, 'house_type')]/text()").getall()
        area = "".join(area).strip()
        area = re.sub(r"/|-|\s", "", area)
        # Address
        # address = li.xpath(".//div[@class = 'address']/a/@title").get()
        # District the property belongs to
        district = li.xpath(".//div[@class = 'address']/a//text()").getall()
        district = "".join(district)
        district = re.search(r".*\[(.+)\].*", district).group(1)
        # Whether it is on sale
        sale = li.xpath(".//div[contains(@class, 'fangyuan')]/span/text()").get().strip()
        # Price
        price = li.xpath(".//div[@class = 'nhouse_price']//text()").getall()
        price = "".join(price).strip()
        # Detail-page URL
        new_url = li.xpath(".//div[@class = 'nlcd_name']/a/@href").get()
        # Build and yield the item
        item = New_houseItem(province=province, city=city, name=name, rooms=rooms, area=area,
                             district=district, sale=sale, price=price, new_url=new_url)
        yield item
    # Follow the next page
    next_url = response.xpath("//div[@class = 'page']//a[@class = 'next']/@href").get()
    if next_url:
        yield scrapy.Request(url=response.urljoin(next_url), callback=self.parse_newhouse,
                             meta={"info": (province, city)})
# Parse a second-hand listing page
def parse_oldhouse(self, response):
    province, city = response.meta.get("info")
    dls = response.xpath("//div[contains(@class, 'shop_list')]/dl[@dataflag = 'bg']")
    for dl in dls:
        item = Old_houseItem(province=province, city=city)
        # Property name
        name = dl.xpath(".//p[@class = 'add_shop']/a/@title").get()
        item["name"] = name
        # Info string (rooms, area, floor, orientation, year built)
        infos = dl.xpath(".//p[@class = 'tel_shop']/text()").getall()
        infos = "".join(infos).strip()
        infos = re.sub(r"'|\||\s", "", infos)
        item['infos'] = infos
        # Address
        address = dl.xpath(".//p[@class = 'add_shop']/span/text()").get()
        item['address'] = address
        # Total price
        price = dl.xpath(".//dd[@class = 'price_right']/span[1]//text()").getall()
        price = "".join(price).strip()
        item['price'] = price
        # Price per square meter
        unit = dl.xpath(".//dd[@class = 'price_right']/span[2]/text()").get()
        item['unit'] = unit
        # Original listing URL
        old_url = dl.xpath(".//h4[@class = 'clearfix']/a/@href").getall()
        old_url = "".join(old_url)
        old_url = response.urljoin(old_url)
        item['old_url'] = old_url
        yield item
    # Follow the next page
    next_url = response.xpath("//div[@class = 'page_al']/p[last()-1]/a/@href").get()
    if next_url:
        yield scrapy.Request(url=response.urljoin(next_url), callback=self.parse_oldhouse,
                             meta={"info": (province, city)})
The complete spider file:
import scrapy
import re
from ..items import New_houseItem, Old_houseItem

class PafangziSpider(scrapy.Spider):
    name = 'pafangzi'
    allowed_domains = ['fang.com']
    start_urls = ['https://www.fang.com/SoufunFamily.htm']

    def parse(self, response):
        # Get every row of the city table
        trs = response.xpath("//div[@class = 'outCont']//tr")
        province = None
        # Iterate over the rows
        for tr in trs:
            # The two td tags holding the province and its cities
            tds = tr.xpath(".//td[not(@class)]")
            # Province cell
            province_text = tds[0]
            # Cell with the province's cities and their links
            city_info = tds[1]
            # Extract the province name
            province_text = province_text.xpath(".//text()").get()
            province_text = re.sub(r"\s", "", province_text)
            if province_text:
                province = province_text
            # Skip the "other" category
            if province == "其它":
                continue
            # Extract the city names and links
            city_links = city_info.xpath(".//a")
            for city_link in city_links:
                # City name
                city = city_link.xpath(".//text()").get()
                # City link
                city_url = city_link.xpath(".//@href").get()
                # Build the new-house listing URL
                url_split = city_url.split("fang")
                url_former = url_split[0]
                url_backer = url_split[1]
                newhouse_url = url_former + "newhouse.fang.com/house/s/"
                # Build the second-hand house listing URL
                old_url = url_former + "esf.fang.com/"
                # print("++" * 20)
                # print("Province:", province)
                # print("City:", city)
                # print("New-house URL:", newhouse_url)
                # print("Second-hand URL:", old_url)
                # print("++" * 20)
                # Hand the new-house page to its own parser
                yield scrapy.Request(url=newhouse_url, callback=self.parse_newhouse, meta={"info": (province, city)})
                # Hand the second-hand page to its own parser
                yield scrapy.Request(url=old_url, callback=self.parse_oldhouse, meta={"info": (province, city)})

    # Parse a new-house listing page
    def parse_newhouse(self, response):
        province, city = response.meta.get("info")
        lis = response.xpath("//div[contains(@class, 'nl_con')]/ul/li[not(@style)]")
        for li in lis:
            # Property name
            name = li.xpath(".//div[@class='nlcd_name']/a/text()").get().strip()
            # Number of rooms
            rooms = li.xpath(".//div[contains(@class, 'house_type')]/a//text()").getall()
            # Floor area
            area = li.xpath(".//div[contains(@class, 'house_type')]/text()").getall()
            area = "".join(area).strip()
            area = re.sub(r"/|-|\s", "", area)
            # Address
            # address = li.xpath(".//div[@class = 'address']/a/@title").get()
            # District the property belongs to
            district = li.xpath(".//div[@class = 'address']/a//text()").getall()
            district = "".join(district)
            district = re.search(r".*\[(.+)\].*", district).group(1)
            # Whether it is on sale
            sale = li.xpath(".//div[contains(@class, 'fangyuan')]/span/text()").get().strip()
            # Price
            price = li.xpath(".//div[@class = 'nhouse_price']//text()").getall()
            price = "".join(price).strip()
            # Detail-page URL
            new_url = li.xpath(".//div[@class = 'nlcd_name']/a/@href").get()
            # Build and yield the item
            item = New_houseItem(province=province, city=city, name=name, rooms=rooms, area=area,
                                 district=district, sale=sale, price=price, new_url=new_url)
            yield item
        # Follow the next page
        next_url = response.xpath("//div[@class = 'page']//a[@class = 'next']/@href").get()
        if next_url:
            yield scrapy.Request(url=response.urljoin(next_url), callback=self.parse_newhouse,
                                 meta={"info": (province, city)})

    # Parse a second-hand listing page
    def parse_oldhouse(self, response):
        province, city = response.meta.get("info")
        dls = response.xpath("//div[contains(@class, 'shop_list')]/dl[@dataflag = 'bg']")
        for dl in dls:
            item = Old_houseItem(province=province, city=city)
            # Property name
            name = dl.xpath(".//p[@class = 'add_shop']/a/@title").get()
            item["name"] = name
            # Info string (rooms, area, floor, orientation, year built)
            infos = dl.xpath(".//p[@class = 'tel_shop']/text()").getall()
            infos = "".join(infos).strip()
            infos = re.sub(r"'|\||\s", "", infos)
            item['infos'] = infos
            # Address
            address = dl.xpath(".//p[@class = 'add_shop']/span/text()").get()
            item['address'] = address
            # Total price
            price = dl.xpath(".//dd[@class = 'price_right']/span[1]//text()").getall()
            price = "".join(price).strip()
            item['price'] = price
            # Price per square meter
            unit = dl.xpath(".//dd[@class = 'price_right']/span[2]/text()").get()
            item['unit'] = unit
            # Original listing URL
            old_url = dl.xpath(".//h4[@class = 'clearfix']/a/@href").getall()
            old_url = "".join(old_url)
            old_url = response.urljoin(old_url)
            item['old_url'] = old_url
            yield item
        # Follow the next page
        next_url = response.xpath("//div[@class = 'page_al']/p[last()-1]/a/@href").get()
        if next_url:
            yield scrapy.Request(url=response.urljoin(next_url), callback=self.parse_oldhouse,
                                 meta={"info": (province, city)})
3. The data-storage pipelines.py file
We will store the data in MongoDB. For more details on MongoDB, see: working with MongoDB on Windows
3.1 Install the MongoDB client library
Open cmd
Run: pip install pymongo
3.2 Write the pipelines.py file
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter

# Store the items in MongoDB
import pymongo
from .items import New_houseItem, Old_houseItem

class MongodbPipline(object):
    def __init__(self):
        # Connect to the MongoDB server
        client = pymongo.MongoClient('127.0.0.1', 27017)
        # Select the database; 'fangzi' is the database name
        db = client['fangzi']
        # Select the collections (i.e. the "tables"); 'newhouse3' holds new houses
        self.post_newhouse = db['newhouse3']  # new houses
        self.post_oldhouse = db['oldhouse3']  # second-hand houses

    def process_item(self, item, spider):
        if isinstance(item, New_houseItem):
            # Convert the item to a dict
            postItem = dict(item)
            # Insert one record into the collection
            print(postItem)
            self.post_newhouse.insert_one(postItem)
        if isinstance(item, Old_houseItem):
            # Convert the item to a dict
            postItem = dict(item)
            # Insert one record into the collection
            print(postItem)
            self.post_oldhouse.insert_one(postItem)
        # Return the item so any later pipelines can still see it
        return item
3.3 Enable MongoDB in settings.py
Add the following:
# MongoDB connection settings
MONGODB_HOST = '127.0.0.1'
MONGODB_PORT = 27017
MONGODB_DBNAME = 'fangzi'
MONGODB_DOCNAME1 = 'newhouse3'
MONGODB_DOCNAME2 = 'oldhouse3'
and enable the item pipeline:
ITEM_PIPELINES = {
    # Save to a JSON file
    # 'myspider.pipelines.PaFangPipeline': 300,
    # Save to MongoDB
    'myspider.pipelines.MongodbPipline': 300,
}
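Note that the pipeline in 3.2 hard-codes the host, port, and collection names, so the MONGODB_* settings above are not actually read by it. If you prefer to drive the pipeline from those settings, a minimal sketch (this rewrite is my own suggestion, not part of the original project) could look like:

import pymongo
from .items import New_houseItem, Old_houseItem

class MongodbPipline(object):
    def __init__(self, host, port, dbname, new_doc, old_doc):
        client = pymongo.MongoClient(host, port)
        db = client[dbname]
        self.post_newhouse = db[new_doc]
        self.post_oldhouse = db[old_doc]

    @classmethod
    def from_crawler(cls, crawler):
        # Read the MONGODB_* values added to settings.py in section 3.3
        s = crawler.settings
        return cls(s.get('MONGODB_HOST'), s.getint('MONGODB_PORT'),
                   s.get('MONGODB_DBNAME'), s.get('MONGODB_DOCNAME1'),
                   s.get('MONGODB_DOCNAME2'))

    def process_item(self, item, spider):
        # Route each item type to its own collection
        collection = self.post_newhouse if isinstance(item, New_houseItem) else self.post_oldhouse
        collection.insert_one(dict(item))
        return item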
At this point the first stage is done: our crawler is complete!
Go to the project directory and open a terminal.
Run: scrapy crawl [spider name] (for this project: scrapy crawl pafangzi)
Wait for the run to finish and the data will be in your database.
4. Using downloader middleware
Open the middlewares.py file.
4.1 Setting a random User-Agent
from scrapy import signals
import random
# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter
# Random User-Agent middleware
class UserAgentDownloadMiddleware(object):
    # Pool of User-Agent strings
    User_Agents = [
"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0",
"Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729; InfoPath.3; rv:11.0) like Gecko",
"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
"Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)",
"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
"Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
"Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
"Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)",
"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; TencentTraveler 4.0)",
"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; The World)",
"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Avant Browser)",
"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
"Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
"Mozilla/5.0 (iPod; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
"Mozilla/5.0 (iPad; U; CPU OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5",
"Mozilla/5.0 (Linux; U; Android 2.3.7; en-us; Nexus One Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
"MQQBrowser/26 Mozilla/5.0 (Linux; U; Android 2.3.7; zh-cn; MB200 Build/GRJ22; CyanogenMod-7) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
"Opera/9.80 (Android 2.3.4; Linux; Opera Mobi/build-1107180945; U; en-GB) Presto/2.8.149 Version/11.10",
"Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/534.13 (KHTML, like Gecko) Version/4.0 Safari/534.13",
"Mozilla/5.0 (BlackBerry; U; BlackBerry 9800; en) AppleWebKit/534.1+ (KHTML, like Gecko) Version/6.0.0.337 Mobile Safari/534.1+",
"Mozilla/5.0 (hp-tablet; Linux; hpwOS/3.0.0; U; en-US) AppleWebKit/534.6 (KHTML, like Gecko) wOSBrowser/233.70 Safari/534.6 TouchPad/1.0",
"Mozilla/5.0 (SymbianOS/9.4; Series60/5.0 NokiaN97-1/20.0.019; Profile/MIDP-2.1 Configuration/CLDC-1.1) AppleWebKit/525 (KHTML, like Gecko) BrowserNG/7.1.18124",
"Mozilla/5.0 (compatible; MSIE 9.0; Windows Phone OS 7.5; Trident/5.0; IEMobile/9.0; HTC; Titan)",
"UCWEB7.0.2.37/28/999",
"NOKIA5700/ UCWEB7.0.2.37/28/999",
"Openwave/ UCWEB7.0.2.37/28/999",
"Mozilla/4.0 (compatible; MSIE 6.0; ) Opera/UCWEB7.0.2.37/28/999"
]
    # Pick a random User-Agent for each outgoing request
    def process_request(self, request, spider):
        user_agent = random.choice(self.User_Agents)
        request.headers['User-Agent'] = user_agent
4.2 Setting a proxy IP (if you have one ^ _ ^)
import base64

# Middleware for an authenticated (dedicated) proxy
class DuXiangIPProxyDownloadMiddleware(object):
    def process_request(self, request, spider):
        # Proxy address and port
        proxy = "proxy-ip:port"
        # Proxy account and password
        user_password = "username:password"
        request.meta["proxy"] = proxy
        # base64-encode the credentials first; b64encode needs bytes, and user_password is a str, so encode it
        b64_user_password = base64.b64encode(user_password.encode('utf-8'))
        # Put the credentials in the request header; "Basic " is a str while b64_user_password is bytes, so decode it first
        request.headers["Proxy-Authorization"] = "Basic " + b64_user_password.decode('utf-8')
Don't forget to enable the middleware in settings.py:
DOWNLOADER_MIDDLEWARES = {
    # 'myspider.middlewares.MyspiderDownloaderMiddleware': 543,
    # Random User-Agent middleware
    'myspider.middlewares.UserAgentDownloadMiddleware': 543,
}
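If you also want requests to go through the proxy middleware from 4.2, add it to the same dict with its own priority, for example 'myspider.middlewares.DuXiangIPProxyDownloadMiddleware': 544, (the priority value here is just an illustrative choice).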
Doesn't it all feel pretty simple?
To squeeze more performance out of the crawler, let's now try something fun:
converting it into a distributed crawler.
5. Converting to scrapy-redis for distributed crawling
5.1 Download and install Redis
5.2 Start Redis
- Open a cmd window
- Change into the Redis installation directory
- Run: redis-server.exe redis.windows.conf
- If the Redis startup banner appears, the server is running
- Open a new cmd window and run: redis-cli
5.3 Configure the Python side
5.3.1 Install scrapy-redis
Run: pip install scrapy-redis
5.3.2 Configuration
Then add the following to settings.py
(since the scraped data goes to MongoDB, we do not use Redis to store the items):
# Scrapy-Redis settings
# Schedule requests through Redis
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Make all spiders share the same duplicate-filter fingerprints
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Use Redis as the item pipeline (this would keep the scraped data on the Redis server)
# ITEM_PIPELINES = {
#     'scrapy_redis.pipelines.RedisPipeline': 300
# }

# Keep the scrapy-redis queues in Redis instead of clearing them,
# which makes it possible to pause and resume the crawl
SCHEDULER_PERSIST = True

# Redis connection info
REDIS_HOST = '127.0.0.1'  # IP address of the machine running Redis
REDIS_PORT = 6379
5.3.3 Start the distributed crawl
Now for the most satisfying moment.
Open several terminals and start the spider in each so they all sit listening, then in the redis-cli window run: lpush myspider:start_urls [start URL]
Hit Enter and the crawl begins (see the sketch below for the spider-side change this relies on).
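One thing the post does not show: for the lpush command above to actually feed the spider, the spider normally has to read its start URLs from Redis. With scrapy-redis that is usually done by switching the spider's base class, roughly like this sketch (the redis_key matches the lpush key used above; the rest of the spider stays as in section 2.4):

from scrapy_redis.spiders import RedisSpider

class PafangziSpider(RedisSpider):
    name = 'pafangzi'
    allowed_domains = ['fang.com']
    # Start URLs are now popped from this Redis list instead of start_urls
    redis_key = 'myspider:start_urls'

    # parse / parse_newhouse / parse_oldhouse remain unchanged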
6. Visual management with scrapyd + scrapydweb
Getting a bit tired of driving everything from the command line? Me too.
Want to dig deeper? See the scrapyd manual.
6.1 Install scrapyd
Run: pip install scrapyd
Run: pip install scrapyd-client
Then start the service from the command line:
scrapyd
Open http://localhost:6800/ in a browser; if the Scrapyd status page shows up, the service is running (do not close this cmd window, later steps still need it).
6.2 Configure scrapyd
Open the Scrapy project; it contains a scrapy.cfg file, which needs to be configured as follows.
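The original screenshot of this configuration is missing; a typical scrapy.cfg deploy section for this setup (the target name myspider below is an assumption chosen to match the commands in the next steps) would look roughly like:

[settings]
default = myspider.settings

[deploy:myspider]
url = http://localhost:6800/
project = myspider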
In the project folder, run scrapyd-deploy -l to confirm the deploy target is detected.
Once the project is found, package and upload it: scrapyd-deploy myspider -p myspider
Finally, schedule the spider: curl http://localhost:6800/schedule.json -d project=myspider -d spider=pafangzi
6.3 Install scrapydweb
Run: pip install scrapydweb
The first time you run it, a configuration file is generated.
6.4 Run scrapydweb (make sure the scrapyd service is still running)
In a terminal: scrapydweb
The web dashboard appears.
From here you can manage your crawls comfortably from the browser.
7. Data analysis + dashboard visualization
Since I don't have a JavaScript guru to teach me, I went with a BI tool instead. Let's go!
Run: pip install pandas
import pymongo
import pandas as pd

client = pymongo.MongoClient('127.0.0.1', 27017)
# Select the database; 'fangzi' is the database name
db = client['fangzi']
# Select the collections
old_table = db['oldhouse3']
new_table = db['newhouse3']
data = pd.DataFrame(list(old_table.find()))
Filter out the fields you need, group the data, save it into tables, and import them into the BI tool.
The rest is up to you ^ _ ^
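As a rough sketch of that filtering and export step (the column names come from the Old_houseItem fields; the grouping and the output file names are just illustrative choices):

# Keep only the columns the dashboard needs and drop MongoDB's _id
old_df = data[['province', 'city', 'name', 'address', 'price', 'unit', 'infos']]
# One possible grouping: number of listings per city
city_counts = old_df.groupby(['province', 'city']).size().reset_index(name='listings')
# Export as CSV so the BI tool can import it
old_df.to_csv('oldhouse.csv', index=False, encoding='utf-8-sig')
city_counts.to_csv('city_counts.csv', index=False, encoding='utf-8-sig')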
And here is the finished dashboard.
Final notes
If this post helped, remember to give it a like, a bookmark, and a follow!