Using a Scrapy Pipeline to Save Different Item Types to a Database or Local Files

Step 1: Define the Items

In items.py, define one Item subclass per record type; the pipeline will later use the class to tell them apart.

import scrapy


class StockItem(scrapy.Item):
    # Fields scraped for a single stock listing.
    stock_code = scrapy.Field()
    company_name = scrapy.Field()
    stock_type = scrapy.Field()


class CompanyInfoItem(scrapy.Item):
    # Fields scraped for a company profile.
    name = scrapy.Field()
    company_name = scrapy.Field()
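
For context, here is a minimal sketch of a spider that yields both item types. The spider name, URL, and field values are hypothetical, not from the original article; the import path follows the company_finance project name used in Step 3:

import scrapy

from company_finance.items import StockItem, CompanyInfoItem


class StockSpider(scrapy.Spider):
    name = "stock"  # hypothetical spider name
    start_urls = ["https://example.com/stocks"]  # placeholder URL

    def parse(self, response):
        # Yield a StockItem; the values would normally come from selectors.
        stock = StockItem()
        stock["stock_code"] = "600000"
        stock["company_name"] = "Example Co."
        stock["stock_type"] = "A-share"
        yield stock

        # Yield a CompanyInfoItem; the same pipeline receives both types.
        info = CompanyInfoItem()
        info["name"] = "Example Co."
        info["company_name"] = "Example Company Ltd."
        yield info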

Step 2: Build the Pipeline

In pipelines.py, a single pipeline can handle every item type by checking each item's class with isinstance:

import pymysql

from .items import StockItem, CompanyInfoItem


class MyPipeline:
    def __init__(self):
        # Database connection settings (placeholders; adjust for your setup).
        host = "127.0.0.1"
        user = "testuser"
        password = "testpassword"
        db = "test_db"

        self.conn = pymysql.connect(host=host, user=user, password=password, database=db)
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        if isinstance(item, StockItem):
            print("StockItem")  # handling logic for StockItem goes here
        elif isinstance(item, CompanyInfoItem):
            print("CompanyInfoItem")  # handling logic for CompanyInfoItem goes here
        return item  # always return the item so later pipelines can process it
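
As a fuller sketch of what the two branches might do, matching the article's title: StockItem rows go into MySQL while CompanyInfoItem records go to a local file. The table name, column names, and file path are assumptions, not from the original:

import pymysql

from .items import StockItem, CompanyInfoItem


class MyPipeline:
    def open_spider(self, spider):
        # Connection settings are placeholders; adjust for your environment.
        self.conn = pymysql.connect(host="127.0.0.1", user="testuser",
                                    password="testpassword", database="test_db")
        self.cursor = self.conn.cursor()
        # Local file for CompanyInfoItem records (the path is an assumption).
        self.file = open("company_info.txt", "a", encoding="utf-8")

    def process_item(self, item, spider):
        if isinstance(item, StockItem):
            # Insert stocks into MySQL (table and columns are assumptions).
            sql = ("INSERT INTO stock (stock_code, company_name, stock_type) "
                   "VALUES (%s, %s, %s)")
            self.cursor.execute(sql, (item["stock_code"],
                                      item["company_name"],
                                      item["stock_type"]))
            self.conn.commit()
        elif isinstance(item, CompanyInfoItem):
            # Append company info to a local text file instead.
            self.file.write(f"{item['name']}\t{item['company_name']}\n")
        return item

    def close_spider(self, spider):
        # Release the file handle and database connection when the spider ends.
        self.file.close()
        self.cursor.close()
        self.conn.close()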

Step 3: Enable the Pipeline in settings.py

Register the pipeline under ITEM_PIPELINES in the project's settings.py so Scrapy loads it:

ITEM_PIPELINES = {
    'company_finance.pipelines.MyPipeline': 300,
}
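
The number 300 is the pipeline's priority: an integer, customarily in the 0-1000 range, and pipelines with lower values run first when several are enabled. With everything wired up, running the spider sends both item types through MyPipeline. Assuming the hypothetical spider name from the sketch in Step 1:

scrapy crawl stock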