An Incomplete Guide to Fabric 1.4.1 Core Modules and Configuration Files

Core Modules and Their Functions

Module          Function
peer            Main node module; stores blockchain data and runs and maintains chaincode
orderer         Transaction batching and ordering module
cryptogen       Organization and certificate generation module
configtxgen     Block and transaction generation module
configtxlator   Block and transaction parsing module

The core module binaries are located in the bin directory.
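
To confirm which release the binaries in bin belong to, every module offers a version command. A quick check might look like this (output varies with your build):

cd bin
./cryptogen version
./configtxgen -version
./orderer version
./peer version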

Module Configuration

  1. The configuration of a Fabric core module comes from three sources: the configuration file, command-line options, and environment variables. Inconsistencies between the configuration file and the environment variables are a common cause of startup errors.
  2. Precedence among the three sources (Fabric reads its configuration through Viper): command-line options > environment variables > configuration file.
  3. Environment variables and configuration-file entries are interchangeable, but it is best to keep all settings in one place. When running under Docker, prefer environment variables; when starting the binaries directly from the command line, prefer the configuration file. The mapping between the two forms is illustrated below.
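
As an illustration of how the two forms correspond: the environment-variable name is the module prefix (CORE_ for peer, ORDERER_ for orderer) followed by the upper-cased path of the YAML key. The two snippets below set the same peer listen address; the value is only a placeholder:

# environment-variable form (e.g. in a docker-compose file)
- CORE_PEER_LISTENADDRESS=0.0.0.0:7051

# equivalent entry in the peer's core.yaml
peer:
    listenAddress: 0.0.0.0:7051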

The cryptogen Module

  1. Command
    cryptogen --help shows the command-line options of the cryptogen module.
    Output:
[root@node1 bin]# ./cryptogen --help
usage: cryptogen [<flags>] <command> [<args> ...]

Utility for generating Hyperledger Fabric key material

Flags:
  --help  Show context-sensitive help (also try --help-long and --help-man).

Commands:
  help [<command>...]           // show help information
    Show help.

  generate [<flags>]            // generate certificates and private keys from the configuration file
    Generate key material

  showtemplate                  // show the default cryptogen configuration template
    Show the default configuration template

  version                       // show the version of this module
    Show version information

  extend [<flags>]              // extend an existing network
    Extend existing network
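
A minimal usage sketch; the --config and --output values are placeholders and should match your own layout:

# generate certificates and keys from crypto-config.yaml into ./crypto-config
./cryptogen generate --config=./crypto-config.yaml --output=./crypto-config

# print the built-in configuration template, e.g. as a starting point for your own file
./cryptogen showtemplate > crypto-config.yaml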
  2. Configuration file: crypto-config.yaml
    Note: the sample file comes from the e2e_cli example.
OrdererOrgs:                        # defines the orderer organization
  - Name: Orderer                   # name of the orderer organization
    Domain: example.com             # root domain of the orderer organization
    CA:
        Country: US
        Province: California
        Locality: San Francisco
    Specs:
      - Hostname: orderer           # hostname of the orderer node
PeerOrgs:
  - Name: Org1                      # name of organization 1
    Domain: org1.example.com        # root domain of organization 1
    EnableNodeOUs: true
    CA:
        Country: US
        Province: California
        Locality: San Francisco
    Template:
      Count: 2                      # number of peer nodes in organization 1
    Users:                          # number of non-admin users in organization 1
      Count: 1
  - Name: Org2
    Domain: org2.example.com
    EnableNodeOUs: true
    CA:
        Country: US
        Province: California
        Locality: San Francisco
    Template:
      Count: 2
    Users:
      Count: 1
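
For orientation, running cryptogen generate against this file yields a directory tree along the following lines (abridged to the paths that the configtx.yaml and docker-compose files below refer to):

crypto-config/
  ordererOrganizations/example.com/
    msp/
    orderers/orderer.example.com/{msp,tls}/
  peerOrganizations/
    org1.example.com/
      msp/
      peers/peer0.org1.example.com/{msp,tls}/
      users/Admin@org1.example.com/msp/
    org2.example.com/   (same layout as org1.example.com)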

The configtxgen Module

  1. Command
[root@node1 bin]# ./configtxgen --help
Usage of ./configtxgen:
  -asOrg string
        Performs the config generation as a particular organization (by name), only including values in the write set that org (likely) has privilege to set
  -channelCreateTxBaseProfile string        
        Specifies a profile to consider as the orderer system channel current state to allow modification of non-application parameters during channel create tx generation. Only valid in conjuction with 'outputCreateChannelTx'.
  -channelID string     
        The channel ID to use in the configtx
  -configPath string
        The path containing the configuration to use (if set)
  -inspectBlock string
        Prints the configuration contained in the block at the specified path
  -inspectChannelCreateTx string        // prints the configuration of a channel-creation transaction
        Prints the configuration contained in the transaction at the specified path
  -outputAnchorPeersUpdate string
        Creates an config update to update an anchor peer (works only with the default channel creation, and only for the first update)
  -outputBlock string
        The path to write the genesis block to (if set)
  -outputCreateChannelTx string
        The path to write a channel creation configtx to (if set)
  -printOrg string
        Prints the definition of an organization as JSON. (useful for adding an org to a channel manually)
  -profile string
        The profile from configtx.yaml to use for generation. (default "SampleInsecureSolo")
  -version
        Show version information
[root@node1 bin]# 

Commonly used options (a usage sketch follows the list):

  • -asOrg string : perform the generation as the named organization
  • -channelID string : channel name; if omitted, a default value is used
  • -inspectBlock string : print the configuration contained in the specified block file
  • -inspectChannelCreateTx string : print the configuration contained in the specified channel-creation transaction
  • -outputAnchorPeersUpdate string : write an anchor-peer config update transaction to the specified path
  • -outputBlock string : path to write the genesis block to
  • -outputCreateChannelTx string : path to write the channel-creation transaction to
  • -profile string : the profile in configtx.yaml to use for generation
  • -version : show version information
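
A hedged end-to-end sketch using the profile names defined in the configtx.yaml below; the channel names and output paths are placeholders:

# generate the orderer genesis block for the system channel
./configtxgen -profile TwoOrgsOrdererGenesis -channelID sys-channel -outputBlock ./channel-artifacts/genesis.block

# generate the channel-creation transaction for an application channel
./configtxgen -profile TwoOrgsChannel -channelID mychannel -outputCreateChannelTx ./channel-artifacts/channel.tx

# generate an anchor-peer update transaction for Org1
./configtxgen -profile TwoOrgsChannel -channelID mychannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -asOrg Org1MSP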
  2. Configuration file: configtx.yaml
    Note: the sample file comes from the e2e_cli example.
# Organization definitions: the orderer organization and the peer organizations referenced elsewhere in this file
Organizations:
    # orderer organization configuration
    - &OrdererOrg
        # name of the orderer organization
        Name: OrdererOrg

        # MSP ID of the orderer organization
        ID: OrdererMSP

        # path to the organization's MSP directory
        MSPDir: crypto-config/ordererOrganizations/example.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('OrdererMSP.admin')"

    # Peer organizations; add one entry per organization.
    - &Org1
        Name: Org1MSP       # organization name
        ID: Org1MSP         # organization MSP ID
        # path to the organization's MSP directory
        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('Org1MSP.admin', 'Org1MSP.peer', 'Org1MSP.client')"
            Writers:
                Type: Signature
                Rule: "OR('Org1MSP.admin', 'Org1MSP.client')"
            Admins:
                Type: Signature
                Rule: "OR('Org1MSP.admin')"
                
        # Anchor peers: used for cross-organization gossip communication (data dissemination and synchronization)
        AnchorPeers:
            - Host: peer0.org1.example.com   # access address of this organization's anchor peer
              Port: 7051                     # access port of this organization's anchor peer
    - &Org2
        Name: Org2MSP
        ID: Org2MSP
        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('Org2MSP.admin', 'Org2MSP.peer', 'Org2MSP.client')"
            Writers:
                Type: Signature
                Rule: "OR('Org2MSP.admin', 'Org2MSP.client')"
            Admins:
                Type: Signature
                Rule: "OR('Org2MSP.admin')"
        AnchorPeers:
            - Host: peer0.org2.example.com
              Port: 7051
    - &Org3
        Name: Org3MSP
        ID: Org3MSP
        MSPType: idemix
        MSPDir: crypto-config/idemix/idemix-config
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('Org3MSP.admin', 'Org3MSP.peer', 'Org3MSP.client')"
            Writers:
                Type: Signature
                Rule: "OR('Org3MSP.admin', 'Org3MSP.client')"
            Admins:
                Type: Signature
                Rule: "OR('Org3MSP.admin')"
        AnchorPeers:
            - Host: peer0.org3.example.com
              Port: 7051

# Capabilities: feature sets that the nodes on a channel must support
Capabilities:
    # channel-level capabilities: apply to, and must be supported by, both the orderer nodes and the peer nodes
    Channel: &ChannelCapabilities
        V1_3: true
    # orderer-level capabilities
    Orderer: &OrdererCapabilities
        V1_1: true
    # application-level capabilities
    Application: &ApplicationCapabilities
        V1_3: true
        V1_2: false
        V1_1: false
        
Application: &ApplicationDefaults
    Organizations:
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
    Capabilities:
        <<: *ApplicationCapabilities

# Orderer defaults: specify the consensus type, the block-batching parameters, and the addresses of the ordering service
Orderer: &OrdererDefaults

    # consensus type of the ordering service
    OrdererType: kafka
    # addresses the ordering service listens on
    Addresses:
        - orderer.example.com:7050
    # batch timeout: how long to wait before creating a block
    BatchTimeout: 2s
    BatchSize:
        # maximum number of messages in a block
        MaxMessageCount: 10
        # absolute maximum size of the serialized messages in a block
        AbsoluteMaxBytes: 98 MB
        # preferred maximum size of the serialized messages in a block
        PreferredMaxBytes: 512 KB

    # Kafka-related settings
    Kafka:
        Brokers:
            - kafka0:9092
            - kafka1:9092
            - kafka2:9092
            - kafka3:9092
    Organizations:
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
        BlockValidation:
            Type: ImplicitMeta
            Rule: "ANY Writers"
    Capabilities:
        <<: *OrdererCapabilities
Channel: &ChannelDefaults
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
    Capabilities:
        <<: *ChannelCapabilities
        
# Profiles: the entry points used by configtxgen; each profile describes a complete system or channel configuration
Profiles:
    # profile identifier for the orderer system channel (genesis block); the name is user-defined and is what the -profile option refers to
    # example: ./bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
    TwoOrgsOrdererGenesis:
        <<: *ChannelDefaults

        # Orderer section; the keyword itself must not be changed
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                # OrdererOrg is the name used in the official sample; it can be customized in production
                - *OrdererOrg

        # consortia defined in the system channel
        Consortiums:
            SampleConsortium:
                # organizations that belong to the consortium
                Organizations:
                    - *Org1
                    - *Org2
                    - *Org3

    # profile for creating an application channel; the identifier is user-defined
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2
                - *Org3
Notes on the Profiles section: it defines the structure of the whole system and of each channel. The Profiles keyword itself must not be modified, otherwise the configuration will not take effect.
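
As a hedged sketch, the artifacts produced from these profiles can be verified with the inspect options listed earlier (the paths follow the example command above and are otherwise placeholders):

# print the configuration contained in the genesis block as JSON
./configtxgen -inspectBlock ./channel-artifacts/genesis.block

# print the configuration contained in a channel-creation transaction as JSON
./configtxgen -inspectChannelCreateTx ./channel-artifacts/channel.tx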

The orderer Module

  1. Command
[root@node1 bin]# ./orderer --help
usage: orderer [<flags>] <command> [<args> ...]
Hyperledger Fabric orderer node
Flags:
  --help  Show context-sensitive help (also try --help-long and --help-man).
Commands:
  help [<command>...]
    Show help.
  start*
    Start the orderer node
  version
    Show version information
  benchmark
    Run orderer in benchmark mode
[root@node1 bin]# 

Commonly used commands:

  • help : show help information
  • start* : start the orderer node (the default command)
  • version : show version information
  • benchmark : run the orderer in benchmark mode
  2. Configuration file: docker-compose-orderer.yaml
    Note: the sample file comes from a Kafka-based production deployment of Fabric; see the related example by cnblogs author 灵龙: https://www.cnblogs.com/llongst/p/9608886.html
version: '2'
services:
  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer
    
    # environment variables
    environment:
      # general orderer settings
      - ORDERER_GENERAL_LOGLEVEL=debug                  # log level
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0           # listen address

      # how the genesis block is obtained: "provisional" (generated from a configtx.yaml profile) or "file" (read from a pre-generated block file)
      - ORDERER_GENERAL_GENESISMETHOD=file
      # path to the genesis block file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      # MSP ID of the orderer organization, as defined in the configtxgen configuration file
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      # path to the orderer's MSP directory
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp

      # TLS settings of the orderer
      # TLS switch: true enables TLS, false disables it
      - ORDERER_GENERAL_TLS_ENABLED=true
      # path to the server private key
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      # path to the server certificate
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      # path(s) to the root CA certificate(s)
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      # The ORDERER_KAFKA_RETRY_* settings control how the orderer, acting as a Kafka producer/consumer,
      # retries when Kafka is not yet up at orderer startup or becomes unavailable.
      # interval between retries in the long-retry phase
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      # total time spent retrying in the long-retry phase
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      # interval between retries in the short-retry phase
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      # total time spent retrying in the short-retry phase
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      # verbose logging of the Kafka client; its log messages appear in the orderer's log output
      - ORDERER_KAFKA_VERBOSE=true

    # working directory inside the container
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    # host paths mounted into the container
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls

    # container ports mapped to the host
    ports:
      - 7050:7050
    extra_hosts:
     - "kafka0:192.168.111.139"
     - "kafka1:192.168.111.131"
     - "kafka2:192.168.111.132"
     - "kafka3:192.168.111.140"

The peer Module

  1. Command
[root@node1 bin]# ./peer --help
Usage:
  peer [command]
Available Commands:
  chaincode   Operate a chaincode: install|instantiate|invoke|package|query|signpackage|upgrade|list.
  channel     Operate a channel: create|fetch|join|list|update|signconfigtx|getinfo.
  help        Help about any command
  logging     Logging configuration: getlevel|setlevel|getlogspec|setlogspec|revertlevels.
  node        Operate a peer node: start|status.
  version     Print fabric peer version.
Flags:
  -h, --help   help for peer
Use "peer [command] --help" for more information about a command.
  2. Configuration file: docker-compose-peer.yaml
    Note: the sample file comes from a Kafka-based production deployment of Fabric; see the related example by cnblogs author 灵龙: https://www.cnblogs.com/llongst/p/9608886.html
version: '2'
services:
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    hostname: peer0.org1.example.com
    image: hyperledger/fabric-peer
    
    # environment variables
    environment:
      # peer ID
      - CORE_PEER_ID=peer0.org1.example.com
      # peer access address
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      # chaincode listen address
      - CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
      # MSP ID of the organization this peer belongs to
      - CORE_PEER_LOCALMSPID=Org1MSP
      # Docker daemon endpoint; by default the unix domain socket is used
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # log level of the peer after startup
      - CORE_LOGGING_LEVEL=DEBUG
      # whether the organization's leader peer is chosen by dynamic leader election
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      # whether this peer is statically configured as the organization's leader; false means it is not
      - CORE_PEER_GOSSIP_ORGLEADER=false
      # address advertised to peers outside the organization; if left empty, the peer is not known to other organizations
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_PROFILE_ENABLED=true

      # TLS settings of the peer
      # TLS switch: true enables TLS, false disables it
      - CORE_PEER_TLS_ENABLED=true
      # path to the server certificate
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      # path to the server private key
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      # path to the root CA certificate
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    # working directory inside the container
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    # host paths mounted into the container
    volumes:
       - /var/run/:/host/var/run/
       - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
       - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    # container ports mapped to the host
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    extra_hosts:
      - "orderer0.example.com:192.168.152.160"
      - "orderer1.example.com:192.168.152.156"
      - "orderer2.example.com:192.168.152.157"

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    # host paths mounted into the container
    volumes:
        - /var/run/:/host/var/run/
        # map the local chaincode directory into the container
        - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/kafkapeer/chaincode/go
        - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
        - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    extra_hosts:
      - "orderer0.example.com:192.168.152.160"
      - "orderer1.example.com:192.168.152.156"
      - "orderer2.example.com:192.168.152.157"
      - "peer0.org1.example.com:192.168.152.160"
      - "peer1.org1.example.com:192.168.152.156"
      - "peer0.org2.example.com:192.168.152.157" 
      - "peer1.org2.example.com:192.168.152.161"

Extensions

zookeeper

Configuration file: docker-compose-zookeeper.yaml

version: '2'
services:
  zookeeper0:
    container_name: zookeeper0
    hostname: zookeeper0
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      # ID of this node within the ZooKeeper ensemble
      - ZOO_MY_ID=1
      # list of the servers that make up the ZooKeeper ensemble
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - "zookeeper0:192.168.152.160"
      - "zookeeper1:192.168.152.156"
      - "zookeeper2:192.168.152.157"
      - "kafka0:192.168.152.160"
      - "kafka1:192.168.152.156"
      - "kafka2:192.168.152.157"
      - "kafka3:192.168.152.161"

kafka

Configuration file: docker-compose-kafka.yaml

version: '2'
services:
  kafka0:
    container_name: kafka0
    hostname: kafka0
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      # maximum message size in bytes
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      # maximum size in bytes of a replica fetch request
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      # disable unclean (out-of-sync) leader election
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      # unique non-negative integer that identifies the broker; the value can be chosen freely but must be unique within the cluster
      - KAFKA_BROKER_ID=1
      # minimum number of in-sync replicas
      - KAFKA_MIN_INSYNC_REPLICAS=2
      # default replication factor; must not exceed the number of Kafka brokers
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      # ZooKeeper nodes that Kafka connects to
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    ports:
      - 9092:9092
    extra_hosts:
      - "zookeeper0:192.168.152.160"
      - "zookeeper1:192.168.152.156"
      - "zookeeper2:192.168.152.157"
      - "kafka0:192.168.152.160"
      - "kafka1:192.168.152.156"
      - "kafka2:192.168.152.157"
      - "kafka3:192.168.152.161"