MongoDB Replica Set Setup

A replica set is a cluster of MongoDB instances consisting of one primary server and several secondary servers. Through replication, data updates are pushed from the primary to the other instances, so that after a short delay every instance maintains an identical copy of the data set. Keeping these redundant copies makes off-site backup, read/write splitting, and automatic failover possible.

In other words, if the primary crashes, the replica set automatically promotes one of the secondaries to be the new primary. With replication in place, data remains accessible from the other members of the set even when one server goes down, and if the data on a server is corrupted or lost, a fresh copy can be rebuilt from another member.

Early MongoDB versions used a master-slave architecture, one master and one slave, much like MySQL; but the slave was read-only and could not be promoted automatically when the master failed. Master-slave mode has since been deprecated in favor of replica sets, which have one primary and multiple read-only secondaries. Members can be assigned weights (priorities), and when the primary goes down, the secondary with the highest weight is promoted. The architecture can also include an arbiter, a member that only votes in elections and stores no data. By default both reads and writes go to the primary, so to balance load you must explicitly direct reads at a chosen server.

In short, a MongoDB replica set is a master-slave cluster with automatic failover, made up of one primary node and one or more secondary nodes, similar to MySQL's MMM architecture. For more on replica sets, see the official documentation:

Official documentation:

https://docs.mongodb.com/manual/replication/

Replica set architecture diagram: (image not reproduced here)
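The weight-based failover rule described above can be sketched in Python. This is a hypothetical illustration of the selection rule only; MongoDB's real elections also consider votes, oplog freshness, and majority reachability:

```python
# Sketch of the failover rule: when the current primary becomes unreachable,
# the highest-priority healthy, data-bearing member is the preferred
# candidate to become the new primary. Hypothetical helper, not MongoDB's
# actual election code.

def pick_primary_candidate(members):
    """members: list of dicts with 'host', 'priority', 'healthy', 'arbiter_only'."""
    candidates = [m for m in members
                  if m["healthy"] and not m["arbiter_only"]]
    if not candidates:
        return None
    # Highest priority wins; ties are broken arbitrarily in this sketch.
    return max(candidates, key=lambda m: m["priority"])["host"]

members = [
    {"host": "192.168.77.128:27017", "priority": 3, "healthy": False, "arbiter_only": False},
    {"host": "192.168.77.130:27017", "priority": 2, "healthy": True,  "arbiter_only": False},
    {"host": "192.168.77.134:27017", "priority": 1, "healthy": True,  "arbiter_only": False},
]
print(pick_primary_candidate(members))  # the healthy member with the highest priority
```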

21.34 MongoDB Replica Set Setup

I am using three machines here to build the replica set:

192.168.77.128 (primary)
192.168.77.130 (secondary)
192.168.77.134 (secondary)

MongoDB is already installed on all three machines.

Setup steps:
1. Edit the configuration file on all three machines, changing or adding the following:

[root@localhost ~]# vim /etc/mongod.conf
replication:   # uncomment this line
  oplogSizeMB: 20  # add this line to set the oplog size; note the two leading spaces
  replSetName: zero  # name of the replica set, likewise indented with two spaces

Note: make sure that bindIp in each machine's configuration file also listens on the machine's own LAN IP.
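For reference, the relevant sections of /etc/mongod.conf might look like this after the edit (a sketch; adjust bindIp to each machine's own LAN address):

```yaml
# /etc/mongod.conf — relevant sections after the edit (sketch)
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.77.128   # loopback plus this machine's LAN IP
replication:
  oplogSizeMB: 20        # size of the oplog in MB
  replSetName: zero      # all members must use the same set name
```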

2. After editing, restart the MongoDB service on all three machines:

[root@localhost ~]# systemctl restart mongod.service
[root@localhost ~]# ps aux |grep mongod
mongod     2578  0.7  8.9 1034696 43592 ?       Sl   18:21   0:00 /usr/bin/mongod -f /etc/mongod.conf
root       2605  0.0  0.1 112660   964 pts/0    S+   18:21   0:00 grep --color=auto mongod
[root@localhost ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.77.134:27017    0.0.0.0:*               LISTEN      2578/mongod         
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      2578/mongod         
[root@localhost ~]#

3. Turn off the firewall on all three machines, or flush the iptables rules, so the members can reach one another on port 27017.

4. Connect to MongoDB on the primary machine by running the mongo command, then configure the replica set:

[root@localhost ~]# mongo
> use admin
switched to db admin
> config={_id:"zero",members:[{_id:0,host:"192.168.77.128:27017"},{_id:1,host:"192.168.77.130:27017"},{_id:2,host:"192.168.77.134:27017"}]}  # list the IP of each of the three machines
{
    "_id" : "zero",  # name of the replica set
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.77.128:27017"
        },
        {
            "_id" : 1,
            "host" : "192.168.77.130:27017"
        },
        {
            "_id" : 2,
            "host" : "192.168.77.134:27017"
        }
    ]
}
> rs.initiate(config)  # initialize the replica set
{
    "ok" : 1,
    "operationTime" : Timestamp(1515465317, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515465317, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
zero:PRIMARY> rs.status()  # check the status
{
    "set" : "zero",
    "date" : ISODate("2018-01-09T02:37:13.713Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1515465429, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1515465429, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1515465429, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1515465429, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.77.128:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 527,
            "optime" : {
                "ts" : Timestamp(1515465429, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-01-09T02:37:09Z"),
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1515465327, 1),
            "electionDate" : ISODate("2018-01-09T02:35:27Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.77.130:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 116,
            "optime" : {
                "ts" : Timestamp(1515465429, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1515465429, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-01-09T02:37:09Z"),
            "optimeDurableDate" : ISODate("2018-01-09T02:37:09Z"),
            "lastHeartbeat" : ISODate("2018-01-09T02:37:13.695Z"),
            "lastHeartbeatRecv" : ISODate("2018-01-09T02:37:13.661Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "192.168.77.128:27017",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.77.134:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 116,
            "optime" : {
                "ts" : Timestamp(1515465429, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1515465429, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-01-09T02:37:09Z"),
            "optimeDurableDate" : ISODate("2018-01-09T02:37:09Z"),
            "lastHeartbeat" : ISODate("2018-01-09T02:37:13.561Z"),
            "lastHeartbeatRecv" : ISODate("2018-01-09T02:37:13.660Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "192.168.77.128:27017",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1515465429, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515465429, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
zero:PRIMARY>

Above, watch the stateStr of each of the three machines: the set is healthy when the primary's stateStr is PRIMARY and the two secondaries' stateStr is SECONDARY.
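Checking the member states by eye works fine for three nodes; as a sketch, the same check can be automated against the document that rs.status() returns (the input below is a trimmed, hypothetical status document):

```python
# Given an rs.status()-style document, verify that exactly one member is
# PRIMARY and all the others are SECONDARY.

def check_member_states(status):
    states = {m["name"]: m["stateStr"] for m in status["members"]}
    primaries = [n for n, s in states.items() if s == "PRIMARY"]
    secondaries = [n for n, s in states.items() if s == "SECONDARY"]
    healthy = len(primaries) == 1 and len(secondaries) == len(states) - 1
    return healthy, states

status = {"members": [
    {"name": "192.168.77.128:27017", "stateStr": "PRIMARY"},
    {"name": "192.168.77.130:27017", "stateStr": "SECONDARY"},
    {"name": "192.168.77.134:27017", "stateStr": "SECONDARY"},
]}
ok, states = check_member_states(status)
print(ok)  # True when the set looks healthy
```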

If the two secondaries show "stateStr" : "STARTUP", run the following:

> config={_id:"zero",members:[{_id:0,host:"192.168.77.128:27017"},{_id:1,host:"192.168.77.130:27017"},{_id:2,host:"192.168.77.134:27017"}]}
> rs.reconfig(config)

Then check the status again with rs.status() and make sure the secondaries have moved to SECONDARY.

21.35 MongoDB Replica Set Testing

1. Create a database and a collection on the primary:

zero:PRIMARY> use testdb  # switch to (and thereby create) the database
switched to db testdb
zero:PRIMARY> db.test.insert({AccountID:1,UserName:"zero",password:"123456"})  # create the collection and insert one document
WriteResult({ "nInserted" : 1 })
zero:PRIMARY> show dbs  # list all databases
admin   0.000GB
config  0.000GB
local   0.000GB
testdb  0.000GB
zero:PRIMARY> show tables  # list the collections in the current database
test
zero:PRIMARY>
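A simplified model of how the insert above reaches the secondaries: the primary records each write in its oplog, and each secondary tails the oplog and re-applies the operations to its own copy of the data. This is a toy illustration, not the real oplog format beyond the field names shown:

```python
# Toy model of oplog-based replication: apply insert ("i") operations from
# an oplog to a dict that stands in for a secondary's data files.

def apply_op(db, op):
    if op["op"] == "i":                      # "i" = insert
        ns = op["ns"]                        # namespace, e.g. "testdb.test"
        db.setdefault(ns, []).append(op["o"])
    return db

oplog = [{"op": "i", "ns": "testdb.test",
          "o": {"AccountID": 1, "UserName": "zero", "password": "123456"}}]

secondary = {}
for entry in oplog:
    apply_op(secondary, entry)
print(secondary["testdb.test"][0]["UserName"])  # zero
```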

2. Then check on a secondary whether the data has been replicated from the primary:

[root@localhost ~]# mongo
zero:SECONDARY> show dbs
2018-01-09T18:46:09.959+0800 E QUERY    [thread1] Error: listDatabases failed:{
    "operationTime" : Timestamp(1515466399, 1),
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515466399, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:65:1
shellHelper.show@src/mongo/shell/utils.js:813:19
shellHelper@src/mongo/shell/utils.js:703:15
@(shellhelp2):1:1
zero:SECONDARY> rs.slaveOk()  # run this command if you hit the error above
zero:SECONDARY> show dbs  # now the command succeeds
admin   0.000GB
config  0.000GB
local   0.000GB
testdb  0.000GB
zero:SECONDARY> use testdb
switched to db testdb
zero:SECONDARY> show tables
test
zero:SECONDARY>

As shown above, the data has been replicated to the secondary successfully.
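As noted in the introduction, reads go to the primary by default; drivers are pointed at secondaries through the connection string's readPreference option. A minimal sketch of assembling such a URI from this article's hosts (the helper name is made up; the URI options themselves are standard MongoDB connection-string options):

```python
# Build a replica set connection URI that lets a driver route reads to
# secondaries instead of hand-picking a read server.

def build_replica_set_uri(hosts, replset, read_pref="secondaryPreferred"):
    return "mongodb://{}/?replicaSet={}&readPreference={}".format(
        ",".join(hosts), replset, read_pref)

uri = build_replica_set_uri(
    ["192.168.77.128:27017", "192.168.77.130:27017", "192.168.77.134:27017"],
    "zero")
print(uri)
```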

Changing Replica Set Weights and Simulating a Primary Outage

The rs.config() command shows each machine's weight:

zero:PRIMARY> rs.config()
{
    "_id" : "zero",
    "version" : 1,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.77.128:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "192.168.77.130:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.77.134:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5a542a65e491a43160eb92f0")
    }
}
zero:PRIMARY>

The priority value is the member's weight; it defaults to 1 for every member.
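The weight change performed later in this article with rs.reconfig() amounts to editing the priority fields of the config document and submitting it with a higher version number; the shell helper normally bumps the version for you, but it is modeled explicitly in this plain-dict sketch:

```python
# Sketch of a priority reconfiguration: set per-member priorities on a copy
# of the config document and increment its version. Plain-dict model, not a
# live reconfig against a server.

def set_priorities(cfg, priorities):
    for member, prio in zip(cfg["members"], priorities):
        member["priority"] = prio
    cfg["version"] += 1   # a new config must carry a higher version
    return cfg

cfg = {"_id": "zero", "version": 1, "members": [
    {"_id": 0, "host": "192.168.77.128:27017", "priority": 1},
    {"_id": 1, "host": "192.168.77.130:27017", "priority": 1},
    {"_id": 2, "host": "192.168.77.134:27017", "priority": 1},
]}
cfg = set_priorities(cfg, [3, 2, 1])
print(cfg["members"][0]["priority"], cfg["version"])  # 3 2
```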

Add a firewall rule to block communication, simulating a crash of the primary:

# note: run this on the primary
[root@localhost ~]# iptables -I INPUT -p tcp --dport 27017 -j DROP

Then check the status on one of the secondaries:

zero:SECONDARY> rs.status()
{
    "set" : "zero",
    "date" : ISODate("2018-01-09T14:06:24.127Z"),
    "myState" : 1,
    "term" : NumberLong(4),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1515506782, 1),
            "t" : NumberLong(4)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1515506782, 1),
            "t" : NumberLong(4)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1515506782, 1),
            "t" : NumberLong(4)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1515506782, 1),
            "t" : NumberLong(4)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.77.128:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2018-01-09T14:06:20.243Z"),
            "lastHeartbeatRecv" : ISODate("2018-01-09T14:06:23.491Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "Couldn't get a connection within the time limit",
            "configVersion" : -1
        },
        {
            "_id" : 1,
            "name" : "192.168.77.130:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1010,
            "optime" : {
                "ts" : Timestamp(1515506782, 1),
                "t" : NumberLong(4)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1515506782, 1),
                "t" : NumberLong(4)
            },
            "optimeDate" : ISODate("2018-01-09T14:06:22Z"),
            "optimeDurableDate" : ISODate("2018-01-09T14:06:22Z"),
            "lastHeartbeat" : ISODate("2018-01-09T14:06:23.481Z"),
            "lastHeartbeatRecv" : ISODate("2018-01-09T14:06:23.178Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "192.168.77.134:27017",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.77.134:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1250,
            "optime" : {
                "ts" : Timestamp(1515506782, 1),
                "t" : NumberLong(4)
            },
            "optimeDate" : ISODate("2018-01-09T14:06:22Z"),
            "electionTime" : Timestamp(1515506731, 1),
            "electionDate" : ISODate("2018-01-09T14:05:31Z"),
            "configVersion" : 1,
            "self" : true
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1515506782, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515506782, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
zero:PRIMARY>

As shown above, the stateStr of 192.168.77.128 changed to not reachable/healthy, and 192.168.77.134 was automatically promoted to primary, its stateStr now reading PRIMARY. Because all the weights are equal, which secondary gets promoted is somewhat random.

Next we assign each machine a weight so that the one with the highest weight is promoted automatically.
1. First delete the firewall rule on 192.168.77.128:

[root@localhost ~]# iptables -D INPUT -p tcp --dport 27017 -j DROP

2. Back on 192.168.77.134, set the weight of each member:

zero:PRIMARY> cfg = rs.conf()
zero:PRIMARY> cfg.members[0].priority = 3
3
zero:PRIMARY> cfg.members[1].priority = 2
2
zero:PRIMARY> cfg.members[2].priority = 1
zero:PRIMARY> rs.reconfig(cfg)
{
    "ok" : 1,
    "operationTime" : Timestamp(1515507322, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515507322, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
zero:PRIMARY>

3. At this point 192.168.77.128 should have taken over as primary again; run rs.config() on 192.168.77.128 to verify:

zero:PRIMARY> rs.config()
{
    "_id" : "zero",
    "version" : 2,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.77.128:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 3,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "192.168.77.130:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 2,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.77.134:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5a542a65e491a43160eb92f0")
    }
}
zero:PRIMARY>

As shown above, each machine's weight has changed and 192.168.77.128 has automatically switched back to the primary role. If 192.168.77.128 goes down again, 192.168.77.130 will be the candidate primary, since its weight is the highest apart from 192.168.77.128's.




This article is reposted from the ZeroOne01 51CTO blog; original link: http://blog.51cto.com/zero01/2059033. Please contact the original author before reprinting.
