Building a MongoDB Sharded Cluster

1. Basic Concepts

A MongoDB sharded cluster is made up of five instance types:

  • a) Config server (config): stores the cluster metadata, i.e. which shard holds which data. As a rule of thumb, 1 MB of config-server space covers roughly 200 MB of data on the shard nodes.
  • b) Shard primary (shard): stores data and serves queries.
  • c) Shard secondary (replication): keeps a copy of the data; when the primary goes down, a secondary is elected as the new primary.
  • d) Arbiter (arbiter): takes part in electing a new primary when a shard's primary fails; it stores no data itself.
  • e) Router (mongos): handles all external requests. It fetches metadata from the config servers, reads the data from the right shard, and returns the result to the client. It stores nothing and only routes requests; since it uses little memory and few resources, it can be deployed on the application servers.
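The 1 MB : 200 MB rule of thumb above can be turned into a quick capacity estimate. A rough sketch (the ratio is this document's approximation, not an official sizing formula, and the 1 TB figure is a made-up example):

```shell
# Estimate config-server metadata space from planned data size,
# using the ~1 MB metadata per 200 MB data rule of thumb quoted above.
data_gb=1000                       # assumed planned data volume: 1 TB
echo $(( data_gb * 1024 / 200 ))   # estimated metadata size in MB
```

For 1 TB of sharded data this estimates about 5 GB of config-server space, which shows why the config servers can be small machines.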

2. Environment Preparation

Disable the firewall and SELinux

systemctl stop firewalld.service

systemctl disable firewalld.service 

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Set the hostnames and configure the hosts file (steps omitted here).

Install a Java environment (optional: MongoDB itself does not require Java, so keep this step only if other tooling on these hosts needs it)

yum install java -y

Create a regular user

useradd mongo

echo 123456 | passwd --stdin mongo

Raise the resource limits

vim /etc/security/limits.conf

mongo    soft  nproc  65535

mongo    hard  nproc  65535

mongo    soft  nofile 81920

mongo    hard  nofile 81920
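Changes to limits.conf only apply to new sessions. After re-logging in as the mongo user, a quick check of what the shell actually sees (the values printed depend on your system):

```shell
# Print the per-process limits in effect for the current shell;
# after re-logging in as mongo these should match limits.conf.
ulimit -u   # max user processes (nproc)
ulimit -n   # max open files (nofile)
```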

Disable transparent huge pages (THP)

vi /etc/rc.local   

# append the following at the end of the file

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then

  echo never > /sys/kernel/mm/transparent_hugepage/enabled

fi

if test -f /sys/kernel/mm/transparent_hugepage/defrag; then

  echo never > /sys/kernel/mm/transparent_hugepage/defrag

fi

chmod +x /etc/rc.d/rc.local   # on systemd-based systems rc.local only runs at boot if it is executable

source /etc/rc.local

# confirm the setting took effect (the default is "always")

cat /sys/kernel/mm/transparent_hugepage/enabled    

always madvise [never]

cat /sys/kernel/mm/transparent_hugepage/defrag

always madvise [never]
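The active mode is the token in square brackets. A small text-processing helper (pure shell; works even on a machine without THP, since it only parses the string format) can pull it out for use in provisioning scripts:

```shell
# Extract the active THP mode from the kernel's "always madvise [never]" format.
thp_active() { grep -o '\[[a-z]*\]' | tr -d '[]'; }
echo 'always madvise [never]' | thp_active   # prints: never
```

In a real check you would feed it `cat /sys/kernel/mm/transparent_hugepage/enabled` instead of the literal string.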

3. Installing and Deploying MongoDB

Perform all of the following steps as the mongo user.

Host role assignment

| Shard / port (role) | mongo01 | mongo02 | mongo03 |
| --- | --- | --- | --- |
| config | 27018 | 27018 | 27018 |
| shard1 | 27019 (primary) | 27019 (secondary) | 27019 (arbiter) |
| shard2 | 27020 (arbiter) | 27020 (primary) | 27020 (secondary) |
| shard3 | 27021 (secondary) | 27021 (arbiter) | 27021 (primary) |
| mongos | 27017 | 27017 | 27017 |

Download the software package

wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.4.4.tgz

tar -xzvf mongodb-linux-x86_64-3.4.4.tgz

cd mongodb-linux-x86_64-3.4.4/

mkdir {data,logs,conf}
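mongod does not create its dbPath for you, so each role also needs its own data subdirectory before startup. A sketch (a temp directory stands in here for the real /home/mongo/mongodb-linux-x86_64-3.4.4 install directory):

```shell
# Pre-create one data directory per role plus logs/ and conf/.
base=$(mktemp -d)   # stand-in for the real install directory
for d in data/config data/shard1 data/shard2 data/shard3 logs conf; do
  mkdir -p "$base/$d"
done
ls "$base/data"     # lists: config shard1 shard2 shard3
```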

config.yml

The config servers must be started and configured before the mongos routers, and the shard nodes must be up before the cluster is initialized. "Initialization" here means adding each shard to the cluster once mongos and the config servers are ready.

The configuration is the same on every host; just adjust the IP and port.

sharding:

 clusterRole: configsvr

replication:

 replSetName: chinadaas

 # identical on all config nodes

systemLog:

 destination: file

 path: "/home/mongo/mongodb-linux-x86_64-3.4.4/logs/config.log"

 logAppend: true

 logRotate: rename

net:

 bindIp: 192.168.99.251

 port: 27018

storage:

 dbPath: "/home/mongo/mongodb-linux-x86_64-3.4.4/data/config"

processManagement:

 fork: true

mongos.yml

The configuration is the same on every host; adjust the IP and port.

sharding:

 configDB: chinadaas/192.168.99.251:27018,192.168.99.252:27018,192.168.99.253:27018

# the config server replica set

systemLog:

 destination: file

 path: "/home/mongo/mongodb-linux-x86_64-3.4.4/logs/mongos.log"

 logAppend: true

net:

 bindIp: 192.168.99.251,127.0.0.1

 port: 27017

processManagement:

 fork: true

shard1.yml (primary on this host)

The primary and secondary shard members use the same configuration file, but adjust replSetName, port, log path, and storage path to match the role table.

sharding:

 clusterRole: shardsvr

replication:

 replSetName: shard1

systemLog:

 destination: file

 logAppend: true

 logRotate: rename

 path: /home/mongo/mongodb-linux-x86_64-3.4.4/logs/shard1.log

processManagement:

 fork: true

net:

 bindIp: 192.168.99.251

 port: 27019

 http:

  enabled: false

 maxIncomingConnections: 65535

operationProfiling:

 mode: slowOp

 slowOpThresholdMs: 100

storage:

 dbPath: /home/mongo/mongodb-linux-x86_64-3.4.4/data/shard1

 wiredTiger:

  engineConfig:

   cacheSizeGB: 40

   directoryForIndexes: true

  indexConfig:

   prefixCompression: true

 directoryPerDB: true

setParameter:

 replWriterThreadCount: 64
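A note on `cacheSizeGB: 40`: WiredTiger's default cache is the larger of 50% of (RAM − 1 GB) and 256 MB, and with three shard mongods per host (per the role table) each instance should get well under that budget or they will compete for memory. A hypothetical sizing calculation (the 128 GiB host RAM is an assumption for illustration, not from this document):

```shell
# Per-instance cache when several mongods share one host:
# take the default-style budget (RAM - 1) / 2 and split it across instances.
ram_gb=128    # assumed host RAM in GiB
instances=3   # shard mongods per host, per the role table
echo $(( (ram_gb - 1) / 2 / instances ))   # GB of cache per instance
```

On such a host this suggests roughly 21 GB per shard mongod, so 40 GB would only be safe on a machine with considerably more RAM.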

shard2.yml (arbiter on this host)

The arbiter uses the same shard configuration; just mind the port.

sharding:

 clusterRole: shardsvr

replication:

 replSetName: shard2

systemLog:

 destination: file

 logAppend: true

 logRotate: rename

 path: /home/mongo/mongodb-linux-x86_64-3.4.4/logs/shard2.log

processManagement:

 fork: true

net:

 bindIp: 192.168.99.251

 port: 27020

operationProfiling:

 mode: slowOp

 slowOpThresholdMs: 100

storage:

 dbPath: /home/mongo/mongodb-linux-x86_64-3.4.4/data/shard2

Assign the shard roles on each host according to the role table above.

Once the roles are assigned, start the config and shard processes:

mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/config.yml

mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/shard1.yml

mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/shard2.yml

mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/shard3.yml
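The four start commands follow one pattern, so a small loop can generate them (this only prints the commands; pipe the output to `sh` on a host where mongod is installed to actually run them):

```shell
# Print the start command for each mongod role on this host.
prefix=/home/mongo/mongodb-linux-x86_64-3.4.4
for role in config shard1 shard2 shard3; do
  echo "mongod -f $prefix/conf/$role.yml"
done
```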

Initialize the cluster

mongo --host 192.168.99.251 --port 27018
> rs.initiate( {
    _id: "chinadaas",
    configsvr: true,
    members: [
      { _id: 0, host: "192.168.99.251:27018" },
      { _id: 1, host: "192.168.99.252:27018" },
      { _id: 2, host: "192.168.99.253:27018" }
    ]
  });

> rs.status()

Configure the shard replica sets

shard1

mongo --host 192.168.99.251 --port 27019

> rs.initiate(
    { _id: "shard1", members: [
      { _id: 0, host: "192.168.99.251:27019" },
      { _id: 1, host: "192.168.99.252:27019" },
      { _id: 2, host: "192.168.99.253:27019", arbiterOnly: true }
    ]
  }
);

> rs.status()

shard2

mongo --host 192.168.99.252 --port 27020

> rs.initiate(
    { _id: "shard2", members: [
      { _id: 0, host: "192.168.99.252:27020" },
      { _id: 1, host: "192.168.99.253:27020" },
      { _id: 2, host: "192.168.99.251:27020", arbiterOnly: true }
    ]
  }
);

> rs.status()

shard3

mongo --host 192.168.99.253 --port 27021

> rs.initiate(
    { _id: "shard3", members: [
      { _id: 0, host: "192.168.99.253:27021" },
      { _id: 1, host: "192.168.99.251:27021" },
      { _id: 2, host: "192.168.99.252:27021", arbiterOnly: true }
    ]
  }
);

> rs.status()

Start mongos, then insert data to check the state of each shard

mongos -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/mongos.yml

mongo --port 27017

> use admin

> db.runCommand( { addshard : "shard1/192.168.99.251:27019,192.168.99.252:27019,192.168.99.253:27019"});

> db.runCommand( { addshard : "shard2/192.168.99.251:27020,192.168.99.252:27020,192.168.99.253:27020"});

> db.runCommand( { addshard : "shard3/192.168.99.251:27021,192.168.99.252:27021,192.168.99.253:27021"});

> sh.status()
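sh.status() only shows the cluster topology; to actually watch data spread across the shards, you still need to enable sharding for a database and shard a collection. A sketch of such a session in the mongos shell (testdb and its users collection are made-up names for illustration; a hashed _id shard key is one common way to get an even spread):

```
> sh.enableSharding("testdb")
> sh.shardCollection("testdb.users", { _id: "hashed" })
> use testdb
> for (var i = 0; i < 10000; i++) { db.users.insert({ name: "user" + i }) }
> db.users.getShardDistribution()
```

getShardDistribution() prints per-shard document and chunk counts, which is a quicker check of the data spread than reading the full sh.status() output.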