Environment
Item | Detail |
---|---|
OS | CentOS7 |
ZK Version | 3.5.8 |
JDK Version | 1.8+ (anything above 1.7 works) |
IP Info | 192.168.126.133 |
Actually I only have one server …
So this is a pseudo-cluster … [it works exactly the same as deploying across multiple servers; I just don't have that many~]
JDK
ZooKeeper is written in Java, so of course a JDK is required ~
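If a JDK is not installed yet, one option on CentOS 7 is the OpenJDK package (a sketch; any JDK 1.8+ will do):

```bash
# Install OpenJDK 8 on CentOS 7 (matches the 1.8.0_262 build shown below)
yum install -y java-1.8.0-openjdk
```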
[root@localhost conf]# java -version
openjdk version "1.8.0_262"
OpenJDK Runtime Environment (build 1.8.0_262-b10)
OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode)
[root@localhost conf]#
Overview
Cluster architecture and roles
- Leader: handles all transaction (write) requests and can serve reads as well; a cluster has exactly one Leader
- Follower: serves read requests only (writes are forwarded to the Leader) and takes part in elections. As a Leader candidate, when the Leader goes down a Follower joins the new election and may become the new Leader
- Observer: serves read requests only and does not take part in elections, so it adds read capacity without affecting the quorum
Installation directory
Everything lives under /root/zkcluster/zk3.5.8Cluster (bin/, conf/, data/).
Deployment in ten steps
- Step 1: set up the Java environment
- Step 2: download and extract ZooKeeper
- Step 3: copy zoo_sample.cfg to zoo1.cfg
- Step 4: edit the config file zoo1.cfg
- Step 5: make three copies of zoo1.cfg named zoo2.cfg, zoo3.cfg, and zoo4.cfg, changing dataDir and clientPort in each (on a single host they just have to differ from one another)
- Step 6: assign server IDs: create the four directories ${ZK_HOME}/data/zk1, ${ZK_HOME}/data/zk2, ${ZK_HOME}/data/zk3, ${ZK_HOME}/data/zk4, and in each create a file named myid containing that instance's server id (1, 2, 3, 4)
- Step 7: start the 4 nodes (steps 3 to 7 are rolled into one script sketch right after this list)
./bin/zkServer.sh start ./conf/zoo1.cfg
./bin/zkServer.sh start ./conf/zoo2.cfg
./bin/zkServer.sh start ./conf/zoo3.cfg
./bin/zkServer.sh start ./conf/zoo4.cfg
- Step 8: check the cluster status
./bin/zkServer.sh status ./conf/zoo1.cfg
./bin/zkServer.sh status ./conf/zoo2.cfg
./bin/zkServer.sh status ./conf/zoo3.cfg
./bin/zkServer.sh status ./conf/zoo4.cfg
- Step 9: connect with a client
./zkCli.sh -server 192.168.126.133:2181,192.168.126.133:2182,192.168.126.133:2183,192.168.126.133:2184
- Step 10: view the cluster configuration
[zk: 192.168.126.133:2181,192.168.126.133:2182,192.168.126.133:2183,192.168.126.133:2184(CONNECTED) 0] get /zookeeper/config
server.1=192.168.126.133:2188:3188:participant
server.2=192.168.126.133:2189:3189:participant
server.3=192.168.126.133:2190:3190:participant
server.4=192.168.126.133:2191:3191:observer
version=0
[zk: 192.168.126.133:2181,192.168.126.133:2182,192.168.126.133:2183,192.168.126.133:2184(CONNECTED) 1]
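As promised above, here are steps 3 to 7 condensed into one loop. A minimal sketch, assuming the install lives at /root/zkcluster/zk3.5.8Cluster as in the configs below:

```bash
#!/usr/bin/env bash
# Sketch: generate the four pseudo-cluster configs and myid files, then start all nodes.
ZK_HOME=/root/zkcluster/zk3.5.8Cluster
cd "$ZK_HOME"

for i in 1 2 3 4; do
  mkdir -p "data/zk$i"
  echo "$i" > "data/zk$i/myid"          # server id, must match server.$i below
  cat > "conf/zoo$i.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$ZK_HOME/data/zk$i
clientPort=218$i
server.1=192.168.126.133:2188:3188
server.2=192.168.126.133:2189:3189
server.3=192.168.126.133:2190:3190
server.4=192.168.126.133:2191:3191:observer
EOF
  ./bin/zkServer.sh start "./conf/zoo$i.cfg"
done
```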
Now let's look at the details.
Config files
node1
[root@localhost conf]# cat zoo1.cfg | grep -v "#"
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zkcluster/zk3.5.8Cluster/data/zk1
clientPort=2181
server.1=192.168.126.133:2188:3188
server.2=192.168.126.133:2189:3189
server.3=192.168.126.133:2190:3190
server.4=192.168.126.133:2191:3191:observer
node2
[root@localhost conf]# cat zoo2.cfg | grep -v "#"
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zkcluster/zk3.5.8Cluster/data/zk2
clientPort=2182
server.1=192.168.126.133:2188:3188
server.2=192.168.126.133:2189:3189
server.3=192.168.126.133:2190:3190
server.4=192.168.126.133:2191:3191:observer
node3
[root@localhost conf]# cat zoo3.cfg | grep -v "#"
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zkcluster/zk3.5.8Cluster/data/zk3
clientPort=2183
server.1=192.168.126.133:2188:3188
server.2=192.168.126.133:2189:3189
server.3=192.168.126.133:2190:3190
server.4=192.168.126.133:2191:3191:observer
node4 (Observer node)
[root@localhost conf]# cat zoo4.cfg | grep -v "#"
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zkcluster/zk3.5.8Cluster/data/zk4
clientPort=2184
server.1=192.168.126.133:2188:3188
server.2=192.168.126.133:2189:3189
server.3=192.168.126.133:2190:3190
server.4=192.168.126.133:2191:3191:observer
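One note on node4: besides the :observer suffix on the server.4 line, the ZooKeeper Observers guide also recommends declaring the role in the observer's own config. It is not present above, so treat this as an optional extra line for zoo4.cfg:

```properties
# zoo4.cfg only: declare this instance's role explicitly
peerType=observer
```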
Configuration parameters explained
- tickTime: the basic time unit in ZooKeeper; most runtime intervals are expressed as multiples of tickTime
- initLimit: how long the Leader waits for a Follower to start up and finish its initial data sync. During startup a Follower connects to the Leader and syncs data to determine the state from which it can serve; the Leader allows initLimit ticks for this, e.g. tickTime=2000 with initLimit=10 gives 10 × 2000 ms = 20 s
- syncLimit: the maximum heartbeat delay, in ticks, allowed between the Leader and a Follower
- dataDir: the directory where ZooKeeper stores its data; by default the transaction log is written here as well
- dataLogDir: the directory for the transaction log; defaults to dataDir
- clientPort: the port ZooKeeper listens on for client connections
- server.A=B:C:D:E
  A is a number, the server's unique id; B is the server's IP address; C is the port this server uses to exchange data with the cluster's Leader; D is the leader-election port; E is the optional observer flag
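Taking the observer line from the configs above as a concrete reading:

```properties
server.4=192.168.126.133:2191:3191:observer
# A=4:               server id, matches data/zk4/myid
# B=192.168.126.133: server IP
# C=2191:            port for data exchange with the Leader
# D=3191:            leader-election port
# E=observer:        role flag, omitted for participants
```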
myid: the node's unique id
The myid files of the 4 nodes contain the server ids assigned in Step 6, i.e. 1 through 4.
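A quick way to verify them (expected output: 1, 2, 3, 4, one per file):

```bash
# Each myid file holds only the server id written in Step 6
for i in 1 2 3 4; do cat /root/zkcluster/zk3.5.8Cluster/data/zk$i/myid; done
```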
Starting the nodes
[root@localhost conf]# pwd
/root/zkcluster/zk3.5.8Cluster/conf
[root@localhost conf]#
[root@localhost conf]# ../bin/zkServer.sh start ./zoo1.cfg
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: ./zoo1.cfg
Starting zookeeper ... STARTED
[root@localhost conf]# ../bin/zkServer.sh start ./zoo2.cfg
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: ./zoo2.cfg
Starting zookeeper ... STARTED
[root@localhost conf]# ../bin/zkServer.sh start ./zoo3.cfg
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: ./zoo3.cfg
Starting zookeeper ... STARTED
[root@localhost conf]# ../bin/zkServer.sh start ./zoo4.cfg
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: ./zoo4.cfg
Starting zookeeper ... STARTED
[root@localhost conf]#
Checking node status
[root@localhost conf]# ../bin/zkServer.sh status ./zoo1.cfg
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: ./zoo1.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@localhost conf]#
[root@localhost conf]# ../bin/zkServer.sh status ./zoo2.cfg
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: ./zoo2.cfg
Client port found: 2182. Client address: localhost.
Mode: leader
[root@localhost conf]# ../bin/zkServer.sh status ./zoo3.cfg
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: ./zoo3.cfg
Client port found: 2183. Client address: localhost.
Mode: follower
[root@localhost conf]# ../bin/zkServer.sh status ./zoo4.cfg
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: ./zoo4.cfg
Client port found: 2184. Client address: localhost.
Mode: observer
[root@localhost conf]#
Or connect with a client and look at the config:
get /zookeeper/config
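A client-free alternative is the srvr four-letter command, assuming nc is available (in 3.5.x, srvr is the only 4lw command whitelisted by default; others need 4lw.commands.whitelist):

```bash
# Ask each instance for its mode (leader / follower / observer)
for p in 2181 2182 2183 2184; do echo srvr | nc 192.168.126.133 $p | grep Mode; done
```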
Done, everything checks out. That's all.
Wrap-up
Alright, that's it for this one. Go set up a cluster of your own (and don't be stingy like me, cramming everything onto one server).