Adding an Arbitrator Node to a Percona XtraDB Cluster

Galera Arbitrator is a member of a Percona XtraDB Cluster that participates in voting, intended for the case where you run only a small number of servers (typically two) and do not want to add more resources just to keep quorum. The arbitrator does not need a dedicated server: it can be installed on a machine that runs other applications, as long as that machine has a reliable network connection to the cluster. It is a full voting member of the cluster, but it does not take part in actual replication (although it receives the same replication traffic as the other nodes), and it is not included in flow-control calculations. This article walks through adding an arbitrator node to an existing cluster.
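The value of the arbitrator lies in simple quorum arithmetic. A sketch of the two scenarios, assuming the standard Galera majority rule (a partition stays Primary only if it holds more than half of the last known membership):

2 data nodes, no garbd : losing one node leaves 1/2 = 50%       -> no majority, the survivor drops to non-Primary
2 data nodes + garbd   : losing one data node leaves 2/3 ≈ 67%  -> majority kept, the cluster stays Primary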

1. PXC Cluster Environment

192.168.1.248 CentOS7.4
192.168.1.249 CentOS7.4
192.168.1.253 CentOS6.7 (new host, to be used as the arbitrator node)
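If a host firewall is active on any of these machines, the Galera group-communication port (4567/tcp) must be reachable between all three members. A minimal sketch, assuming iptables on the CentOS 6 host and firewalld on the CentOS 7 hosts (adjust to whatever firewall you actually run):

## CentOS 6 (iptables)
# iptables -I INPUT -p tcp --dport 4567 -j ACCEPT
# service iptables save

## CentOS 7 (firewalld)
# firewall-cmd --permanent --add-port=4567/tcp
# firewall-cmd --reload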

The arbitrator node is added to the existing cluster as shown in the diagram below.
[Figure: Percona XtraDB Cluster with an added arbitrator node]

2. Adding the Arbitrator Node

# yum install Percona-XtraDB-Cluster-garbd-57
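If yum cannot find the package, the Percona repository is probably not configured on the new host yet. A minimal sketch of setting it up first (the URL is the usual location of the percona-release package; verify it against Percona's current documentation):

# yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm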

# rpm -ql Percona-XtraDB-Cluster-garbd-57
/etc/init.d/garb      ## init script
/etc/sysconfig/garb   ## configuration file
/usr/bin/garbd
/usr/share/doc/percona-xtradb-cluster-garbd-3/COPYING
/usr/share/doc/percona-xtradb-cluster-garbd-3/README
/usr/share/man/man8/garbd.8.gz
/var/lib/galera

# vim /etc/sysconfig/garb 

GALERA_NODES="192.168.1.248:4567, 192.168.1.249:4567, 192.168.1.253:4567"
GALERA_GROUP=pxc-cluster

## After editing, delete all other content left in the stock file (it ships with a placeholder that the init script checks for); otherwise startup fails with:
Garbd config /etc/sysconfig/garb is not configured yet    [FAILED]
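For reference, a minimal working /etc/sysconfig/garb after cleanup could look like the following. GALERA_GROUP must match the wsrep_cluster_name configured on the PXC nodes; the commented variables are optional settings normally present in the stock file:

GALERA_NODES="192.168.1.248:4567, 192.168.1.249:4567, 192.168.1.253:4567"
GALERA_GROUP="pxc-cluster"
# GALERA_OPTIONS=""               ## extra Galera provider options, if needed
# LOG_FILE="/var/log/garbd.log"   ## optional log file; messages go to syslog by default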

# /etc/init.d/garb start
Starting /usr/bin/garbd:                                  [OK]

# /etc/init.d/garb status
garbd (pid 5198) is running...
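As a quick sanity check on the arbitrator host, confirm that garbd is listening on the group-communication port (assuming net-tools is installed):

# netstat -tnlp | grep 4567

The output should show garbd bound to 0.0.0.0:4567, which matches the "listening at tcp://0.0.0.0:4567" line in the log below.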

3. Verifying the Cluster

# tail -fn 50 /var/log/messages
Apr 13 09:13:22 ydq4 garbd[7854]: CRC-32C: using hardware acceleration.
Apr 13 09:13:22 ydq4 garbd[7854]: Read config:
    daemon:  1
    name:    garb
    address: gcomm://192.168.1.248:4567,192.168.1.249:4567,192.168.1.253:4567
    group:   pxc-cluster
    sst:     trivial
    donor:
    options: gcs.fc_limit=9999999; gcs.fc_factor=1.0; gcs.fc_master_slave=yes
    cfg:
    log:
Apr 13 09:13:22 ydq4 garbd[7856]: Using CRC-32C for message checksums.
Apr 13 09:13:22 ydq4 garbd[7856]: gcomm thread scheduling priority set to other:0 
Apr 13 09:13:22 ydq4 garbd[7856]: Fail to access the file (./gvwstate.dat) error (No such file or directory). 
It is possible if node is booting for first time or re-booting after a graceful shutdown
Apr 13 09:13:22 ydq4 garbd[7856]: Restoring primary-component from disk failed. Either node is booting for 
first time or re-booting after a graceful shutdown
Apr 13 09:13:22 ydq4 garbd[7856]: GMCast version 0
Apr 13 09:13:22 ydq4 garbd[7856]: (dc339071, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
Apr 13 09:13:22 ydq4 garbd[7856]: (dc339071, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
Apr 13 09:13:22 ydq4 garbd[7856]: EVS version 0
Apr 13 09:13:22 ydq4 garbd[7856]: gcomm: connecting to group 'pxc-cluster', peer '192.168.1.248:4567,192.168.1.249:4567,192.168.1.253:4567'
Apr 13 09:13:22 ydq4 garbd[7856]: (dc339071, 'tcp://0.0.0.0:4567') connection established to dc339071 tcp://192.168.1.253:4567
Apr 13 09:13:22 ydq4 garbd[7856]: (dc339071, 'tcp://0.0.0.0:4567') address 'tcp://192.168.1.253:4567' points to own listening address, blacklisting
Apr 13 09:13:22 ydq4 garbd[7856]: (dc339071, 'tcp://0.0.0.0:4567') connection established to 66d23117 tcp://192.168.1.249:4567
Apr 13 09:13:22 ydq4 garbd[7856]: (dc339071, 'tcp://0.0.0.0:4567') connection established to a643db62 tcp://192.168.1.248:4567
Apr 13 09:13:22 ydq4 garbd[7856]: (dc339071, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: 
Apr 13 09:13:23 ydq4 garbd[7856]: declaring 66d23117 at tcp://192.168.1.249:4567 stable
Apr 13 09:13:23 ydq4 garbd[7856]: declaring a643db62 at tcp://192.168.1.248:4567 stable
Apr 13 09:13:23 ydq4 garbd[7856]: Node 66d23117 state primary
Apr 13 09:13:23 ydq4 garbd[7856]: Current view of cluster as seen by this node
view (view_id(PRIM,66d23117,16)
memb {
    66d23117,0
    a643db62,0
    dc339071,0
    }
joined {
    }
left {
    }
partitioned {
    }
)
Apr 13 09:13:23 ydq4 garbd[7856]: Save the discovered primary-component to disk
Apr 13 09:13:23 ydq4 garbd[7856]: open file(./gvwstate.dat.tmp) failed(Permission denied)
Apr 13 09:13:23 ydq4 garbd[7856]: gcomm: connected
Apr 13 09:13:23 ydq4 garbd[7856]: Shifting CLOSED -> OPEN (TO: 0)
Apr 13 09:13:23 ydq4 garbd[7856]: New COMPONENT: primary = yes, bootstrap = no, my_idx = 2, memb_num = 3
Apr 13 09:13:23 ydq4 garbd[7856]: STATE EXCHANGE: Waiting for state UUID.
Apr 13 09:13:23 ydq4 garbd[7856]: STATE EXCHANGE: sent state msg: dc5dbeeb-3eb7-11e8-8e45-e34b2f81e1c9
Apr 13 09:13:23 ydq4 garbd[7856]: STATE EXCHANGE: got state msg: dc5dbeeb-3eb7-11e8-8e45-e34b2f81e1c9 from 0 (pxc-cluster-node-2)
Apr 13 09:13:23 ydq4 garbd[7856]: STATE EXCHANGE: got state msg: dc5dbeeb-3eb7-11e8-8e45-e34b2f81e1c9 from 1 (pxc-cluster-node-1)
Apr 13 09:13:23 ydq4 garbd[7856]: STATE EXCHANGE: got state msg: dc5dbeeb-3eb7-11e8-8e45-e34b2f81e1c9 from 2 (garb)
Apr 13 09:13:23 ydq4 garbd[7856]: Quorum results:
    version    = 4,
    component  = PRIMARY,
    conf_id    = 4,
    members    = 2/3 (primary/total),
    act_id     = 248529,
    last_appl. = -1,
    protocols  = 0/7/3 (gcs/repl/appl),
    group UUID = cd96b06a-0a1d-11e8-99d2-837e6f3b95a9
Apr 13 09:13:23 ydq4 garbd[7856]: Flow-control interval: [8388607, 8388607]
Apr 13 09:13:23 ydq4 garbd[7856]: Trying to continue unpaused monitor
Apr 13 09:13:23 ydq4 garbd[7856]: Shifting OPEN -> PRIMARY (TO: 248529)
Apr 13 09:13:23 ydq4 garbd[7856]: Sending state transfer request: 'trivial', size: 7
Apr 13 09:13:23 ydq4 garbd[7856]: Member 2.0 (garb) requested state transfer from '*any*'. Selected 0.0 (pxc-cluster-node-2)(SYNCED) as donor.
Apr 13 09:13:23 ydq4 garbd[7856]: Shifting PRIMARY -> JOINER (TO: 248529)
Apr 13 09:13:23 ydq4 garbd[7856]: 0.0 (pxc-cluster-node-2): State transfer to 2.0 (garb) complete.
Apr 13 09:13:23 ydq4 garbd[7856]: 2.0 (garb): State transfer from 0.0 (pxc-cluster-node-2) complete.
Apr 13 09:13:23 ydq4 garbd[7856]: SST leaving flow control
Apr 13 09:13:23 ydq4 garbd[7856]: Shifting JOINER -> JOINED (TO: 248529)
Apr 13 09:13:23 ydq4 garbd[7856]: Member 0.0 (pxc-cluster-node-2) synced with group.
Apr 13 09:13:23 ydq4 garbd[7856]: Member 2.0 (garb) synced with group.
Apr 13 09:13:23 ydq4 garbd[7856]: Shifting JOINED -> SYNCED (TO: 248529)   --> the state shifts here from JOINED to SYNCED
Apr 13 09:13:26 ydq4 garbd[7856]: (dc339071, 'tcp://0.0.0.0:4567') turning message relay requesting off
Note: on the other two nodes, the arbitrator's address should also be added to the wsrep_cluster_address parameter.
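For example, in the [mysqld] section of my.cnf on each data node (the exact file location depends on your installation; the line below only illustrates the syntax with this article's addresses):

wsrep_cluster_address = gcomm://192.168.1.248,192.168.1.249,192.168.1.253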
## Checked on one of the existing cluster nodes: wsrep_cluster_size has now become 3
mysql> show global status like '%wsrep_cluster%';
+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id    | 5                                    |
| wsrep_cluster_size       | 3                                    |
| wsrep_cluster_state_uuid | cd96b06a-0a1d-11e8-99d2-837e6f3b95a9 |
| wsrep_cluster_status     | Primary                              |
+--------------------------+--------------------------------------+
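To confirm that the arbitrator really protects quorum, one of the data nodes can be stopped briefly as a test (a sketch only; the service name may differ depending on how PXC was installed):

## on 192.168.1.249
# service mysql stop

## then on 192.168.1.248
mysql> show global status like 'wsrep_cluster_status';

With garbd in place the surviving node should still report Primary (and wsrep_cluster_size = 2); without the arbitrator, a two-node cluster would drop to non-Primary here.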

-- PXC version
mysql> show variables like 'version%';
+-------------------------+-----------------------------------------------------------------+
| Variable_name           | Value                                                           |
+-------------------------+-----------------------------------------------------------------+
| version                 | 5.7.20-18-57-log                                                |
| version_comment         | Percona XtraDB Cluster (GPL),  WSREP version 29.24, wsrep_29.24 |
| version_compile_machine | x86_64                                                          |
| version_compile_os      | Linux                                                           |
+-------------------------+-----------------------------------------------------------------+