Restarting MGR

Last time, after the MGR setup was working, I did a simple single-node restart, and MGR automatically re-formed the group and reconfigured its membership.

This time, after restarting all the MGR nodes, I found every node off on its own! MGR had not started automatically.

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+-----------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+-----------+-------------+-------------+--------------+
| group_replication_applier |           |             |        NULL | OFFLINE      |
+---------------------------+-----------+-------------+-------------+--------------+
1 row in set (0.00 sec)

mysql> START GROUP_REPLICATION;

ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

mysql> SET GLOBAL group_replication_bootstrap_group=ON;

Query OK, 0 rows affected (0.00 sec)

mysql> START GROUP_REPLICATION;

ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

2019-06-23T11:21:45.187551Z 3 [ERROR] Plugin group_replication reported: 'binlog_checksum should be NONE for Group Replication'

mysql> SET GLOBAL binlog_checksum=NONE;

Query OK, 0 rows affected (0.42 sec)

mysql> START GROUP_REPLICATION;

Query OK, 0 rows affected (2.75 sec)

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| group_replication_applier | ad216d41-5c3a-11e8-8720-08002791d97c | 192.168.2.21 |        3306 | ERROR        |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
1 row in set (0.00 sec)

It turned out that the two MGR slaves were MySQL instances cloned from an existing master-slave replication setup.

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.2.11
                  Master_User: rpl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File:
          Read_Master_Log_Pos: 4
               Relay_Log_File: MYSQL-SLAVE1-relay-bin.000001

How come it still remembers its old flame?

STOP SLAVE;

RESET SLAVE ALL;

Stop replication and clear out all of the slave's metadata.

Then reconfigure the replication channel used for group replication recovery:

CHANGE MASTER TO MASTER_USER='repl',MASTER_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';
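Putting the cleanup and reconfiguration together, the full sequence on each affected slave is a short sketch like the one below (the repl/repl credentials are the ones used above; substitute your own recovery user):

```sql
-- Stop the old asynchronous replication and wipe its metadata
STOP SLAVE;
RESET SLAVE ALL;

-- Set credentials for MGR's distributed recovery channel
CHANGE MASTER TO
    MASTER_USER='repl',
    MASTER_PASSWORD='repl'
    FOR CHANNEL 'group_replication_recovery';

-- Join the existing group (do NOT bootstrap on a joining node)
START GROUP_REPLICATION;
```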

Add these three parameters to my.cnf:

service mysqld stop

vim /etc/my.cnf

group_replication_allow_local_disjoint_gtids_join=ON

loose-group_replication_bootstrap_group=ON

binlog_checksum=none

service mysqld start

With this, MGR starts automatically, discovers the existing members, and joins the group.
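For reference, a minimal sketch of what was added under `[mysqld]`, with comments on what each parameter does (note that `allow_local_disjoint_gtids_join` was removed in MySQL 8.0, and `binlog_checksum=NONE` is only required before 8.0.21):

```ini
[mysqld]
# Allow a member with extra local GTIDs to join (deprecated in 5.7, removed in 8.0)
loose-group_replication_allow_local_disjoint_gtids_join = ON

# Bootstrap a NEW group at startup -- should be ON on at most one node,
# and only for the very first start
loose-group_replication_bootstrap_group = ON

# Group Replication requires binlog_checksum = NONE (before MySQL 8.0.21)
binlog_checksum = NONE
```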

I then changed node 1, i.e. node 21, the first node I had configured, the same way:

loose-group_replication_bootstrap_group=ON

And the result? It flatly refused to join the group formed by 22 and 23.

Node 21 had formed a group all by itself!

But I configured multi-primary mode? Could it really be single-primary?! Impossible.

loose-group_replication_bootstrap_group=ON

means this node is responsible for bootstrapping (creating) the group, i.e. it acts as the seed node of MGR, whereas

loose-group_replication_start_on_boot=ON

means the MGR plugin starts automatically when the server starts.

After node 21, the bootstrapping node, was shut down, the primary role moved to another node. Then, when node 21 was restarted with bootstrap still ON, it bootstrapped a group of its own, and suddenly there were two groups.

The only fix is to manually run SET GLOBAL group_replication_bootstrap_group=OFF (note: the loose- prefix is only for my.cnf, not for SET GLOBAL).
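A safer pattern, sketched below, is to keep bootstrap_group OFF in my.cnf on every node and bootstrap exactly once, by hand, on a single node:

```sql
-- On the ONE node chosen to seed the group, and only for the first start:
SET GLOBAL group_replication_bootstrap_group = ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group = OFF;

-- On every other node, simply join the existing group:
-- START GROUP_REPLICATION;
```

This way a restarted node can never accidentally create a second group.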

The great restart method remains the one true test of whether every bit of your configuration is actually correct.
