https://dev.mysql.com/doc/refman/8.0/en/group-replication-getting-started.html
Group Replication (MGR) is provided as a server plugin; every server in the group needs this plugin configured and loaded.
This chapter walks through setting up a three-node Group Replication environment step by step.
18.2.1 Deploying Group Replication in Single-Primary Mode
The instances in a group can be deployed on separate physical machines or all on the same machine.
This section describes how to deploy Group Replication on a single physical machine.
- Figure 18.4 Group Architecture
This tutorial explains how to deploy Group Replication, how to configure each instance before building the group, and how to use the Performance Schema to confirm that the group is running correctly.
18.2.1.1 Deploying Instances for Group Replication
Step one: deploy three MySQL instances.
Because the following steps create several instances on the same physical machine, each MySQL instance must be given its own data directory.
mkdir data
mysql-8.0/bin/mysqld --initialize-insecure --basedir=$PWD/mysql-8.0 --datadir=$PWD/data/s1
mysql-8.0/bin/mysqld --initialize-insecure --basedir=$PWD/mysql-8.0 --datadir=$PWD/data/s2
mysql-8.0/bin/mysqld --initialize-insecure --basedir=$PWD/mysql-8.0 --datadir=$PWD/data/s3
data/s1, data/s2, and data/s3 now each contain an initialized data directory, including the mysql system schema and related tables.
Warning:
Do not use --initialize-insecure in production environments; it is used here only to simplify the tutorial. For details, see
Section 18.5, “Group Replication Security”.
18.2.1.2 Configuring an Instance for Group Replication
- Group Replication Server Settings
To install and use the Group Replication plugin, the MySQL server must be configured correctly.
The following configuration is for the first instance in the group, referred to as s1:
[mysqld]
# server configuration
datadir=<full_path_to_data>/data/s1
basedir=<full_path_to_bin>/mysql-8.0/
port=24801
socket=<full_path_to_sock_dir>/s1.sock
Because all three instances run on the same machine in this tutorial, configure report_host=127.0.0.1 on each of them so the members can reach one another at that address.
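For example, add the following line to the [mysqld] section of each instance:
report_host=127.0.0.1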
- Replication Framework
The following settings are required for replication in general:
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
If you are running a version earlier than 8.0.3 (from 8.0.3 on, the default values of these options already satisfy the replication requirements), you also need to configure the following options:
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE
- Group Replication Settings
The following settings configure Group Replication itself:
transaction_write_set_extraction=XXHASH64
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address= "127.0.0.1:24901"
group_replication_group_seeds= "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"
group_replication_bootstrap_group=off
a) transaction_write_set_extraction=XXHASH64: instructs the server to collect write sets for each transaction and encode them using the XXHASH64 hashing algorithm.
b) group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa": tells the plugin that the group it is joining, or creating, is named aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa.
The value of group_replication_group_name must be a valid UUID; you can generate one with SELECT UUID() (see the example after this list).
c) group_replication_start_on_boot=off: instructs the plugin not to start Group Replication automatically when the server starts.
d) group_replication_local_address="127.0.0.1:24901": tells the plugin to use 127.0.0.1:24901 for internal communication with the other members of the group; this address is not used for client SQL connections.
The recommended port for group communication is 33061; this tutorial uses 24901, 24902, and 24903 because all three instances run on the same machine.
e) group_replication_group_seeds="127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903": lists the host:port pairs that new members use to establish their connection to the group; these are called the seed members.
The current members and their addresses can be checked in performance_schema.replication_group_members.
The server that bootstraps the group does not use this option when the group is started, because it is the initial server and is responsible for bootstrapping the group itself.
In other words, whatever data exists on the bootstrapping server is used as the baseline data for the next joining member.
The second server asks the one and only existing member to let it join; any data it is missing is replicated from the bootstrapping (donor) member, and then it joins the group.
The third server can ask either of the two existing members, synchronize its data, and then join.
All subsequent servers repeat the same procedure when joining.
A joining member must be able to communicate with at least one of the seed members listed in group_replication_group_seeds.
f) group_replication_bootstrap_group=off: instructs the plugin whether or not to bootstrap the group.
Important: this option must only be used on one server and only once; otherwise you risk a split brain, i.e. two separate groups with the same name. After the first server has been bootstrapped and is ONLINE, set the variable back from ON to OFF.
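As mentioned in item b), a quick way to generate a valid UUID for group_replication_group_name is to run the following on any MySQL server:
mysql> SELECT UUID();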
Configuring the other server instances of the group is very similar; only the instance-specific options (server_id, datadir, port, socket, group_replication_local_address) need to be changed.
18.2.1.3 User Credentials
Group Replication uses a replication channel named group_replication_recovery for distributed recovery, i.e. to transfer the transactions a joining member is missing from an existing member.
This section therefore configures the user credentials for the group_replication_recovery channel.
Start the server using the configuration file created above:
mysql-8.0/bin/mysqld --defaults-file=data/s1/s1.cnf
Then create a MySQL user with the REPLICATION SLAVE privilege.
To prevent these statements from being written to the binary log (and thus replicated to the other servers, where the user is created manually anyway), disable binary logging for the session first:
mysql> SET SQL_LOG_BIN=0;
Create the replication user:
mysql> CREATE USER rpl_user@'%' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
mysql> FLUSH PRIVILEGES;
If you disabled binary logging above, re-enable it now:
mysql> SET SQL_LOG_BIN=1;
Use CHANGE MASTER TO to configure the credentials for the group_replication_recovery channel:
mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='password'
       FOR CHANNEL 'group_replication_recovery';
Distributed recovery is the first step a server performs when it joins a group: it fetches the transactions it is missing from a donor member.
If the group_replication_recovery channel is not configured correctly, the server cannot fetch the missing transactions from the donor member, and joining the group fails.
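To double-check which credentials the recovery channel will use, you can query the Performance Schema; a minimal sketch, assuming the replication_connection_configuration table (with its CHANNEL_NAME, HOST, PORT, and USER columns) is available in your 8.0 build:
mysql> SELECT CHANNEL_NAME, HOST, PORT, USER
       FROM performance_schema.replication_connection_configuration
       WHERE CHANNEL_NAME = 'group_replication_recovery';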
18.2.1.4 Launching Group Replication
Once s1 is configured correctly and running, connect to it and install the Group Replication plugin:
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Important: the mysql.session user (added in MySQL 8.0.2) must exist before you can load Group Replication. If your data dictionary was created by an earlier version, run mysql_upgrade first; otherwise installing the plugin fails with an error such as:
There was an error when trying to access the server with user: mysql.session@localhost. Make sure the user is present in the server and that mysql_upgrade was ran after a server update..
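A quick sanity check that the user exists (not part of the original tutorial):
mysql> SELECT User, Host FROM mysql.user WHERE User = 'mysql.session';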
You can check that the plugin was installed correctly with SHOW PLUGINS; group_replication should be listed as ACTIVE:
mysql> SHOW PLUGINS;
+----------------------------+----------+--------------------+----------------------+-------------+
| Name | Status | Type | Library | License |
+----------------------------+----------+--------------------+----------------------+-------------+
| binlog | ACTIVE | STORAGE ENGINE | NULL | PROPRIETARY |
(...)
| group_replication | ACTIVE | GROUP REPLICATION | group_replication.so | PROPRIETARY |
+----------------------------+----------+--------------------+----------------------+-------------+
To start the group, instruct s1 to bootstrap it and then start Group Replication.
Bootstrapping should only be performed on a single server and only once; that is why group_replication_bootstrap_group is set to off in the configuration file.
Here it is only enabled dynamically, immediately before starting Group Replication, and switched off again afterwards:
SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;
Once START GROUP_REPLICATION returns successfully, the group is up. You can check the membership like this:
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| group_replication_applier | ce9be252-2b71-11e6-b8f4-00212844f856 | myhost | 24801 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
To demonstrate that the group really is working, create a table and insert some data:
mysql> CREATE DATABASE test;
mysql> USE test;
mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL);
mysql> INSERT INTO t1 VALUES (1, 'Luis');
mysql> SELECT * FROM t1;
+----+------+
| c1 | c2 |
+----+------+
| 1 | Luis |
+----+------+
mysql> SHOW BINLOG EVENTS;
+---------------+-----+----------------+-----------+-------------+--------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+---------------+-----+----------------+-----------+-------------+--------------------------------------------------------------------+
| binlog.000001 | 4 | Format_desc | 1 | 123 | Server ver: 8.0.2-gr080-log, Binlog ver: 4 |
| binlog.000001 | 123 | Previous_gtids | 1 | 150 | |
| binlog.000001 | 150 | Gtid | 1 | 211 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1' |
| binlog.000001 | 211 | Query | 1 | 270 | BEGIN |
| binlog.000001 | 270 | View_change | 1 | 369 | view_id=14724817264259180:1 |
| binlog.000001 | 369 | Query | 1 | 434 | COMMIT |
| binlog.000001 | 434 | Gtid | 1 | 495 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:2' |
| binlog.000001 | 495 | Query | 1 | 585 | CREATE DATABASE test |
| binlog.000001 | 585 | Gtid | 1 | 646 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:3' |
| binlog.000001 | 646 | Query | 1 | 770 | use `test`; CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL) |
| binlog.000001 | 770 | Gtid | 1 | 831 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:4' |
| binlog.000001 | 831 | Query | 1 | 899 | BEGIN |
| binlog.000001 | 899 | Table_map | 1 | 942 | table_id: 108 (test.t1) |
| binlog.000001 | 942 | Write_rows | 1 | 984 | table_id: 108 flags: STMT_END_F |
| binlog.000001 | 984 | Xid | 1 | 1011 | COMMIT /* xid=38 */ |
+---------------+-----+----------------+-----------+-------------+--------------------------------------------------------------------+
18.2.1.5 Adding Instances to the Group
At this point the group has one member, s1, and it already contains some data.
It is now time to expand the group with the other servers configured earlier.
- 18.2.1.5.1 Adding a Second Instance
To add a second instance, s2, to the group, first create its configuration file.
It is similar to the one used for s1, except for the data directory, port, socket, and server_id:
[mysqld]
# server configuration
datadir=<full_path_to_data>/data/s2
basedir=<full_path_to_bin>/mysql-8.0/
port=24802
socket=<full_path_to_sock_dir>/s2.sock
#
# Replication configuration parameters
#
server_id=2
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
#
# Group Replication configuration
#
transaction_write_set_extraction=XXHASH64
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address= "127.0.0.1:24902"
group_replication_group_seeds= "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"
group_replication_bootstrap_group= off
Start the server:
mysql-8.0/bin/mysqld --defaults-file=data/s2/s2.cnf
Configure the recovery credentials for the group_replication_recovery channel:
SET SQL_LOG_BIN=0;
CREATE USER rpl_user@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
SET SQL_LOG_BIN=1;
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='password'
  FOR CHANNEL 'group_replication_recovery';
Install the Group Replication plugin:
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Add s2 to the group. Unlike on s1, there is no need to set group_replication_bootstrap_group=ON here, because the group has already been bootstrapped by s1:
mysql> START GROUP_REPLICATION;
Check that s2 has joined the group:
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| group_replication_applier | 395409e1-6dfa-11e6-970b-00212844f856 | myhost | 24801 | ONLINE |
| group_replication_applier | ac39f1e6-6dfa-11e6-a69d-00212844f856 | myhost | 24802 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
If s2 is shown as ONLINE, it has automatically synchronized itself with s1's data. Confirm it as follows:
mysql> SHOW DATABASES LIKE 'test';
+-----------------+
| Database (test) |
+-----------------+
| test |
+-----------------+
mysql> SELECT * FROM test.t1;
+----+------+
| c1 | c2 |
+----+------+
| 1 | Luis |
+----+------+
mysql> SHOW BINLOG EVENTS;
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| binlog.000001 | 4 | Format_desc | 2 | 123 | Server ver: 8.0.3-log, Binlog ver: 4 |
| binlog.000001 | 123 | Previous_gtids | 2 | 150 | |
| binlog.000001 | 150 | Gtid | 1 | 211 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1' |
| binlog.000001 | 211 | Query | 1 | 270 | BEGIN |
| binlog.000001 | 270 | View_change | 1 | 369 | view_id=14724832985483517:1 |
| binlog.000001 | 369 | Query | 1 | 434 | COMMIT |
| binlog.000001 | 434 | Gtid | 1 | 495 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:2' |
| binlog.000001 | 495 | Query | 1 | 585 | CREATE DATABASE test |
| binlog.000001 | 585 | Gtid | 1 | 646 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:3' |
| binlog.000001 | 646 | Query | 1 | 770 | use `test`; CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL) |
| binlog.000001 | 770 | Gtid | 1 | 831 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:4' |
| binlog.000001 | 831 | Query | 1 | 890 | BEGIN |
| binlog.000001 | 890 | Table_map | 1 | 933 | table_id: 108 (test.t1) |
| binlog.000001 | 933 | Write_rows | 1 | 975 | table_id: 108 flags: STMT_END_F |
| binlog.000001 | 975 | Xid | 1 | 1002 | COMMIT /* xid=30 */ |
| binlog.000001 | 1002 | Gtid | 1 | 1063 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:5' |
| binlog.000001 | 1063 | Query | 1 | 1122 | BEGIN |
| binlog.000001 | 1122 | View_change | 1 | 1261 | view_id=14724832985483517:2 |
| binlog.000001 | 1261 | Query | 1 | 1326 | COMMIT |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
If s2's data matches s1's, then s2 has joined the group successfully.
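Another way to confirm that the two members have converged (a minimal extra check, not part of the original tutorial) is to compare the executed GTID set on s1 and s2; both should report the same transactions under the group's UUID:
mysql> SELECT @@GLOBAL.gtid_executed;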
- 18.2.1.5.2 Adding Additional Instances
Adding a third server (or any further servers) to the group follows exactly the same steps used for s2, apart from the instance-specific settings.
The steps are summarized below.
1) Create the configuration file
[mysqld]
# server configuration
datadir=<full_path_to_data>/data/s3
basedir=<full_path_to_bin>/mysql-8.0/
port=24803
socket=<full_path_to_sock_dir>/s3.sock
#
# Replication configuration parameters
#
server_id=3
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
#
# Group Replication configuration
#
transaction_write_set_extraction=XXHASH64
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address= "127.0.0.1:24903"
group_replication_group_seeds= "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"
group_replication_bootstrap_group= off
2) Start the server
mysql-8.0/bin/mysqld --defaults-file=data/s3/s3.cnf
3) Configure the recovery credentials for the group_replication_recovery channel.
SET SQL_LOG_BIN=0;
CREATE USER rpl_user@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='password'
  FOR CHANNEL 'group_replication_recovery';
4) Install the Group Replication plugin and start it.
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
START GROUP_REPLICATION;
5) Check that the new member is ONLINE:
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| group_replication_applier | 395409e1-6dfa-11e6-970b-00212844f856 | myhost | 24801 | ONLINE |
| group_replication_applier | 7eb217ff-6df3-11e6-966c-00212844f856 | myhost | 24803 | ONLINE |
| group_replication_applier | ac39f1e6-6dfa-11e6-a69d-00212844f856 | myhost | 24802 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
6) Confirm that the data has been replicated:
mysql> SHOW DATABASES LIKE 'test';
+-----------------+
| Database (test) |
+-----------------+
| test |
+-----------------+
mysql> SELECT * FROM test.t1;
+----+------+
| c1 | c2 |
+----+------+
| 1 | Luis |
+----+------+
mysql> SHOW BINLOG EVENTS;
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| binlog.000001 | 4 | Format_desc | 3 | 123 | Server ver: 8.0.3-log, Binlog ver: 4 |
| binlog.000001 | 123 | Previous_gtids | 3 | 150 | |
| binlog.000001 | 150 | Gtid | 1 | 211 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1' |
| binlog.000001 | 211 | Query | 1 | 270 | BEGIN |
| binlog.000001 | 270 | View_change | 1 | 369 | view_id=14724832985483517:1 |
| binlog.000001 | 369 | Query | 1 | 434 | COMMIT |
| binlog.000001 | 434 | Gtid | 1 | 495 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:2' |
| binlog.000001 | 495 | Query | 1 | 585 | CREATE DATABASE test |
| binlog.000001 | 585 | Gtid | 1 | 646 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:3' |
| binlog.000001 | 646 | Query | 1 | 770 | use `test`; CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL) |
| binlog.000001 | 770 | Gtid | 1 | 831 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:4' |
| binlog.000001 | 831 | Query | 1 | 890 | BEGIN |
| binlog.000001 | 890 | Table_map | 1 | 933 | table_id: 108 (test.t1) |
| binlog.000001 | 933 | Write_rows | 1 | 975 | table_id: 108 flags: STMT_END_F |
| binlog.000001 | 975 | Xid | 1 | 1002 | COMMIT /* xid=29 */ |
| binlog.000001 | 1002 | Gtid | 1 | 1063 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:5' |
| binlog.000001 | 1063 | Query | 1 | 1122 | BEGIN |
| binlog.000001 | 1122 | View_change | 1 | 1261 | view_id=14724832985483517:2 |
| binlog.000001 | 1261 | Query | 1 | 1326 | COMMIT |
| binlog.000001 | 1326 | Gtid | 1 | 1387 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:6' |
| binlog.000001 | 1387 | Query | 1 | 1446 | BEGIN |
| binlog.000001 | 1446 | View_change | 1 | 1585 | view_id=14724832985483517:3 |
| binlog.000001 | 1585 | Query | 1 | 1650 | COMMIT |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
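Since this group runs in single-primary mode, it can also be useful to check which member is currently the primary. A minimal sketch, assuming MySQL 8.0.2 or later, where replication_group_members includes the MEMBER_ROLE column:
mysql> SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_ROLE
       FROM performance_schema.replication_group_members;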