1 Introduction
MHA (Master High Availability) is a relatively mature MySQL high-availability solution. Developed by youshimaton at Japan's DeNA (now at Facebook), it is an excellent piece of software for failover and master promotion in MySQL high-availability environments. During a failover, MHA can complete the switch automatically within 0–30 seconds, and while switching it preserves data consistency to the greatest extent possible, delivering high availability in the true sense.
2 Environment
OS:        CentOS  7.6
Package:   mysql   5.7.28

172.17.0.6   node1     master
172.17.0.7   node2     slave1
172.17.0.8   node3     slave2
172.17.0.9   manager   mha-manager
3 Installing dependencies
1. yum install -y unzip perl openssl-devel.aarch64 openssl.aarch64 libaio.aarch64 numactl-libs.aarch64 net-tools perl-Data-Dumper.aarch64 perl-JSON.noarch initscripts iptables-services selinux-policy.noarch perl-DBD-MySQL.aarch64 openssh-clients openssh-server
2. Disable the firewall and SELinux:
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
4 Master-slave setup
Installing MySQL itself is not covered here.
4.1 Node configuration: edit my.cnf
4.1.1 Enable binary logging and GTID
4.1.1.1 node1
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
server-id=1
log-bin=mysql-bin
binlog_format = row
gtid_mode=ON ## GTID (global transaction ID) replication replaces the binlog-file-and-position way of setting up replication and makes it easier to build
enforce_gtid_consistency=ON ## ensure global GTID consistency
4.1.1.2 node2 (slave1)
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
server-id=2
log-bin=mysql-bin
binlog_format = row
gtid_mode=ON
enforce_gtid_consistency=ON
4.1.1.3 node3 (slave2)
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
server-id=3
log-bin=mysql-bin
binlog_format = row
gtid_mode=ON
enforce_gtid_consistency=ON
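The three fragments above differ only in server-id. As a quick sanity check, they can be rendered from a single shell template (a sketch; paths and values are exactly those shown above):

```shell
# Render the my.cnf fragment shown above for a given server-id.
render_mycnf() {
cat <<EOF
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
server-id=$1
log-bin=mysql-bin
binlog_format = row
gtid_mode=ON
enforce_gtid_consistency=ON
EOF
}

# Example: the node2 fragment.
render_mycnf 2 | grep '^server-id'
```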
4.1.2 Enable semi-synchronous replication
This setup uses semi-synchronous replication. Install both the semisync_master.so and semisync_slave.so plugins on every node, master and slaves alike, so that semi-sync keeps working after a master-slave switchover in high-availability mode.
4.1.2.1 node1
1) Install the plugins
install plugin rpl_semi_sync_master soname 'semisync_master.so';
install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
2) Enable the parameters
Method 1 (lost after a mysql restart):
set global rpl_semi_sync_master_enabled=1;
set global rpl_semi_sync_master_timeout=2000;
set global rpl_semi_sync_slave_enabled=1;
Method 2: set them in my.cnf (persists across restarts):
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=2000
rpl_semi_sync_slave_enabled=1
3) Restart the mysql service and check the parameters:
show global variables like '%rpl_semi%';
4.1.2.2 node2 (slave1)
1) Install the plugins
install plugin rpl_semi_sync_master soname 'semisync_master.so';
install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
2) Enable the parameters
Method 1 (lost after a mysql restart):
set global rpl_semi_sync_master_enabled=1;
set global rpl_semi_sync_master_timeout=2000;
set global rpl_semi_sync_slave_enabled=1;
Method 2: set them in my.cnf (persists across restarts):
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=2000
rpl_semi_sync_slave_enabled=1
3) Restart the mysql service and check the parameters:
show global variables like '%rpl_semi%';
4.1.2.3 node3 (slave2)
1) Install the plugins
install plugin rpl_semi_sync_master soname 'semisync_master.so';
install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
2) Enable the parameters
Method 1 (lost after a mysql restart):
set global rpl_semi_sync_master_enabled=1;
set global rpl_semi_sync_master_timeout=2000;
set global rpl_semi_sync_slave_enabled=1;
Method 2: set them in my.cnf (persists across restarts):
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=2000
rpl_semi_sync_slave_enabled=1
3) Restart the mysql service and check the parameters:
show global variables like '%rpl_semi%';
4.2 Replication configuration
node1 is the master; node2 and node3 are slaves.
Topology: one master, multiple slaves
node1 (master) --- node2 (slave)
node1 (master) --- node3 (slave)
4.2.1 Grants on node1
mysql> alter user root@'localhost' identified by 'Aa!123456';
mysql> grant replication slave on *.* to root@'%' identified by 'Aa!123456';
mysql> grant all privileges on *.* to root@'%' identified by 'Aa!123456';
mysql> flush privileges;
mysql> show master status\G
4.2.2 Grants and replication on node2
mysql> alter user root@'localhost' identified by 'Aa!123456';
mysql> grant all privileges on *.* to root@'%' identified by 'Aa!123456';
mysql> change master to master_host='172.17.0.6',master_user='root',master_password='Aa!123456',master_auto_position=1;
mysql> start slave;
mysql> show slave status\G
4.2.3 Grants and replication on node3
mysql> alter user root@'localhost' identified by 'Aa!123456';
mysql> grant all privileges on *.* to root@'%' identified by 'Aa!123456';
mysql> change master to master_host='172.17.0.6',master_user='root',master_password='Aa!123456',master_auto_position=1;
mysql> start slave;
mysql> show slave status\G
4.2.4 Test replication
4.2.4.1 Create a database
1) node1
2) slave1, slave2
4.2.4.2 Create a table
1) node1
2) slave1, slave2
4.2.4.3 Insert a row
1) node1
2) slave1, slave2
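The steps above amount to writing on node1 and reading the same data back on the slaves. As a stand-in, here is a hypothetical smoke test (the database and table names mha_test/t1 are invented for illustration); the function only prints the SQL so the sketch runs anywhere, and on a live cluster its output is piped to mysql:

```shell
# Print a replication smoke test: create a database and a table, insert one row.
# Database/table names are invented for illustration.
smoke_sql() {
  printf 'CREATE DATABASE IF NOT EXISTS mha_test;\n'
  printf 'CREATE TABLE IF NOT EXISTS mha_test.t1 (id INT PRIMARY KEY);\n'
  printf 'INSERT INTO mha_test.t1 VALUES (1);\n'
}

smoke_sql
# On node1:           smoke_sql | mysql -uroot -p'Aa!123456'
# On slave1/slave2:   mysql -uroot -p'Aa!123456' -e 'SELECT * FROM mha_test.t1;'
```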
5 Installing MHA
5.1 Node installation
5.1.1 Download the packages from GitHub
5.1.2 Install on node1–node3
rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm
5.2 Manager installation
5.2.1 Install dependencies
1) yum install -y perl-Config-Tiny perl-Class-Load.noarch perl-Params-Validate.aarch64 perl-IO-Socket-SSL.noarch perl-Net-SSLeay.aarch64 perl-Sys-Syslog.aarch64
2) Install the dependency RPMs
rpm -ivh perl-MIME-Types-1.38-2.el7.noarch.rpm
rpm -ivh perl-Email-Date-Format-1.002-15.el7.noarch.rpm
rpm -ivh perl-MIME-Lite-3.030-1.el7.noarch.rpm
rpm -ivh perl-Mail-Sender-0.8.23-1.el7.noarch.rpm
rpm -ivh perl-Mail-Sendmail-0.79-21.el7.noarch.rpm
rpm -ivh perl-Parallel-ForkManager-1.18-2.el7.noarch.rpm
rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm
3) Install perl-Log-Dispatch
https://download-ib01.fedoraproject.org/pub/epel/7/aarch64/Packages/p/perl-Log-Dispatch-2.41-1.el7.1.noarch.rpm
yum install perl-Log-Dispatch-2.41-1.el7.1.noarch.rpm -y
5.2.2 Install the manager
1) Download the package
2) Install it
rpm -ivh mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
5.2.3 Configure mha-manager
1. Create the configuration directory
mkdir /etc/mhamanger/ -p
2. Edit the configuration file
vim /etc/mhamanger/app.conf
[server default]
manager_workdir=/etc/mhamanger ## manager working directory
manager_log=/etc/mhamanger/mha.log ## manager log file
master_binlog_dir=/var/lib/mysql ## where the master stores its binlogs
remote_workdir=/tmp ## where the slaves stage binlogs during a switchover
user=root ## management account on the target mysql instances; root is preferred, since all management commands need it
password=Aa!123456 ## password for that account
ping_interval=1 ## interval in seconds between pings sent to monitor the master; after three attempts with no response, failover starts automatically
repl_password=Aa!123456 ## replication user's password
repl_user=root ## user used when running change master on all slaves; it should hold the replication slave privilege on the master
ssh_user=root ## ssh login user for the MHA manager and the MHA mysql nodes
master_ip_failover_script=/usr/local/bin/master_ip_failover ## a VIP is usually assigned to the master for read/write traffic; if the master dies, the HA software moves the VIP to the standby
master_ip_online_change_script=/usr/local/bin/master_ip_failover ## close to the parameter above, but used for a planned online master change rather than a failover
[server1]
hostname=172.17.0.6
port=3306
[server2]
hostname=172.17.0.7
port=3306
#candidate_master=1 ## mark this slave as the candidate master; when a switchover happens it is promoted even if it is not the most up-to-date slave in the cluster
#check_repl_delay=0 ## used together with candidate_master=1; disables the relay-log lag check so the candidate node is chosen regardless of delay
[server3]
hostname=172.17.0.8
port=3306
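A quick structural check of the file above can be scripted; this sketch writes an abridged copy of app.conf to a scratch path and counts its sections (values taken from this guide):

```shell
# Write an abridged copy of app.conf to a scratch file and count its sections.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[server default]
manager_workdir=/etc/mhamanger
user=root
ssh_user=root
[server1]
hostname=172.17.0.6
port=3306
[server2]
hostname=172.17.0.7
port=3306
[server3]
hostname=172.17.0.8
port=3306
EOF

# [server default] plus three node sections -> 4.
count=$(grep -c '^\[server' "$conf")
echo "$count"
rm -f "$conf"
```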
Notes:
Who takes over when the master goes down?
1. If all slaves' logs are identical, the first slave in configuration-file order becomes the new master by default.
2. If the slaves' logs differ, the slave closest to the master's log is chosen automatically.
3. If a node has been given priority (candidate_master=1), that node is chosen first.
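The selection order can be sketched over invented slave data (host, candidate_master flag, relay-log position): a flagged candidate wins, otherwise the slave with the most complete log. This is a simplification; the config-file-order tie-break from rule 1 is not modeled.

```shell
# Hypothetical slave list: host, candidate_master flag, relay-log position.
slaves='node2 0 500
node3 1 300'

# Sort by candidate flag first, then by log position, both descending;
# the first line is the elected master.
pick=$(printf '%s\n' "$slaves" | sort -k2,2nr -k3,3nr | head -n1 | awk '{print $1}')
echo "$pick"
```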
5.2.4 Configure the VIP
On the master node, bring the VIP up (the address and interface alias match the $vip and $key values in the master_ip_failover script):
/sbin/ifconfig eth0:0 172.17.0.100/16
5.2.5 The master_ip_failover script
5.2.5.1 What to change
Only the following four lines need to be modified:
my $vip = '172.17.0.100/16'; # Virtual IP
my $key = "0";
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";
5.2.5.2 Make it executable
chmod +x /usr/local/bin/master_ip_failover
5.2.5.3 Copy the script
cp /usr/local/bin/master_ip_failover /usr/local/bin/master_ip_online_change_script
5.2.5.4 Script contents
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '172.17.0.100/16'; # Virtual IP
my $key = "0";
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";
$ssh_user = "root";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
# $orig_master_host, $orig_master_ip, $orig_master_port are passed.
# If you manage master ip address at global catalog database,
# invalidate orig_master_ip here.
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
# all arguments are passed.
# If you manage master ip address at global catalog database,
# activate new_master_ip here.
# You can also grant write access (create user, set read_only=0, etc) here.
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
`ssh $ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
exit 0;
}
else {
&usage();
exit 1;
}
}
# A simple system call that enables the VIP on the new master
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
5.3 Failover interval
Out of the box, MHA enforces an 8-hour window between automatic failovers: if the previous failover completed within the past 8 hours, the next one will not run automatically.
This can be shortened to 1 minute by editing the script:
vim /usr/share/perl5/vendor_perl/MHA/MasterFailover.pm
Change 480 to 1, then save and quit.
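The same edit can be scripted with sed. The actual Perl line in MasterFailover.pm is not reproduced here, so the sketch below runs against a scratch stand-in file (480 being the number of minutes in 8 hours):

```shell
# Stand-in for the line carrying the 480-minute threshold; the real target is
# /usr/share/perl5/vendor_perl/MHA/MasterFailover.pm.
f=$(mktemp)
echo 'ignore window minutes: 480' > "$f"

# Replace the whole-word 480 with 1.
sed -i 's/\b480\b/1/' "$f"

result=$(cat "$f")
echo "$result"
rm -f "$f"
```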
5.4 SSH mutual trust
5.4.1 Check the ssh service
service sshd status
5.4.2 Set up passwordless login
1) Run ssh-keygen -t rsa on every node to generate a key pair.
2) Run ssh-copy-id root@172.17.0.9 on each node to send its public key to the manager machine.
3) From the manager machine, distribute the collected keys back to the three nodes:
scp /root/.ssh/authorized_keys root@172.17.0.6:/root/.ssh
scp /root/.ssh/authorized_keys root@172.17.0.7:/root/.ssh
scp /root/.ssh/authorized_keys root@172.17.0.8:/root/.ssh
4) Test
masterha_check_ssh --conf=/etc/mhamanger/app.conf
5.5 Check the status
5.5.1 Grants on the master node
mysql> grant replication slave on *.* to root@'%' identified by 'Aa!123456';
mysql> grant super,replication client on *.* to root@'%' identified by 'Aa!123456';
mysql> flush privileges;
5.5.2 Check the nodes
masterha_check_repl --conf=/etc/mhamanger/app.conf
5.6 Start the service
nohup masterha_manager --conf=/etc/mhamanger/app.conf >/etc/mhamanger/mha.log 2>&1 &
masterha_check_status --conf=/etc/mhamanger/app.conf ## check the status
This completes the MySQL one-master-two-slaves + MHA setup.