Architecture Design and Required Configuration
Host Environment
IP               Hostname        Role
192.168.192.128  node_master     MySQL-Master | MHA-Node
192.168.192.129  node_slave      MySQL-Slave  | MHA-Node (candidate master)
192.168.192.130  manager_slave   MySQL-Slave  | MHA-Manager
.....................................................................................................................................................
To save machines, the read-only slave 192.168.192.129 (this slave does not serve reads to applications) is chosen as the candidate master, or alternatively dedicated to backups.
Likewise, to save machines, the slave 192.168.192.130 doubles as the manager server (in real production, with enough machines, a dedicated host is normally used as the manager server).
Tips:
Quickly change the hostnames (my hostnames needed changing)
[root@master1 ~]# hostnamectl set-hostname node_master
[root@node1 ~]# hostnamectl set-hostname node_slave
[root@node2 ~]# hostnamectl set-hostname manager_slave
.....................................................................................................................................................
Required Configuration
1. Add entries to each machine's hosts file so hosts can be reached by hostname (do this on all 3 machines; strictly speaking this step is optional)
[root@node_master ~]# vim /etc/hosts
..............
192.168.192.128 node_master
192.168.192.129 node_slave
192.168.192.130 manager_slave
You can also edit the hosts file on one machine and copy it to the others with scp (provided the other machines have no other hosts entries)
[root@node_master ~]# scp /etc/hosts root@192.168.192.129:/etc/hosts
[root@node_master ~]# scp /etc/hosts root@192.168.192.130:/etc/hosts
2. Set up passwordless SSH between the servers (do this on all 3 machines; this step is mandatory and critical)
[root@node_master ~]# ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
[root@node_master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub "root@192.168.192.128"
[root@node_master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub "root@192.168.192.129"
[root@node_master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub "root@192.168.192.130"
Test passwordless login (normally you get a shell without being asked for a password)
[root@node_master ~]# ssh 192.168.192.128
Last login: Tue Dec 18 15:56:41 2018 from 192.168.192.1
[root@node_master ~]# [root@node_master ~]# ssh 192.168.192.129
Last login: Tue Dec 18 16:03:56 2018 from 192.168.192.128
[root@node_slave ~]# [root@node_master ~]# ssh 192.168.192.130
Last login: Tue Dec 18 16:04:02 2018 from 192.168.192.128
[root@manager_slave ~]#
To be absolutely safe, also run the passwordless-login test from the other two machines.
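The three-way login test above can be scripted instead of typed by hand. A minimal sketch, assuming bash and the host list from this article; the BatchMode and ConnectTimeout flags are standard OpenSSH options, not something from the original text:

```shell
#!/bin/bash
# Hosts from this article; adjust for your environment.
hosts="192.168.192.128 192.168.192.129 192.168.192.130"

# Build the ssh command for one host. BatchMode=yes makes ssh fail
# instead of hanging on a password prompt, so a broken key setup is
# reported rather than stalling the loop.
build_ssh_cmd() {
    echo "ssh -o BatchMode=yes -o ConnectTimeout=5 root@$1 hostname"
}

for h in $hosts; do
    echo "checking: $(build_ssh_cmd "$h")"
    # On the real machines, actually run it:
    # $(build_ssh_cmd "$h") || echo "passwordless SSH to $h FAILED"
done
```

Run it once from each of the three machines; every host should print the remote hostname without prompting.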
Install MySQL and Set Up Replication
Replication design and installation (MySQL 5.7)
Topology (one master, two slaves):
192.168.192.128 MySQL-Master (master)
192.168.192.129 MySQL-Slave  (slave)
192.168.192.130 MySQL-Slave  (slave)
Installation guide: https://www.cnblogs.com/brianzhu/p/8575243.html (installed via yum)
Configure Replication
1. my.cnf settings for master and slaves
Modify the master and slave configuration files (for a yum install the file is /etc/my.cnf) as follows
------Master (192.168.192.128) configuration------
server-id=1                  # unique server ID; master and slaves must never share an ID
log-bin=mysql-bin            # enable the binlog and set its directory and file-name prefix
binlog-ignore-db=mysql       # do not replicate the mysql system database; to skip several databases, repeat the line or list them comma-separated on one line
sync_binlog=1                # sync the binlog to disk after each write
binlog_checksum=crc32        # binlog_checksum defaults to crc32 since MySQL 5.6.5; earlier versions use none
binlog_format=mixed          # binlog format; MIXED helps avoid duplicate-key problems
validate_password_policy=0   # set the password policy
validate_password=off        # disable password validation
Save the file and restart MySQL
[root@node_master ~]# systemctl restart mysqld
------Slave 1 (192.168.192.129) configuration------
server-id=2                  # unique server ID; master and slaves must never share an ID
log-bin=mysql-bin            # enable the binlog and set its directory and file-name prefix
binlog-ignore-db=mysql       # do not replicate the mysql system database (important: replication filter settings must be identical across master and slaves, otherwise masterha_check_repl will fail later!)
slave-skip-errors=all        # skip all replication errors and keep replicating
validate_password_policy=0   # set the password policy
validate_password=off        # disable password validation
Save the file and restart MySQL
[root@node_slave ~]# systemctl restart mysqld
------Slave 2 (192.168.192.130) configuration------
server-id=3                  # unique server ID; master and slaves must never share an ID
log-bin=mysql-bin            # enable the binlog and set its directory and file-name prefix
binlog-ignore-db=mysql       # do not replicate the mysql system database (important: replication filter settings must be identical across master and slaves, otherwise masterha_check_repl will fail later!)
slave-skip-errors=all        # skip all replication errors and keep replicating
validate_password_policy=0   # set the password policy
validate_password=off        # disable password validation
Save the file and restart MySQL
[root@manager_slave ~]# systemctl restart mysqld
Note:
If binlog-ignore-db / replicate-ignore-db filter rules are set, master and slaves must use the same one. That is, if you filter with binlog-ignore-db, use it in both the master and slave configs;
if you filter with replicate-ignore-db, use that in both. Never mix different filter options between master and slaves: MHA checks the filter rules at startup and refuses to start monitoring and failover if they differ.
2. Create the MHA management account (run on all three nodes)
mysql> grant super,reload,replication client,select on *.* to manager@'192.168.192.%' identified by 'Manager_1234';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> grant create,insert,update,delete,drop on *.* to manager@'192.168.192.%';
Query OK, 0 rows affected (0.00 sec)
3. Create the replication account (run on all three nodes)
mysql> grant reload,super,replication slave on *.* to 'slave'@'192.168.192.%' identified by 'Slave_1234';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
4. Configure replication
On the master server (192.168.192.128), run
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 | 1169 | | mysql | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
On the two slave servers (192.168.192.129 and 192.168.192.130), run the following
Stop the slave threads first
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)
Point the slave at the master
mysql> change master to master_host='192.168.192.128',master_port=3306,master_user='slave',master_password='Slave_1234',master_log_file='mysql-bin.000001',master_log_pos=1169;
Query OK, 0 rows affected, 2 warnings (0.01 sec)
Start replication
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
Check replication status (Slave_IO_Running and Slave_SQL_Running both Yes means replication is configured successfully)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.192.128
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 1169
Relay_Log_File: node_slave-relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replication internals are covered here: https://www.cnblogs.com/brianzhu/p/10153802.html
A step-by-step replication deployment is covered here: https://www.cnblogs.com/brianzhu/p/10154446.html
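The Yes/Yes check on Slave_IO_Running and Slave_SQL_Running can also be automated from the shell, which is handy for cron-based monitoring. A minimal sketch; the mysql client invocation in the comment is an assumption about your credentials:

```shell
#!/bin/bash
# Return 0 (healthy) only when both replication threads report Yes.
slave_ok() {
    echo "$1" | awk '/Slave_IO_Running:/  {io=$2}
                     /Slave_SQL_Running:/ {sql=$2}
                     END {exit !(io=="Yes" && sql=="Yes")}'
}

# On a real slave you would feed it live output, e.g.:
#   slave_ok "$(mysql -uroot -pXXXX -e 'show slave status\G')"
sample=$'Slave_IO_Running: Yes\nSlave_SQL_Running: Yes'
if slave_ok "$sample"; then
    echo "replication OK"
else
    echo "replication BROKEN"
fi
```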
Install and Configure MHA
MHA downloads
----------------------------------------------MHA downloads----------------------------------------------
MHA consists of a manager node and data nodes, where:
data nodes are the hosts in the existing MySQL replication topology, at least 3 (1 master, 2 slaves) so that the master/slave structure survives a master failover; they only need the node package.
manager server: runs the monitoring scripts and handles monitoring and auto-failover; it needs both the node package and the manager package.
Downloads:
MHA Node:    https://coding.net/u/brian_zhu/p/mha4/git/raw/master/mha4mysql-node-0.58.tar.gz
MHA Manager: https://coding.net/u/brian_zhu/p/mha4/git/raw/master/mha4mysql-manager-0.58.tar.gz
Install MHA
1. Install MHA Node
----------------------------------------------Install MHA Node----------------------------------------------
Install MHA Node on all data nodes (all three machines need MHA Node)
First install the required perl modules
[root@node_master ~]# yum -y install perl perl-DBD-MySQL perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN
Download and unpack
[root@node_master ~]# wget https://coding.net/u/brian_zhu/p/mha4/git/raw/master/mha4mysql-node-0.58.tar.gz
[root@node_master ~]# tar zxf mha4mysql-node-0.58.tar.gz
[root@node_master ~]# cd mha4mysql-node-0.58
Build and install
[root@node_master mha4mysql-node-0.58]# perl Makefile.PL
[root@node_master mha4mysql-node-0.58]# make && make install
2. Install MHA Manager
----------------------------------------------Install MHA Manager----------------------------------------------
Install MHA Manager on the manager node (192.168.192.130); note that the manager node needs MHA Node installed as well
Install the epel-release repository
[root@manager_slave ~]# yum -y install epel-release
Install the perl MySQL packages
[root@manager_slave ~]# yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles perl-Time-HiRes
Download and unpack
[root@manager_slave ~]# wget https://coding.net/u/brian_zhu/p/mha4/git/raw/master/mha4mysql-manager-0.58.tar.gz
[root@manager_slave ~]# tar zxf mha4mysql-manager-0.58.tar.gz
Build and install
[root@manager_slave ~]# cd mha4mysql-manager-0.58/
[root@manager_slave mha4mysql-manager-0.58]# perl Makefile.PL
[root@manager_slave mha4mysql-manager-0.58]# make && make install
After installation, the following scripts appear in /usr/local/bin
[root@manager_slave mha4mysql-manager-0.58]# ll /usr/local/bin/
total 39060
-r-xr-xr-x 1 root root 17639 Dec 18 16:56 apply_diff_relay_logs
-rwxr-xr-x 1 root root 11739376 Oct 18 09:41 docker-compose
-rwxr-xr-x 1 root root 28160480 Oct 23 16:42 docker-machine
-r-xr-xr-x 1 root root 4807 Dec 18 16:56 filter_mysqlbinlog
-r-xr-xr-x 1 root root 1995 Dec 18 17:00 masterha_check_repl
-r-xr-xr-x 1 root root 1779 Dec 18 17:00 masterha_check_ssh
-r-xr-xr-x 1 root root 1865 Dec 18 17:00 masterha_check_status
-r-xr-xr-x 1 root root 3201 Dec 18 17:00 masterha_conf_host
-r-xr-xr-x 1 root root 2517 Dec 18 17:00 masterha_manager
-r-xr-xr-x 1 root root 2165 Dec 18 17:00 masterha_master_monitor
-r-xr-xr-x 1 root root 2373 Dec 18 17:00 masterha_master_switch
-r-xr-xr-x 1 root root 5172 Dec 18 17:00 masterha_secondary_check
-r-xr-xr-x 1 root root 1739 Dec 18 17:00 masterha_stop
-r-xr-xr-x 1 root root 8337 Dec 18 16:56 purge_relay_logs
-r-xr-xr-x 1 root root 7525 Dec 18 16:56 save_binary_logs
Where:
masterha_check_repl        checks MySQL replication status
masterha_check_ssh         checks MHA's SSH configuration
masterha_check_status      checks the current MHA run status
masterha_conf_host         adds or removes configured server entries
masterha_manager           starts MHA
masterha_stop              stops MHA
masterha_master_monitor    detects whether the master is down
masterha_master_switch     controls failover (automatic or manual)
masterha_secondary_check   checks master liveness over multiple routes
Also:
The following scripts live under ../mha4mysql-manager-0.58/samples/scripts/ and need to be copied to /usr/local/bin
[root@manager_slave mha4mysql-manager-0.58]# ll ../mha4mysql-manager-0.58/samples/scripts/
total 32
-rwxr-xr-x 1 1000 1000  3648 Mar 23  2018 master_ip_failover       # manages the VIP during automatic failover; not mandatory. With keepalived you can write your own script, e.g. monitor mysql and stop keepalived when mysql fails, so the vip floats away by itself
-rwxr-xr-x 1 1000 1000  9870 Mar 23  2018 master_ip_online_change  # VIP script for online switchover; not mandatory, a simple self-written shell script also works
-rwxr-xr-x 1 1000 1000 11867 Mar 23  2018 power_manager            # shuts down the failed master after a failure; not mandatory
-rwxr-xr-x 1 1000 1000  1360 Mar 23  2018 send_report              # sends an alert after a failover; not mandatory, a simple self-written shell script also works
[root@manager_slave mha4mysql-manager-0.58]# cp ../mha4mysql-manager-0.58/samples/scripts/* /usr/local/bin/
Configure MHA
1. MHA Manager configuration (the MHA config file)
----------------------------------------------MHA Manager configuration----------------------------------------------
Do the following on the manager node (192.168.192.130)
[root@manager_slave mha4mysql-manager-0.58]# mkdir -p /etc/masterha
[root@manager_slave mha4mysql-manager-0.58]# cp samples/conf/app1.cnf /etc/masterha/
[root@manager_slave mha4mysql-manager-0.58]# vim /etc/masterha/app1.cnf
[server default]
manager_workdir=/var/log/masterha/app1          # manager working directory
manager_log=/var/log/masterha/app1/manager.log  # manager log file
ssh_user=root               # account used for passwordless SSH
user=manager                # MHA management user
password=Manager_1234       # password of the MHA management user
repl_user=slave             # mysql replication account, used to sync binary logs between master and slaves
repl_password=Slave_1234    # password of the replication account created earlier
ping_interval=1             # interval in seconds between pings checking whether the master is alive (default 3); after three failed attempts MHA fails over automatically
master_ip_failover_script= /usr/local/bin/master_ip_failover            # script run on automatic failover
master_ip_online_change_script= /usr/local/bin/master_ip_online_change  # script run on manual switchover

[server1]
hostname=192.168.192.128
port=3306
master_binlog_dir=/var/lib/mysql/  # where the master keeps its binlogs, so MHA can find them; here it is simply the mysql data directory

[server2]
hostname=192.168.192.129
port=3306
candidate_master=1   # candidate master: when the master dies, this host is promoted first, even if it is not the slave with the newest events in the cluster
check_repl_delay=0   # by default MHA will not pick a slave that is more than 100MB of relay logs behind the master, since recovery would take a long time; check_repl_delay=0 makes MHA ignore replication delay when choosing the new master. Very useful together with candidate_master=1, because that candidate must become the new master during a switchover
master_binlog_dir=/var/lib/mysql/

[server3]
hostname=192.168.192.130
port=3306
#candidate_master=1
master_binlog_dir=/var/lib/mysql/

#[server4]
#hostname=host4
#no_master=1
2. Configure relay log purging (on the two slave nodes)
Tips:
During a switchover, MHA relies on the slaves' relay logs to recover the other slaves, so automatic relay log purging must be set to OFF and relay logs purged manually instead.
By default, a slave's relay logs are deleted automatically once the SQL thread has executed them. In an MHA environment, however, these relay logs may be needed to recover other slaves, so automatic
deletion must be disabled. Purging relay logs periodically then has to account for replication delay: on an ext3 filesystem, deleting large files takes time and can cause serious replication delay. To
avoid this, a hard link is created for each relay log first, because deleting a large file through a hard link is fast on linux. (The same hard-link trick is commonly used when dropping large tables in mysql.)
The MHA node package includes the purge_relay_logs tool, which creates hard links for the relay logs, runs SET GLOBAL relay_log_purge=1, waits a few seconds for the SQL thread to switch to a new relay log,
then runs SET GLOBAL relay_log_purge=0. Its parameters are:
--user                     mysql user name
--password                 mysql password
--port                     port number
--workdir                  where to create the relay log hard links, default /var/tmp; hard links fail across filesystems, so give an explicit location. After the script succeeds, the hard-linked relay logs are deleted
--disable_relay_log_purge  by default the script does nothing and exits if relay_log_purge=1; with this flag it sets relay_log_purge to 0 in that case, purges the relay logs, and leaves the variable OFF
Disable automatic relay log purging (on both slave nodes)
[root@node_slave ~]# mysql -uroot -p12345 -e 'set global relay_log_purge=0'
[root@manager_slave ~]# mysql -uroot -p12345 -e 'set global relay_log_purge=0'
Provide a script that purges relay logs periodically
Set up the periodic relay log purge script (on both slave nodes)
[root@node_slave ~]# vim /root/purge_relay_log.sh
#!/bin/bash
user=root
passwd=12345
port=3306
host=localhost
log_dir='/data/masterha/log'
work_dir='/data'
purge='/usr/local/bin/purge_relay_logs'

if [ ! -d $log_dir ]
then
    mkdir -p $log_dir
fi

$purge --user=$user --password=$passwd --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1
Make the script executable
[root@node_slave ~]# chmod 755 /root/purge_relay_log.sh
Add it to crontab to run periodically
[root@node_slave ~]# crontab -e
0 5 * * * /bin/bash /root/purge_relay_log.sh
Test the script
purge_relay_logs removes relay logs without blocking the SQL thread
Let's run purge_relay_logs by hand to see what it does
[root@node_slave ~]# /usr/local/bin/purge_relay_logs --user=root --host=localhost --password=12345 --disable_relay_log_purge --port=3306 --workdir=/data
2018-12-18 17:48:25: purge_relay_logs script started.
Found relay_log.info: /var/lib/mysql/relay-log.info
Opening /var/lib/mysql/node_slave-relay-bin.000001 ..
Opening /var/lib/mysql/node_slave-relay-bin.000002 ..
Executing SET GLOBAL relay_log_purge=1; FLUSH LOGS; sleeping a few seconds so that SQL thread can delete older relay log files (if it keeps up); SET GLOBAL relay_log_purge=0; .. ok.
2018-12-18 17:48:28: All relay log purging operations succeeded.
Run the script itself
[root@node_slave ~]# sh purge_relay_log.sh
A log file has been generated
[root@node_slave ~]# ll /data/masterha/log/
total 4
-rw-r--r-- 1 root root 234 Dec 18 17:49 purge_relay_logs.log
Test MHA
Check the state of the MHA cluster
1. Check SSH within the MHA cluster
------------------------------Check the SSH configuration------------------------------
Check the SSH configuration (run on the manager node, i.e. 192.168.192.130)
Check SSH connectivity from MHA Manager to every MHA Node:
[root@manager_slave ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Tue Dec 18 17:53:34 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Dec 18 17:53:34 2018 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Dec 18 17:53:34 2018 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Dec 18 17:53:34 2018 - [info] Starting SSH connection tests..
Tue Dec 18 17:53:35 2018 - [debug]
Tue Dec 18 17:53:34 2018 - [debug] Connecting via SSH from root@192.168.192.128(192.168.192.128:22) to root@192.168.192.129(192.168.192.129:22)..
Tue Dec 18 17:53:34 2018 - [debug] ok.
Tue Dec 18 17:53:34 2018 - [debug] Connecting via SSH from root@192.168.192.128(192.168.192.128:22) to root@192.168.192.130(192.168.192.130:22)..
Tue Dec 18 17:53:35 2018 - [debug] ok.
Tue Dec 18 17:53:36 2018 - [debug]
Tue Dec 18 17:53:35 2018 - [debug] Connecting via SSH from root@192.168.192.129(192.168.192.129:22) to root@192.168.192.128(192.168.192.128:22)..
Tue Dec 18 17:53:35 2018 - [debug] ok.
Tue Dec 18 17:53:35 2018 - [debug] Connecting via SSH from root@192.168.192.129(192.168.192.129:22) to root@192.168.192.130(192.168.192.130:22)..
Tue Dec 18 17:53:35 2018 - [debug] ok.
Tue Dec 18 17:53:36 2018 - [debug]
Tue Dec 18 17:53:35 2018 - [debug] Connecting via SSH from root@192.168.192.130(192.168.192.130:22) to root@192.168.192.128(192.168.192.128:22)..
Tue Dec 18 17:53:35 2018 - [debug] ok.
Tue Dec 18 17:53:35 2018 - [debug] Connecting via SSH from root@192.168.192.130(192.168.192.130:22) to root@192.168.192.129(192.168.192.129:22)..
Tue Dec 18 17:53:36 2018 - [debug] ok.
Tue Dec 18 17:53:36 2018 - [info] All SSH connection tests passed successfully.
The result above means the SSH configuration is correct
2. Check MySQL replication
------------------------------Check MySQL replication------------------------------
Check the replication environment with the MHA tool (run on the manager node, i.e. 192.168.192.130)
[root@manager_slave ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Here a small hiccup occurred
The command produced the following error
......................
Bareword "FIXME_xxx" not allowed while "strict subs" in use at /usr/local/bin/master_ip_failover line 93.
Execution of /usr/local/bin/master_ip_failover aborted due to compilation errors.
Tue Dec 18 18:28:28 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln229] Failed to get master_ip_failover_script status with return code 255:0.
Tue Dec 18 18:28:28 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations. at /usr/local/bin/masterha_check_repl line 48.
Tue Dec 18 18:28:28 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
Tue Dec 18 18:28:28 2018 - [info] Got exit code 1 (Not master dead).
The error above appeared; the cause:
Failover can be handled in two ways: with a virtual IP address, or with a global configuration file. MHA does not mandate either; the virtual IP approach involves other software, such as keepalived, and also requires editing the master_ip_failover script.
Workaround:
Temporarily comment out the line master_ip_failover_script= /usr/local/bin/master_ip_failover in /etc/masterha/app1.cnf on the manager node.
Re-enable it later, after keepalived has been introduced and the script has been modified.
[root@manager_slave /]# cat /etc/masterha/app1.cnf | grep master_ip_failover_script
#master_ip_failover_script= /usr/local/bin/master_ip_failover
Then check the replication state of the whole mysql cluster again with masterha_check_repl
[root@manager_slave /]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Tue Dec 18 18:32:43 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Dec 18 18:32:43 2018 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Dec 18 18:32:43 2018 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Dec 18 18:32:43 2018 - [info] MHA::MasterMonitor version 0.58.
Tue Dec 18 18:32:45 2018 - [info] GTID failover mode = 0
Tue Dec 18 18:32:45 2018 - [info] Dead Servers:
Tue Dec 18 18:32:45 2018 - [info] Alive Servers:
Tue Dec 18 18:32:45 2018 - [info] 192.168.192.128(192.168.192.128:3306)
Tue Dec 18 18:32:45 2018 - [info] 192.168.192.129(192.168.192.129:3306)
Tue Dec 18 18:32:45 2018 - [info] 192.168.192.130(192.168.192.130:3306)
......................
Tue Dec 18 18:32:48 2018 - [info] Checking replication health on 192.168.192.129..
Tue Dec 18 18:32:48 2018 - [info] ok.
Tue Dec 18 18:32:48 2018 - [info] Checking replication health on 192.168.192.130..
Tue Dec 18 18:32:48 2018 - [info] ok.
Tue Dec 18 18:32:48 2018 - [warning] master_ip_failover_script is not defined.
Tue Dec 18 18:32:48 2018 - [warning] shutdown_script is not defined.
Tue Dec 18 18:32:48 2018 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
Now the whole replication environment checks out OK!
3. Check the MHA Manager status
------------------------------Check the MHA Manager status------------------------------
[root@manager_slave /]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
Note: when monitoring is running this shows "PING_OK"; otherwise it shows "NOT_RUNNING", which means MHA monitoring is not started
Start MHA Manager monitoring
Start it in the background with the following command
[root@manager_slave /]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
Startup parameters:
--remove_dead_master_conf  after a failover, the old master's ip is removed from the configuration file.
--manager_log              where the log is written.
--ignore_last_failover     by default, if MHA detects two consecutive outages less than 8 hours apart it refuses to fail over again, to avoid a
                           ping-pong effect. After a failover, MHA writes an app1.failover.complete file into the manager working directory
                           (/var/log/masterha/app1 above); as long as that file exists, the next switchover is refused unless the file is
                           deleted first. This flag tells MHA to ignore that file, which is convenient here.
Check again whether MHA Manager monitoring is running:
[root@manager_slave /]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:8162) is running(0:PING_OK), master:192.168.192.128
Monitoring is active now, and the master is 192.168.192.128
Check the startup log
[root@manager_slave /]# tail -n20 /var/log/masterha/app1/manager.log
Relay log found at /var/lib/mysql, up to manager_slave-relay-bin.000002
Temporary relay log file is /var/lib/mysql/manager_slave-relay-bin.000002
Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Tue Dec 18 18:35:11 2018 - [info] Slaves settings check done.
Tue Dec 18 18:35:11 2018 - [info]
192.168.192.128(192.168.192.128:3306) (current master)
+--192.168.192.129(192.168.192.129:3306)
 +--192.168.192.130(192.168.192.130:3306)
Tue Dec 18 18:35:11 2018 - [warning] master_ip_failover_script is not defined.
Tue Dec 18 18:35:11 2018 - [warning] shutdown_script is not defined.
Tue Dec 18 18:35:11 2018 - [info] Set master ping interval 1 seconds.
Tue Dec 18 18:35:11 2018 - [warning] secondary_check_script is not defined. It is highly recommended setting it to check master reachability from two or more routes.
Tue Dec 18 18:35:11 2018 - [info] Starting ping health check on 192.168.192.128(192.168.192.128:3306)..
Tue Dec 18 18:35:11 2018 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
The line "Ping(SELECT) succeeded, waiting until MySQL doesn't respond.." means the whole system is now being monitored.
4. Stop MHA Manager monitoring
------------------------------Stop MHA Manager monitoring------------------------------
Stopping is simple: just use the masterha_stop command
[root@manager_slave ~]# masterha_stop --conf=/etc/masterha/app1.cnf
MHA Manager is not running on app1(2:NOT_RUNNING).
Check MHA Manager monitoring again; it is now stopped
[root@manager_slave ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
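The start, stop, and status commands in this section can be wrapped in one small helper so the long nohup line does not have to be retyped. A hedged sketch under this article's app1 paths; by default it only prints the start command instead of assuming masterha_manager is on the PATH:

```shell
#!/bin/bash
# Paths from this article's configuration.
CONF=/etc/masterha/app1.cnf
LOG=/var/log/masterha/app1/manager.log

# Compose the background-start command used earlier in the text.
build_start_cmd() {
    echo "nohup masterha_manager --conf=$CONF --remove_dead_master_conf --ignore_last_failover < /dev/null > $LOG 2>&1 &"
}

case "${1:-show}" in
    start)  eval "$(build_start_cmd)" ;;
    stop)   masterha_stop --conf="$CONF" ;;
    status) masterha_check_status --conf="$CONF" ;;
    show)   echo "would run: $(build_start_cmd)" ;;
esac
```

Usage on the manager node would be e.g. `bash mha.sh status` or `bash mha.sh start`.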
Configure the VIP
Implement the VIP with keepalived
The vip can be managed in two ways: through keepalived, which floats the virtual ip, or through a script that brings the virtual ip up directly (no keepalived- or heartbeat-like software needed)
Managing the vip through keepalived
---------------------------------------------------------Option 1: manage the vip through keepalived---------------------------------------------------------
Download and install keepalived (on both masters; strictly speaking, one is the master (192.168.192.128) and the other is the candidate master (192.168.192.129), which is a slave before any switchover)
[root@node_master ~]# yum -y install openssl-devel
[root@node_master ~]# wget http://www.keepalived.org/software/keepalived-2.0.10.tar.gz
[root@node_master ~]# tar zxf keepalived-2.0.10.tar.gz
[root@node_master ~]# cd keepalived-2.0.10/
[root@node_master keepalived-2.0.10]# ./configure --prefix=/usr/local/keepalived
[root@node_master keepalived-2.0.10]# make && make install
[root@node_master keepalived-2.0.10]# cp keepalived/etc/init.d/keepalived /etc/init.d/
[root@node_master keepalived-2.0.10]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@node_master keepalived-2.0.10]# mkdir /etc/keepalived
[root@node_master keepalived-2.0.10]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@node_master keepalived-2.0.10]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
Configure keepalived
------------Configuration on the master (node 192.168.192.128)------------------
[root@node_master keepalived-2.0.10]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@node_master keepalived-2.0.10]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     1024331014@qq.com
   }
   notification_email_from 1024331014@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens37
    virtual_router_id 51
    priority 150
    advert_int 1
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.192.131
    }
}
router_id MySQL-HA names this keepalived group. The virtual ip 192.168.192.131 is bound to this host's ens37 interface, the state is set to BACKUP,
keepalived runs in non-preemptive mode (nopreempt), and priority 150 sets the priority.
------------Configuration on the candidate master (node 192.168.192.129)------------------
[root@node_slave keepalived-2.0.10]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     1024331014@qq.com
   }
   notification_email_from 1024331014@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens37
    virtual_router_id 51
    priority 120
    advert_int 1
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.192.131
    }
}
Start the keepalived service
--------------Start on the master and check the log (node 192.168.192.128)------------------------------
[root@node_master keepalived-2.0.10]# /etc/init.d/keepalived start
Starting keepalived (via systemctl):                       [  OK  ]
Check whether the VIP was configured successfully (note the 192.168.192.131 address below)
[root@node_master keepalived-2.0.10]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:19:14:db brd ff:ff:ff:ff:ff:ff
inet 192.168.52.129/24 brd 192.168.52.255 scope global noprefixroute dynamic ens33
valid_lft 1250sec preferred_lft 1250sec
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:19:14:e5 brd ff:ff:ff:ff:ff:ff
inet 192.168.192.128/24 brd 192.168.192.255 scope global noprefixroute ens37
valid_lft forever preferred_lft forever
inet 192.168.192.131/32 scope global ens37
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:4b:9c:ed:04 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
[root@node_master keepalived-2.0.10]# tail -10 /var/log/messages
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: Sending gratuitous ARP on ens37 for 192.168.192.131
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: (VI_1) Sending/queueing gratuitous ARPs on ens37 for 192.168.192.131
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: Sending gratuitous ARP on ens37 for 192.168.192.131
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: Sending gratuitous ARP on ens37 for 192.168.192.131
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: Sending gratuitous ARP on ens37 for 192.168.192.131
Dec 19 10:43:05 node_master Keepalived_vrrp[6856]: Sending gratuitous ARP on ens37 for 192.168.192.131
Dec 19 10:43:07 node_master crond: sendmail: fatal: parameter inet_interfaces: no local interface found for ::1
Dec 19 10:44:02 node_master systemd: Started Session 90 of user root.
Dec 19 10:44:02 node_master systemd: Starting Session 90 of user root.
Dec 19 10:44:08 node_master crond: sendmail: fatal: parameter inet_interfaces: no local interface found for ::1
The vip is now bound to the master node 192.168.192.128
--------------Start on the candidate master (node 192.168.192.129)----------------------------
[root@node_slave keepalived-2.0.10]# /etc/init.d/keepalived start
Starting keepalived (via systemctl):                       [  OK  ]
[root@node_slave keepalived-2.0.10]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:f8:41:0f brd ff:ff:ff:ff:ff:ff
inet 192.168.52.130/24 brd 192.168.52.255 scope global noprefixroute dynamic ens33
valid_lft 1083sec preferred_lft 1083sec
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:f8:41:19 brd ff:ff:ff:ff:ff:ff
inet 192.168.192.129/24 brd 192.168.192.255 scope global noprefixroute ens37
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b0:14:23:57 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
The output above shows that keepalived is configured successfully.
Note:
Both servers' keepalived instances are set to BACKUP. keepalived has two modes, master->backup and backup->backup, and they differ substantially.
In master->backup mode, once the master dies the virtual ip floats to the backup automatically; when the repaired master comes back and keepalived starts, it takes the virtual ip back, even with non-preemptive mode (nopreempt) configured.
In backup->backup mode, when the master dies the virtual ip floats to the backup, but when the old master and its keepalived service come back, it does not grab the new master's virtual ip, even if its priority is higher.
To reduce the number of vip moves, the repaired master is usually brought back as the new backup.
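Which node currently holds the vip can be verified with ip addr, as in the listings above. A small sketch, assuming the vip and interface from this article and the usual iproute2 output format:

```shell
#!/bin/bash
VIP=192.168.192.131
IFACE=ens37

# $1 is the output of `ip addr show dev $IFACE`; returns 0 if the vip is bound.
holds_vip() {
    echo "$1" | grep -q "inet $VIP/"
}

# On a real node:
#   if holds_vip "$(ip addr show dev $IFACE)"; then echo "this node holds the vip"; fi
sample="inet 192.168.192.128/24 brd 192.168.192.255 scope global ens37
inet 192.168.192.131/32 scope global ens37"
holds_vip "$sample" && echo "vip bound"
```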
Introduce keepalived into MHA
Introduce keepalived into MHA (when the MySQL process dies, MHA stops keepalived)
To bring keepalived under MHA's control, only the script triggered on switchover, master_ip_failover, needs to change: add handling that deals with keepalived when the master goes down.
Edit /usr/local/bin/master_ip_failover; after the changes it looks as follows:
[root@manager_slave ~]# cp /usr/local/bin/master_ip_failover /usr/local/bin/master_ip_failover.bak
[root@manager_slave ~]# vim /usr/local/bin/master_ip_failover   # note: the vip address in the script must be adjusted for your environment
#!/usr/bin/env perl

use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);

my $vip = '192.168.192.131';
my $ssh_start_vip = "/etc/init.d/keepalived start";
my $ssh_stop_vip = "/etc/init.d/keepalived stop";

GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);

exit &main();

sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

if ( $command eq "stop" || $command eq "stopssh" ) {
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
#`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
exit 0;
}
else {
&usage();
exit 1;
}
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
return 0 unless ($ssh_user);
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
The script is modified; now uncomment master_ip_failover_script in /etc/masterha/app1.cnf and check the cluster state again to see whether the error is gone
[root@manager_slave ~]# grep 'master_ip_failover_script' /etc/masterha/app1.cnf
master_ip_failover_script= /usr/local/bin/master_ip_failover
[root@manager_slave ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
....................
Checking the Status of the script.. OK
Wed Dec 19 10:55:37 2018 - [info] OK.
Wed Dec 19 10:55:37 2018 - [warning] shutdown_script is not defined.
Wed Dec 19 10:55:37 2018 - [info] Got exit code 0 (Not master dead). MySQL Replication Health is OK.
Replication is healthy!
The logic added to /usr/local/bin/master_ip_failover means: when the master database fails, MHA triggers a switchover, MHA Manager stops the keepalived service on the old master,
the virtual ip floats to the candidate slave, and the switchover completes. You can of course also hook a script into keepalived itself that monitors whether mysql is running and kills the keepalived process when mysql is down.
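The monitoring idea just described (stop keepalived when mysql stops responding, so the vip floats away) can be sketched as follows. The mysqladmin ping output parsing and the credentials are assumptions; adapt before use:

```shell
#!/bin/bash
# Decide what to do based on the output of `mysqladmin ping`.
decide_action() {
    case "$1" in
        *"mysqld is alive"*) echo "keep" ;;
        *)                   echo "stop-keepalived" ;;
    esac
}

# Real monitoring loop (run on the node holding the vip):
# while true; do
#     out=$(mysqladmin -uroot -p12345 ping 2>/dev/null)
#     if [ "$(decide_action "$out")" = "stop-keepalived" ]; then
#         /etc/init.d/keepalived stop   # vip floats to the other node
#         break
#     fi
#     sleep 2
# done
decide_action "mysqld is alive"
```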
Implement the VIP with a script
Managing the VIP through a script
---------------------------------------------------------Option 2: manage the VIP through a script---------------------------------------------------------
To test this second approach, keepalived was stopped first
Here /usr/local/bin/master_ip_failover is modified; the result is shown below. A vip also has to be bound manually on the master
First bind the vip on the master node (192.168.192.128)
[root@node_master ~]# ifconfig ens37:0 192.168.192.131/24   # mind the interface name and the address
[root@node_master ~]# ifconfig
[root@node_master ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.52.129 netmask 255.255.255.0 broadcast 192.168.52.255
ether 00:0c:29:19:14:db txqueuelen 1000 (Ethernet)
RX packets 7850 bytes 8852518 (8.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2478 bytes 176378 (172.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.192.128 netmask 255.255.255.0 broadcast 192.168.192.255
ether 00:0c:29:19:14:e5 txqueuelen 1000 (Ethernet)
RX packets 4200 bytes 448286 (437.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5620 bytes 2671664 (2.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ens37:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.192.131 netmask 255.255.255.0 broadcast 192.168.192.255
ether 00:0c:29:19:14:e5 txqueuelen 1000 (Ethernet) lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 在manager(192.168.192.130)节点修改/usr/local/bin/master_ip_failover
[root@manager_slave ~]# cp /usr/local/bin/master_ip_failover /usr/local/bin/master_ip_failover.bak.keep
[root@manager_slave ~]# vim /usr/local/bin/master_ip_failover    # note: the VIP address in the script must be changed to match yours
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);

my $vip = '192.168.192.131/24';
my $key = '0';    # matches the ens37:0 alias bound on the master above
my $ssh_start_vip = "/sbin/ifconfig ens37:$key $vip";    # note: the interface name must match your NIC (ens37 here)
my $ssh_stop_vip  = "/sbin/ifconfig ens37:$key down";

GetOptions(
    'command=s' => \$command,
    'ssh_user=s' => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s' => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s' => \$new_master_host,
    'new_master_ip=s' => \$new_master_ip,
    'new_master_port=i' => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

Check that MHA replication is healthy:
[root@manager_slave ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Checking the Status of the script.. OK
Wed Dec 19 11:06:36 2018 - [info] OK.
Wed Dec 19 11:06:36 2018 - [warning] shutdown_script is not defined.
Wed Dec 19 11:06:36 2018 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.

Note:
Make sure the master_ip_failover_script line in /etc/masterha/app1.cnf has been uncommented.
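For reference, the relevant part of /etc/masterha/app1.cnf then reads as follows (matching the grep output shown earlier; placing the line in the [server default] section follows the usual MHA layout):

```
[server default]
master_ip_failover_script= /usr/local/bin/master_ip_failover
```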
To prevent split-brain, managing the virtual IP with a script, rather than with keepalived, is recommended for production. At this point the basic MHA cluster is fully configured.
Next comes the actual testing: a few experiments will show how MHA really works.
Failover testing
Automatic failover
Automatic failover (MHA Manager must be started first, otherwise no automatic switchover can happen; manual switchover, by contrast, does not require the MHA Manager monitor to be running)
Start MHA Manager monitoring (run on 192.168.192.130; if it is already running, there is no need to start it again).
Use the following command to launch it in the background:
[root@manager_slave /]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

Stop the MySQL service on the master (192.168.192.128) to simulate a master failure and trigger an automatic failover:
[root@node_master ~]# systemctl stop mysqld

Examine the MHA switchover log to follow the whole process. On the manager node (192.168.192.130):
[root@manager_slave ~]# cat /var/log/masterha/app1/manager.log
................
................
----- Failover Report -----

app1: MySQL Master failover 192.168.192.128(192.168.192.128:3306) to 192.168.192.129(192.168.192.129:3306) succeeded

Master 192.168.192.128(192.168.192.128:3306) is down!

Check MHA Manager logs at manager_slave:/var/log/masterha/app1/manager.log for details.

Started automated(non-interactive) failover.
Invalidated master IP address on 192.168.192.128(192.168.192.128:3306)
The latest slave 192.168.192.129(192.168.192.129:3306) has all relay logs for recovery.
Selected 192.168.192.129(192.168.192.129:3306) as a new master.
192.168.192.129(192.168.192.129:3306): OK: Applying all logs succeeded.
192.168.192.129(192.168.192.129:3306): OK: Activated master IP address.
192.168.192.130(192.168.192.130:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.192.130(192.168.192.130:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.192.129(192.168.192.129:3306)
192.168.192.129(192.168.192.129:3306): Resetting slave info succeeded.
Master failover to 192.168.192.129(192.168.192.129:3306) completed successfully.
The final line, "Master failover to 192.168.192.129(192.168.192.129:3306) completed successfully.", shows that the candidate master has been promoted.
The output above reveals the whole MHA switchover process, which consists of the following steps:
1) Configuration check phase: the configuration of the entire cluster is validated
2) Handling of the dead master: this includes removing the virtual IP and, optionally, shutting down the host (I have not implemented the shutdown part yet; it needs further study)
3) Copying the relay-log difference between the dead master and the most up-to-date slave, and saving it under MHA Manager's working directory
4) Identifying the slave with the most recent updates
5) Applying the binlog events saved from the master
6) Promoting one slave to be the new master
7) Pointing the remaining slaves at the new master for replication

Finally, restart MHA Manager monitoring and check which node in the cluster is now the master:
[root@manager_slave ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
[root@manager_slave ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:7036) is running(0:PING_OK), master:192.168.192.129
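Note that masterha_manager exits after performing a failover, which is why it has to be started again above; some deployments therefore wrap it in a small watchdog. The sketch below shows only the decision logic, keyed on the status-line format that masterha_check_status printed above; the canned status string is a stand-in for a real `masterha_check_status --conf=/etc/masterha/app1.cnf` call:

```shell
#!/bin/sh
# Decide whether the MHA manager needs a restart, based on the
# masterha_check_status output format shown above.
# Real use: status_line=$(masterha_check_status --conf=/etc/masterha/app1.cnf)
status_line='app1 (pid:7036) is running(0:PING_OK), master:192.168.192.129'

case "$status_line" in
    *PING_OK*) verdict="manager healthy" ;;
    *)         verdict="manager down - restart masterha_manager" ;;
esac
echo "$verdict"
```

If the verdict is "manager down", the watchdog would re-run the nohup masterha_manager command shown earlier.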
Manual failover
Manual failover (MHA Manager must NOT be running)
A manual failover is for the scenario where automatic MHA switchover is not enabled for the business: when the master fails, an operator invokes MHA by hand to perform the switchover. The commands are as follows. First, make sure MHA Manager is stopped:

[root@manager_slave ~]# masterha_stop --conf=/etc/masterha/app1.cnf

Note: if MHA Manager detects that no server is actually dead, it will report an error and abort the failover.
[root@manager_slave ~]# masterha_master_switch --master_state=dead --conf=/etc/masterha/app1.cnf --dead_master_host=192.168.192.128 --dead_master_port=3306 --new_master_host=192.168.192.129 --new_master_port=3306 --ignore_last_failover
The output asks whether to proceed with the switchover (yes/NO): enter yes
.............
.............
----- Failover Report -----

app1: MySQL Master failover 192.168.192.128(192.168.192.128:3306) to 192.168.192.129(192.168.192.129:3306) succeeded

Master 192.168.192.128(192.168.192.128:3306) is down!

Check MHA Manager logs at manager_slave for details.

Started manual(interactive) failover.
Invalidated master IP address on 192.168.192.128(192.168.192.128:3306)
The latest slave 192.168.192.129(192.168.192.129:3306) has all relay logs for recovery.
Selected 192.168.192.129(192.168.192.129:3306) as a new master.
192.168.192.129(192.168.192.129:3306): OK: Applying all logs succeeded.
192.168.192.129(192.168.192.129:3306): OK: Activated master IP address.
192.168.192.130(192.168.192.130:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.192.130(192.168.192.130:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.192.129(192.168.192.129:3306)
192.168.192.129(192.168.192.129:3306): Resetting slave info succeeded.
Master failover to 192.168.192.129(192.168.192.129:3306) completed successfully.

The output above shows the switchover succeeded: with master 192.168.192.128 "down", we manually promoted 192.168.192.129 to be the new master.
Online switchover
In many situations you need to migrate the current master to another machine: the master's hardware is failing, its RAID controller card needs rebuilding, you are moving the master to better-performing hardware, and so on. Maintenance on the master degrades performance and forces downtime during which, at minimum, no data can be written. In addition, blocking or killing currently running sessions can leave the two masters with inconsistent data.
MHA provides fast switchover with graceful write blocking: the switchover itself takes only 0.5-2 seconds, during which writes are blocked. In many cases a 0.5-2 second write block is acceptable, so switching masters does not require scheduling a maintenance window.
The rough sequence of an MHA online switchover:
1) Verify the replication setup and identify the current master
2) Determine the new master
3) Block writes to the current master
4) Wait for all slaves to catch up with replication
5) Grant writes on the new master
6) Repoint the slaves
Note two issues the application architecture must consider for an online switchover:
1) Automatically identifying master and slaves (the master machine can change); using a VIP basically solves this.
2) Load balancing (you can define an approximate read/write ratio and the load share each machine can carry; when a machine leaves the cluster, this must be accounted for).
To guarantee full data consistency and complete the switchover in the shortest possible time, MHA's online switchover only succeeds when all of the following conditions are met; otherwise it fails:
1) The IO thread is running on every slave
2) The SQL thread is running on every slave
3) Seconds_Behind_Master in the SHOW SLAVE STATUS output is less than or equal to running_updates_limit seconds on every slave (if running_updates_limit is not specified for the switchover, it defaults to 1 second)
4) On the master, no update in the SHOW PROCESSLIST output has been running longer than running_updates_limit seconds
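The slave-side conditions above can be checked mechanically by parsing SHOW SLAVE STATUS output. The sketch below assumes the standard `mysql -e 'SHOW SLAVE STATUS\G'` field layout; the sample text is canned with hypothetical values so the parsing logic is self-contained, and in practice you would feed it real output:

```shell
#!/bin/sh
# Check the online-switchover preconditions for one slave:
# IO thread running, SQL thread running, lag <= running_updates_limit.
check_slave() {
    io=$(printf '%s\n' "$1"  | awk -F': ' '/Slave_IO_Running:/  {print $2}')
    sql=$(printf '%s\n' "$1" | awk -F': ' '/Slave_SQL_Running:/ {print $2}')
    lag=$(printf '%s\n' "$1" | awk -F': ' '/Seconds_Behind_Master:/ {print $2}')
    limit=${2:-1}    # running_updates_limit defaults to 1 second
    if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ] && [ "$lag" -le "$limit" ]; then
        echo "slave OK (lag=${lag}s, limit=${limit}s)"
    else
        echo "slave NOT ready (io=$io sql=$sql lag=${lag}s)"
    fi
}

# Canned sample of the relevant SHOW SLAVE STATUS\G fields (hypothetical values):
sample='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'

check_slave "$sample" 10000    # prints: slave OK (lag=0s, limit=10000s)
```

A limit of 10000 matches the --running_updates_limit=10000 used in the switchover command below; note the sketch does not handle Seconds_Behind_Master reporting NULL (replication stopped), which the thread checks would catch first anyway.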
The online switchover steps are as follows.
First, stop MHA monitoring on the manager node:

[root@manager_slave ~]# masterha_stop --conf=/etc/masterha/app1.cnf

Next, perform the online switchover (simulating an online master switch: the original master 192.168.192.128 becomes a slave, and 192.168.192.129 is promoted to be the new master):
[root@manager_slave ~]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=192.168.192.129 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000
Running it produced the following error:
..........
Starting master switch from 192.168.192.128(192.168.192.128:3306) to 192.168.192.129(192.168.192.129:3306)? (yes/NO): yes
Thu Dec 20 12:04:47 2018 - [info] Checking whether 192.168.192.129(192.168.192.129:3306) is ok for the new master..
Thu Dec 20 12:04:47 2018 - [info] ok.
Thu Dec 20 12:04:47 2018 - [info] 192.168.192.128(192.168.192.128:3306): SHOW SLAVE STATUS returned empty result. To check replication filtering rules, temporarily executing CHANGE MASTER to a dummy host.
Thu Dec 20 12:04:47 2018 - [info] 192.168.192.128(192.168.192.128:3306): Resetting slave pointing to the dummy host.
Thu Dec 20 12:04:47 2018 - [info] ** Phase 1: Configuration Check Phase completed.
Thu Dec 20 12:04:47 2018 - [info]
Thu Dec 20 12:04:47 2018 - [info] * Phase 2: Rejecting updates Phase..
Thu Dec 20 12:04:47 2018 - [info]
Thu Dec 20 12:04:47 2018 - [info] Executing master ip online change script to disable write on the current master:
Thu Dec 20 12:04:47 2018 - [info] /usr/local/bin/master_ip_online_change --command=stop --orig_master_host=192.168.192.128 --orig_master_ip=192.168.192.128 --orig_master_port=3306 --orig_master_user='manager' --new_master_host=192.168.192.129 --new_master_ip=192.168.192.129 --new_master_port=3306 --new_master_user='manager' --orig_master_ssh_user=root --new_master_ssh_user=root --orig_master_is_new_slave --orig_master_password=xxx --new_master_password=xxx
Thu Dec 20 12:04:47 2018 811566 Set read_only on the new master.. ok.
Thu Dec 20 12:04:47 2018 815337 Drpping app user on the orig master..
Got Error: Undefined subroutine &main::FIXME_xxx_drop_app_user called at /usr/local/bin/master_ip_online_change line 152.
Thu Dec 20 12:04:47 2018 - [error][/usr/local/share/perl5/MHA/ManagerUtil.pm, ln177] Got ERROR: at /usr/local/bin/masterha_master_switch line 53.

Cause of the error: the subroutine FIXME_xxx_drop_app_user is not defined. As I am not fluent in Perl, for now I simply comment out the drop-user / FIXME_xxx lines; this does not affect the rest of the process.
Solution:

[root@manager_slave ~]# cp /usr/local/bin/master_ip_online_change /usr/local/bin/master_ip_online_change.bak    # back up the file before modifying it
[root@manager_slave ~]# vim /usr/local/bin/master_ip_online_change    # edit the file
Find the following two calls and comment them out with #:

FIXME_xxx_drop_app_user($orig_master_handler);
FIXME_xxx_create_app_user($new_master_handler);

Re-run the online switchover:
[root@manager_slave ~]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=192.168.192.129 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000
........
Thu Dec 20 12:11:06 2018 - [info] -- Slave switch on host 192.168.192.130(192.168.192.130:3306) succeeded.
Thu Dec 20 12:11:06 2018 - [info] Unlocking all tables on the orig master:
Thu Dec 20 12:11:06 2018 - [info] Executing UNLOCK TABLES..
Thu Dec 20 12:11:06 2018 - [info] ok.
Thu Dec 20 12:11:06 2018 - [info] Starting orig master as a new slave..
Thu Dec 20 12:11:06 2018 - [info] Resetting slave 192.168.192.128(192.168.192.128:3306) and starting replication from the new master 192.168.192.129(192.168.192.129:3306)..
Thu Dec 20 12:11:06 2018 - [info] Executed CHANGE MASTER.
Thu Dec 20 12:11:06 2018 - [info] Slave started.
Thu Dec 20 12:11:06 2018 - [info] All new slave servers switched successfully.
Thu Dec 20 12:11:06 2018 - [info]
Thu Dec 20 12:11:06 2018 - [info] * Phase 5: New master cleanup phase..
Thu Dec 20 12:11:06 2018 - [info]
Thu Dec 20 12:11:06 2018 - [info] 192.168.192.129: Resetting slave info succeeded.
Thu Dec 20 12:11:06 2018 - [info] Switching master to 192.168.192.129(192.168.192.129:3306) completed successfully.

The output above shows the online switchover succeeded.
What the parameters mean:
--orig_master_is_new_slave: with this flag, the original master becomes a slave of the new master after the switchover; without it, the original master is simply left stopped
--running_updates_limit=10000: during a switchover MHA refuses to switch if the candidate master is lagging; this flag permits the switch as long as the replication delay is within the given number of seconds. How long the switchover itself takes, however, is determined by the size of the relay logs applied during recovery
With that, the MySQL high-availability solution, MHA, is fully deployed!