The previous post described how to configure the MHA architecture, so this post does not repeat those details; it only covers the MySQL replication setup, this time using GTID plus semi-synchronous replication.
The steps are the same as in the previous post; only the MySQL replication setup differs. A detailed GTID walkthrough is here: https://www.cnblogs.com/wxzhe/p/10055154.html
In the previous post the replication was based on binlog file name and position; simply replace that part with the GTID-based procedure below. Only the GTID setup is shown here.
The four servers are assigned as follows:
MHA manager node: 10.0.102.214
MySQL master: 10.0.102.204
MySQL slave 1: 10.0.102.179 (this node can act as the candidate master)
MySQL slave 2: 10.0.102.221
Setting up GTID-based replication
Step 1: make sure the data on all three servers is consistent (they start from the same state).
Step 2: create a replication account on the master and on the candidate master, with the same user name and password on both (an example follows this list).
Step 3: enable GTID and load the semi-synchronous replication plugins.
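For step 2, a minimal sketch of the replication account; the user name repl and password 123456 match the CHANGE MASTER statement used further down, while the 10.0.102.% host mask is only an example:

mysql> create user 'repl'@'10.0.102.%' identified by '123456';
mysql> grant replication slave on *.* to 'repl'@'10.0.102.%';
mysql> flush privileges;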
Add the following parameters to the configuration file on all three servers:
plugin_dir=/usr/local/mysql/lib/plugin/      # MySQL 5.7 was installed from source here; adjust the path if you installed from an rpm package
plugin_load="rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"      # load both semi-sync plugins on every server; use a single plugin_load line, since repeating the option would override the first value
gtid-mode=on                                 # enable GTID
enforce-gtid-consistency                     # enforce global GTID consistency
log-bin=
character_set_server=utf8                    # character set
log_slave_updates                            # must be enabled for GTID replication
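Note that loading the plugins only makes semi-sync available; it still has to be switched on. A minimal sketch of enabling it, not part of the original config above (the timeout value is just an example):

# my.cnf on the master and on the candidate master
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=1000      # ms; fall back to asynchronous replication if no slave ACK arrives in time

# my.cnf on the slaves (the candidate master is also a slave, so it gets both)
rpl_semi_sync_slave_enabled=1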
After updating the configuration file, restart the servers and then run the following on each slave:
mysql> change master to master_host="10.0.102.204", master_user="repl", master_password="123456", master_auto_position=1;
Query OK, 0 rows affected, 2 warnings (0.09 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)
If the commands above report no errors, check the replication state with show slave status.
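In the output, the fields worth verifying are roughly the following (a sketch; your GTID sets will differ), and the two Rpl_semi_sync status variables confirm that semi-sync is actually active once it has been enabled:

mysql> show slave status\G
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                Auto_Position: 1
           Retrieved_Gtid_Set: ...
            Executed_Gtid_Set: ...

# on the master
mysql> show status like 'Rpl_semi_sync_master_status';
mysql> show status like 'Rpl_semi_sync_master_clients';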
Checking MHA status
SSH check:
[root@test3 ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Sun Dec 9 11:42:50 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Dec 9 11:42:50 2018 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Sun Dec 9 11:42:50 2018 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Sun Dec 9 11:42:50 2018 - [info] Starting SSH connection tests..
Sun Dec 9 11:42:51 2018 - [debug]
Sun Dec 9 11:42:50 2018 - [debug]  Connecting via SSH from root@10.0.102.204(10.0.102.204:22) to root@10.0.102.179(10.0.102.179:22)..
Sun Dec 9 11:42:51 2018 - [debug]   ok.
Sun Dec 9 11:42:51 2018 - [debug]  Connecting via SSH from root@10.0.102.204(10.0.102.204:22) to root@10.0.102.221(10.0.102.221:22)..
Sun Dec 9 11:42:51 2018 - [debug]   ok.
Sun Dec 9 11:42:51 2018 - [debug]
Sun Dec 9 11:42:51 2018 - [debug]  Connecting via SSH from root@10.0.102.179(10.0.102.179:22) to root@10.0.102.204(10.0.102.204:22)..
Sun Dec 9 11:42:51 2018 - [debug]   ok.
Sun Dec 9 11:42:51 2018 - [debug]  Connecting via SSH from root@10.0.102.179(10.0.102.179:22) to root@10.0.102.221(10.0.102.221:22)..
Sun Dec 9 11:42:51 2018 - [debug]   ok.
Sun Dec 9 11:42:52 2018 - [debug]
Sun Dec 9 11:42:51 2018 - [debug]  Connecting via SSH from root@10.0.102.221(10.0.102.221:22) to root@10.0.102.204(10.0.102.204:22)..
Sun Dec 9 11:42:52 2018 - [debug]   ok.
Sun Dec 9 11:42:52 2018 - [debug]  Connecting via SSH from root@10.0.102.221(10.0.102.221:22) to root@10.0.102.179(10.0.102.179:22)..
Sun Dec 9 11:42:52 2018 - [debug]   ok.
Sun Dec 9 11:42:52 2018 - [info] All SSH connection tests passed successfully.
masterha_check_ssh --conf=/etc/masterha/app1.cnf
Replication check:
Sun Dec 9 12:14:28 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Dec 9 12:14:28 2018 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Sun Dec 9 12:14:28 2018 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Sun Dec 9 12:14:28 2018 - [info] MHA::MasterMonitor version 0.56.
Sun Dec 9 12:14:28 2018 - [info] GTID failover mode = 1
Sun Dec 9 12:14:28 2018 - [info] Dead Servers:
Sun Dec 9 12:14:28 2018 - [info] Alive Servers:
Sun Dec 9 12:14:28 2018 - [info]   10.0.102.204(10.0.102.204:3306)
Sun Dec 9 12:14:28 2018 - [info]   10.0.102.179(10.0.102.179:3306)
Sun Dec 9 12:14:28 2018 - [info]   10.0.102.221(10.0.102.221:3306)
Sun Dec 9 12:14:28 2018 - [info] Alive Slaves:
Sun Dec 9 12:14:28 2018 - [info]   10.0.102.179(10.0.102.179:3306)  Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Sun Dec 9 12:14:28 2018 - [info]     GTID ON
Sun Dec 9 12:14:28 2018 - [info]     Replicating from 10.0.102.204(10.0.102.204:3306)
Sun Dec 9 12:14:28 2018 - [info]     Primary candidate for the new Master (candidate_master is set)
Sun Dec 9 12:14:28 2018 - [info]   10.0.102.221(10.0.102.221:3306)  Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Sun Dec 9 12:14:28 2018 - [info]     GTID ON
Sun Dec 9 12:14:28 2018 - [info]     Replicating from 10.0.102.204(10.0.102.204:3306)
Sun Dec 9 12:14:28 2018 - [info]     Not candidate for the new Master (no_master is set)
Sun Dec 9 12:14:28 2018 - [info] Current Alive Master: 10.0.102.204(10.0.102.204:3306)
Sun Dec 9 12:14:28 2018 - [info] Checking slave configurations..
Sun Dec 9 12:14:28 2018 - [info]  read_only=1 is not set on slave 10.0.102.179(10.0.102.179:3306).
Sun Dec 9 12:14:28 2018 - [info]  read_only=1 is not set on slave 10.0.102.221(10.0.102.221:3306).
Sun Dec 9 12:14:28 2018 - [info] Checking replication filtering settings..
Sun Dec 9 12:14:28 2018 - [info]  binlog_do_db= , binlog_ignore_db=
Sun Dec 9 12:14:28 2018 - [info]  Replication filtering check ok.
Sun Dec 9 12:14:28 2018 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Sun Dec 9 12:14:28 2018 - [info] Checking SSH publickey authentication settings on the current master..
Sun Dec 9 12:14:28 2018 - [info] HealthCheck: SSH to 10.0.102.204 is reachable.
Sun Dec 9 12:14:28 2018 - [info]
10.0.102.204(10.0.102.204:3306) (current master)
 +--10.0.102.179(10.0.102.179:3306)
 +--10.0.102.221(10.0.102.221:3306)
Sun Dec 9 12:14:28 2018 - [info] Checking replication health on 10.0.102.179..
Sun Dec 9 12:14:28 2018 - [info]  ok.
Sun Dec 9 12:14:28 2018 - [info] Checking replication health on 10.0.102.221..
Sun Dec 9 12:14:28 2018 - [info]  ok.
Sun Dec 9 12:14:28 2018 - [info] Checking master_ip_failover_script status:
Sun Dec 9 12:14:28 2018 - [info]   /usr/local/bin/master_ip_failover --ssh_user=root --command=status --ssh_user=root --orig_master_host=10.0.102.204 --orig_master_ip=10.0.102.204 --orig_master_port=3306

IN SCRIPT TEST====service keepalived stop==service keepalived start===

Checking the Status of the script.. OK
Sun Dec 9 12:14:28 2018 - [info]  OK.
Sun Dec 9 12:14:28 2018 - [warning] shutdown_script is not defined.
Sun Dec 9 12:14:28 2018 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
masterha_check_repl --conf=/etc/masterha/app1.cnf
The replication check output includes GTID-related hints (GTID failover mode = 1, GTID ON).
If both checks above pass without errors, the MHA monitor can be started:
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover &
Check MHA's running status:
[root@test3 ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:22124) is running(0:PING_OK), master:10.0.102.204
# app1 is running, and the current master of the cluster is 10.0.102.204
At this point the MHA cluster is fully set up; if the master is stopped, the candidate slave will automatically be promoted to master.
Configuring VIP failover
The current cluster master is 204. Picture this scenario: a front-end application is connected to this database when the master goes down for some reason. From MHA's point of view we can promote the candidate master, 179, and keep the cluster running, but the front-end application cannot have the database IP in its source code edited every time the cluster fails over. We need a VIP in front of the application, one that always points to the database server that is currently serving.
MHA can manage this VIP in two ways: with keepalived, or with MHA's own master_ip_failover script.
Using keepalived for VIP failover
Install keepalived on the current master and on the server that will act as the candidate master.
# a plain yum install is enough
yum install -y keepalived
Then edit the configuration file; the yum package installs it at /etc/keepalived/keepalived.conf. The same file is edited on both the current master (10.0.102.204) and the candidate master (10.0.102.179).
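For reference, a minimal sketch of what keepalived.conf can look like for this setup, assuming the VIP 10.0.102.110 that is used later in this post; router_id, virtual_router_id, priority and auth_pass are placeholder values, and only priority differs between 204 and 179:

! Configuration File for keepalived

global_defs {
    router_id MHA_MySQL            # placeholder id
}

vrrp_instance VI_1 {
    state BACKUP                   # both nodes use BACKUP mode (see the note below)
    interface eth0
    virtual_router_id 51           # must match on both nodes
    priority 100                   # e.g. 100 on 204, 90 on 179
    advert_int 1
    nopreempt                      # do not grab the VIP back after recovery
    authentication {
        auth_type PASS
        auth_pass 1111             # placeholder
    }
    virtual_ipaddress {
        10.0.102.110
    }
}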
Note: both servers' keepalived instances are configured in BACKUP mode. keepalived has two deployment patterns, master->backup and backup->backup, and they behave quite differently. In master->backup mode, when the master fails the VIP automatically floats to the backup, but once the old master is repaired and keepalived is started again, it grabs the VIP back, even if non-preemption (nopreempt) is configured. In backup->backup mode, the VIP also floats to the backup when the master fails, but when the old master and its keepalived come back, it does not preempt the new master's VIP, even if its priority is higher. To reduce the number of VIP moves, the repaired old master is usually brought back as the new standby.
Once keepalived is configured, start it; you can first verify on its own that keepalived actually moves the VIP.
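Keepalived can be exercised by itself before MHA gets involved, for example:

# start keepalived on both servers
service keepalived start

# see which node currently holds the VIP
ip addr show eth0 | grep 10.0.102.110

# stop keepalived on that node and confirm the VIP appears on the other one
service keepalived stop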
With keepalived working, the next step is the failover script.
Note: since we want to test VIP failover here, MHA has to be told where the failover script lives; this is set in /etc/masterha/app1.cnf, whose relevant lines are:

master_ip_failover_script=/usr/local/bin/master_ip_failover
secondary_check_script=masterha_secondary_check -s test1 -s mgt01
# shutdown_script is left unset
...
The failover script has to be written by hand. I started from the one given in http://www.ywnds.com/?p=8116, but during testing MHA could not get the failover status from it, so I tweaked it slightly (removed two variable references). The working version is below.
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $ssh_start_vip = "service keepalived start";
#my $ssh_start_vip = "systemctl start keepalived.service";
#my $ssh_stop_vip = "systemctl stop keepalived.service";
my $ssh_stop_vip = "service keepalived stop";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        #`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enable the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disable the VIP on the old_master
sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
cat master_ip_failover
After setting up the script, check replication again:
masterha_check_repl --conf=/etc/masterha/app1.cnf
If the replication check reports an error about the failover script, you can run the command below by itself to see the script's status and its error output:
/usr/local/bin/master_ip_failover --ssh_user=root --command=status --ssh_user=root --orig_master_host=10.0.102.179 --orig_master_ip=10.0.102.179 --orig_master_port=3306
Start the MHA monitor; if it starts cleanly with no errors, the configuration is complete.
The current state is:
204 is the current master and holds the VIP.
179 is the candidate master.
221 is a slave only.
Stop the database service on 204 and see whether the VIP moves to 179 and whether 179 also takes over as the cluster master.
Stop the database on 204. After mysqld stops, checking the network interfaces on 204 shows that the VIP is no longer bound there.
Check the VIP on 179:
Checking the interfaces on 179 shows that the VIP has floated over, and 179 has become the master.
You can also look at the MHA log:
cat /data/log/app1/manager.log
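After a failover, the end of this log should contain a failover report; roughly along these lines (the exact wording depends on the MHA version, and the promoted host depends on which server took over):

grep -A 20 "Failover Report" /data/log/app1/manager.log
...
Master failover to 10.0.102.179(10.0.102.179:3306) completed successfully.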
The keepalived-based setup above completed the VIP failover test; next, let's do the same with MHA's own script.
Using MHA's built-in script for VIP failover
This needs a modified master_ip_failover script.
Be sure to change the VIP address in the script and to use the absolute path of the ifconfig command.
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $vip = '10.0.102.110/22';
my $key = '0';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth0:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# bring up the VIP on the new master via ssh and ifconfig
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# take down the VIP on the old master via ssh and ifconfig
sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
cat /usr/local/bin/master_ip_failover
Remember to make the script executable:
chmod +x master_ip_failover
Then adjust the configuration file /etc/masterha/app1.cnf:
[server default]
manager_log=/data/log/app1/manager.log
master_ip_failover_script=/usr/local/bin/master_ip_failover
secondary_check_script=masterha_secondary_check -s test1 -s mgt01 --user=root --master_host=test2 ...
...

[server2]
candidate_master=1
hostname=10.0.102.179
port=3306
Then run the replication check again; if it passes, start the MHA monitor.
When failover is handled by the built-in script, the VIP has to be added manually the first time, on the current master:
[root@test1 ~]# ifconfig eth0:0 10.0.102.110/22      # add the virtual IP
[root@test1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr FA:BC:66:8D:2E:00
          inet addr:10.0.102.179  Bcast:10.0.103.255  Mask:255.255.252.0
          inet6 addr: fe80::f8bc:66ff:fe8d:2e00/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3861480 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1279028 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7747611623 (7.2 GiB)  TX bytes:2717743084 (2.5 GiB)

eth0:0    Link encap:Ethernet  HWaddr FA:BC:66:8D:2E:00
          inet addr:10.0.102.110  Bcast:10.0.103.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2742 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2742 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:370558 (361.8 KiB)  TX bytes:370558 (361.8 KiB)
Now 179 holds the virtual IP and is also the master of the MySQL cluster. Stop its MySQL service and watch the VIP move and the master/slave roles switch.
[root@test1 ~]# service mysqld stop      # stop the master; the virtual IP floats away
Shutting down MySQL............ SUCCESS!
[root@test1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr FA:BC:66:8D:2E:00
          inet addr:10.0.102.179  Bcast:10.0.103.255  Mask:255.255.252.0
          inet6 addr: fe80::f8bc:66ff:fe8d:2e00/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3864276 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1280993 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7747800974 (7.2 GiB)  TX bytes:2717898739 (2.5 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2744 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2744 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:370642 (361.9 KiB)  TX bytes:370642 (361.9 KiB)
Check on the candidate master, 204:
[root@test2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:1d:ae:12:52:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.102.204/22 brd 10.0.103.255 scope global eth0
    inet 10.0.102.110/22 brd 10.0.103.255 scope global secondary eth0:0
    inet6 fe80::f81d:aeff:fe12:5200/64 scope link
       valid_lft forever preferred_lft forever

[root@test2 ~]# mysql -e "show processlist"
+----+------+--------------------+------+------------------+------+---------------------------------------------------------------+------------------+
| Id | User | Host               | db   | Command          | Time | State                                                         | Info             |
+----+------+--------------------+------+------------------+------+---------------------------------------------------------------+------------------+
| 19 | repl | 10.0.102.221:52375 | NULL | Binlog Dump GTID | 1860 | Master has sent all binlog to slave; waiting for more updates | NULL             |
| 20 | root | localhost          | NULL | Query            | 0    | starting                                                      | show processlist |
+----+------+--------------------+------+------------------+------+---------------------------------------------------------------+------------------+
The virtual IP has floated over and the master/slave switch has completed.
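As a final check, the remaining slave (221) should now be replicating from the new master; for example:

# run on 10.0.102.221; Master_Host should now be 10.0.102.204 and both threads should be running
mysql -e "show slave status\G" | grep -E "Master_Host|Slave_IO_Running|Slave_SQL_Running"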