I. Overview
1. This article describes how to build a highly available MySQL architecture with mysql-mmm. MMM (Master-Master Replication Manager for MySQL) is a scalable suite of scripts for monitoring, failover and management of MySQL master-master replication setups (only one node is writable at any given time). The suite can also load-balance reads across any number of slaves in a standard master-slave configuration, so it can be used to bring up virtual IPs on a group of replicated servers, and it ships with scripts for data backup and for resynchronizing nodes. MySQL itself provides no replication failover solution; MMM adds failover between servers and thus makes MySQL highly available. Besides floating IPs, MMM will automatically repoint the slaves at the new master when the current master goes down, with no manual change to the replication configuration. It is a fairly mature solution at present (the paragraph above is adapted from material found online). I will not go into further detail on MMM here; see the official site: http://mysql-mmm.org.
2. Pros and cons of this solution:
Pros: good stability and scalability, high availability; when the active master goes down, the other master takes over immediately and the slaves switch over automatically, with no manual intervention.
Cons: the monitor node is a single point of failure, although it can itself be made highly available with keepalived or heartbeat.
II. Environment
1. Server list
Server  | Hostname | IP            | server-id | MySQL version | OS
--------|----------|---------------|-----------|---------------|-----------
master1 | db1      | 172.28.26.101 | 101       | MySQL 5.5.15  | CentOS 6.4
master2 | db2      | 172.28.26.102 | 102       | MySQL 5.5.15  | CentOS 6.4
slave1  | db3      | 172.28.26.188 | 188       | MySQL 5.5.15  | CentOS 6.4
slave2  | db4      | 172.28.26.189 | 189       | MySQL 5.5.15  | CentOS 6.4
monitor | monitor  | 172.28.26.103 | N/A       | N/A           | CentOS 6.4
2. Virtual IP list
VIP           | Role  | Description
--------------|-------|----------------------------------
172.28.26.104 | write | Write VIP used by the application
172.28.26.105 | read  | Read VIP used by the application
172.28.26.106 | read  | Read VIP used by the application
III. Installing MySQL
1. Build MySQL on master1 only; the other machines just get a copy via scp (rsync is even better).
tar -zxvf mysql-5.5.15.tar.gz
cd mysql-5.5.15
cmake -DCMAKE_INSTALL_PREFIX:PATH=/data/mysql/navy1 -DMYSQL_DATADIR=/data/mysql/navy1/db -DMYSQL_TCP_PORT=3306 -DMYSQL_UNIX_ADDR=/tmp/mysql.sock -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DWITH_MYISAM_STORAGE_ENGINE=1 -DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_READLINE=1 -DENABLED_LOCAL_INFILE=1
make && make install
useradd mysql -s /sbin/nologin ; cd /data/mysql/navy1 ; chown mysql:mysql db/ logs/ -R
vi /data/mysql/navy1/my.cnf (copy an existing production config file and adjust it)
[mysqld_safe]
log-error=/data/mysql/navy1/logs/mysqld.log
pid-file=/data/mysql/navy1/logs/mysqld.pid

[client]
port = 3306
socket = /data/mysql/navy1/logs/mysql.sock

[mysqld]
port = 3306
socket = /data/mysql/navy1/logs/mysql.sock
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 64M
basedir = /data/mysql/navy1
datadir = /data/mysql/navy1/db
thread_concurrency = 8
log-bin = mysql-bin
binlog_format = mixed
server-id = 101
max_connections = 2048
character_set_server = utf8
wait_timeout = 1800
interactive_timeout = 1800
skip-show-database
skip-name-resolve
tmp_table_size = 512M
max_heap_table_size = 512M
binlog-ignore-db = mysql
replicate-ignore-db = mysql
binlog-ignore-db = information_schema
replicate-ignore-db = information_schema
binlog-ignore-db = performance_schema
replicate-ignore-db = performance_schema
binlog-ignore-db = test
replicate-ignore-db = test
innodb_data_home_dir = /data/mysql/navy1/db
#innodb_data_file_path = ibdata1:4000M;ibdata2:10M:autoextend
innodb_file_per_table = 1
innodb_log_group_home_dir = /data/mysql/navy1/db
innodb_buffer_pool_size = 2000M
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 100M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 50
#default-storage-engine = MyISAM
default-storage-engine = InnoDB

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[isamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout
Initialize the database:
/data/mysql/navy1/scripts/mysql_install_db --user=mysql --basedir=/data/mysql/navy1 --datadir=/data/mysql/navy1/db/
Start the database:
cd /data/mysql/navy1 ; /data/mysql/navy1/bin/mysqld_safe --defaults-extra-file=/data/mysql/navy1/my.cnf --user=mysql &
2. Installing MySQL on the other three machines takes only four steps:
A. Stop MySQL on master1 (where it is already installed) and rsync it to the corresponding directory (/data/mysql/navy1) on master2 and the two slaves; a sample rsync command is sketched after this list.
B. Create the user and fix ownership:
useradd mysql -s /sbin/nologin ; cd /data/mysql/navy1 ; chown mysql:mysql db/ logs/ -R
C. Change the server-id in /data/mysql/navy1/my.cnf on each host.
D. Start the database:
cd /data/mysql/navy1 ; /data/mysql/navy1/bin/mysqld_safe --defaults-extra-file=/data/mysql/navy1/my.cnf --user=mysql &
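For step A, a possible rsync invocation is sketched below. This is only a sketch: it assumes MySQL on master1 has already been stopped, that SSH access between the hosts is in place, and that 172.28.26.102 is replaced with each target host in turn.

# run on master1; repeat for 172.28.26.188 and 172.28.26.189
rsync -avz /data/mysql/navy1/ 172.28.26.102:/data/mysql/navy1/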
IV. Replication setup (configure master1 and master2 as master-master, and slave1 and slave2 as slaves of master1):
1. Grant replication privileges on master1:
grant replication slave on *.* to slave@'172.28.26.102' identified by "123456";
grant replication slave on *.* to slave@'172.28.26.188' identified by "123456";
grant replication slave on *.* to slave@'172.28.26.189' identified by "123456";
2. Grant replication privileges on master2:
grant replication slave on *.* to slave@'172.28.26.101' identified by "123456";
grant replication slave on *.* to slave@'172.28.26.188' identified by "123456";
grant replication slave on *.* to slave@'172.28.26.189' identified by "123456";
3. Configure master2, slave1 and slave2 as slaves of master1:
A. Run show master status\G on master1 to get the binlog file name and position:
mysql> show master status \G
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    3974
Current database: *** NONE ***

*************************** 1. row ***************************
            File: mysql-bin.000024
        Position: 107
    Binlog_Do_DB:
Binlog_Ignore_DB: mysql,information_schema,performance_schema,test,mysql,information_schema,performance_schema,test
1 row in set (0.00 sec)
B. Run the following on master2, slave1 and slave2:
change master to master_host='172.28.26.101', master_port=3306, master_user='slave', master_password='123456', master_log_file='mysql-bin.000024', master_log_pos=107;
start slave;
4. Configure master1 as a slave of master2:
A. Run show master status\G on master2 to get the binlog file name and position:
mysql> show master status \G
*************************** 1. row ***************************
            File: mysql-bin.000025
        Position: 107
    Binlog_Do_DB: navy
Binlog_Ignore_DB: mysql,mysql,information_schema,performance_schema,test,mysql,information_schema,performance_schema,test
1 row in set (0.00 sec)
B. Run the following on master1:
change master to master_host='172.28.26.102', master_port=3306, master_user='slave', master_password='123456', master_log_file='mysql-bin.000025', master_log_pos=107;
start slave;
5. On every machine, run:
mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.28.26.102
                  Master_User: slave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000025
          Read_Master_Log_Pos: 107
               Relay_Log_File: mysqld-relay-bin.000015
                Relay_Log_Pos: 253
        Relay_Master_Log_File: mysql-bin.000025
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
If Slave_IO_Running and Slave_SQL_Running are both Yes, replication is configured correctly.
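A quick way to check just these two flags on each host is the one-liner below. It is only a sketch: a passwordless root account and the socket path from the my.cnf above are assumptions to adapt to your setup.

/data/mysql/navy1/bin/mysql -uroot -S /data/mysql/navy1/logs/mysql.sock -e 'show slave status\G' | egrep 'Slave_IO_Running|Slave_SQL_Running'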
V. Installing mysql-mmm
1. On the db nodes:
yum -y install mysql-mmm-agent
2. On the monitor node:
yum -y install mysql-mmm*
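The mysql-mmm packages are not in the stock CentOS repositories; the yum commands above assume the EPEL repository is already enabled on all five machines. Something like the following usually does it, although on CentOS 6 you may instead need to install the epel-release RPM from a Fedora mirror if the configured repos do not carry it:

yum -y install epel-release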
VI. Configuring mysql-mmm:
1. Create the MMM accounts on all four db nodes:
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'172.28.26.%' IDENTIFIED BY '123456';
GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'172.28.26.%' IDENTIFIED BY '123456';
2. Edit the configuration files
vi /etc/mysql-mmm/mmm_common.conf (identical on the db nodes and the monitor)
active_master_role      writer

<host default>
    cluster_interface       eth1
    pid_path                /var/run/mysql-mmm/mmm_agentd.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        slave
    replication_password    123456
    agent_user              mmm_agent
    agent_password          123456
</host>

<host db1>
    ip          172.28.26.101
    mysql_port  3306
    mode        master
    peer        db2
</host>

<host db2>
    ip          172.28.26.102
    mysql_port  3306
    mode        master
    peer        db1
</host>

<host db3>
    ip          172.28.26.188
    mysql_port  3306
    mode        slave
</host>

<host db4>
    ip          172.28.26.189
    mysql_port  3306
    mode        slave
</host>

<role writer>
    hosts   db1, db2
    ips     172.28.26.104
    mode    exclusive
</role>

<role reader>
    hosts   db3, db4
    ips     172.28.26.105, 172.28.26.106
    mode    balanced
</role>
PS:
peer means the two hosts are equivalents: db1 and db2 are peers of each other.
ips specifies the VIP(s) for the role.
mode has only two settings: exclusive means only one host can hold the role at any time; balanced means several hosts can hold it at the same time. Normally writer is exclusive and reader is balanced.
vi /etc/mysql-mmm/mmm_agent.conf (on master1, master2, slave1 and slave2 set the "this" line to db1, db2, db3 and db4 respectively; a sample follows)
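For reference, the agent file only needs these two lines; on master1 it would look like the sketch below, and on the other hosts only the host name after "this" changes:

include mmm_common.conf
this db1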
vi /etc/mysql-mmm/mmm_mon.conf (only present on the monitor node)
include mmm_common.conf

<monitor>
    ip                  127.0.0.1
    pid_path            /var/run/mysql-mmm/mmm_mond.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond.status
    ping_ips            172.28.26.101,172.28.26.102
    auto_set_online     10

    # The kill_host_bin does not exist by default, though the monitor will
    # throw a warning about it missing. See the section 5.10 "Kill Host
    # Functionality" in the PDF documentation.
    #
    # kill_host_bin     /usr/libexec/mysql-mmm/monitor/kill_host
    #
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    123456
</host>

debug 0
VII. Starting mmm
1. On the db nodes:
/etc/init.d/mysql-mmm-agent start
echo "/etc/init.d/mysql-mmm-agent start" >> /etc/rc.local
2. On the monitor node:
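Assuming the standard init script installed by the mysql-mmm monitor package (mysql-mmm-monitor), starting it mirrors the agent:

/etc/init.d/mysql-mmm-monitor start
echo "/etc/init.d/mysql-mmm-monitor start" >> /etc/rc.local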
VIII. Testing
1. With the cluster started normally:
[root@monitor ~]# mmm_control show
  db1(172.28.26.101) master/ONLINE. Roles: writer(172.28.26.104)
  db2(172.28.26.102) master/ONLINE. Roles:
  db3(172.28.26.188) slave/ONLINE. Roles: reader(172.28.26.106)
  db4(172.28.26.189) slave/ONLINE. Roles: reader(172.28.26.105)
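To double-check that the VIPs are really bound where mmm_control says they are, look at the cluster interface on the db hosts (eth1 here, as set in mmm_common.conf); the agent typically adds them as secondary addresses, so ip addr is the reliable way to see them:

# run on a db host; lists any of the three VIPs currently bound to eth1
ip addr show eth1 | grep 172.28.26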
2. Stop db1 and check whether 172.28.26.104 floats over to db2 and whether db3 and db4 switch their master to db2:
[root@monitor ~]# mmm_control show
  db1(172.28.26.101) master/HARD_OFFLINE. Roles:
  db2(172.28.26.102) master/ONLINE. Roles: writer(172.28.26.104)
  db3(172.28.26.188) slave/ONLINE. Roles: reader(172.28.26.106)
  db4(172.28.26.189) slave/ONLINE. Roles: reader(172.28.26.105)
mysql> show slave status \G
Connection id:    5844
Current database: *** NONE ***

*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.28.26.102
                  Master_User: slave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000025
          Read_Master_Log_Pos: 107
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 253
        Relay_Master_Log_File: mysql-bin.000025
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
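When db1 comes back up, the monitor normally moves it from HARD_OFFLINE to AWAITING_RECOVERY and, with auto_set_online set as above, brings it back ONLINE after a short delay; if it stays in AWAITING_RECOVERY (for example after a longer outage), it can be put back into the cluster from the monitor with:

mmm_control set_online db1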
This post comes from the “屌丝运维男” blog; please keep this attribution: http://navyaijm.blog.51cto.com/4647068/1230674