Building a MySQL High-Availability (HA) Cluster on the DRBD Model

I. DRBD

   Before building the MySQL HA cluster, it is worth introducing what DRBD is, how it works, and its replication modes.

1. What is DRBD?

DRBD (Distributed Replicated Block Device) consists of a kernel module and associated scripts, and is used to build highly available clusters. It works by mirroring an entire block device over the network to another host, so DRBD can be thought of as network RAID 1.

2. How DRBD Works

Each device (DRBD provides more than one) has a state: either Primary or Secondary. Crucially, the two sides must never be mounted and used at the same time. When a client mounts a block-level device, data and metadata are managed in memory and only flushed to disk periodically; each node's mount operates in its own memory, so neither side can see the other's changes. Concurrent use would therefore lead to resource contention and corrupt the file system.

   Does that mean that whenever a node dies, someone has to manually promote the other node to Primary? To mount both nodes at the same time, DRBD must run under a high-availability cluster, because the cluster provides distributed file locking: when node A holds a lock, node B can be notified of it. In other words, dual-Primary operation depends on the HA cluster's messaging layer, and relying on that layer is exactly the approach used in this article. On the Primary node, applications can run against and access the DRBD device (/dev/drbd*). Every write is sent both to the local disk and to the Secondary node's device; the Secondary simply writes the data to its own disk. Reads are normally served locally. If the Primary node fails, the cluster stack (heartbeat or corosync) switches the Secondary to the Primary state and starts the applications on it.

3. DRBD Replication Modes


(1) Asynchronous (Protocol A)

   A write is considered complete as soon as it has been handed to the local TCP/IP stack and placed in the local send queue; the call then returns.

(2) Semi-synchronous (Protocol B)

   A write returns once the data has been delivered to the peer's TCP/IP stack.

(3) Synchronous (Protocol C)

   A write returns only after the replicated data has been written to the peer's disk.
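   For reference, the mode is chosen with the protocol directive in the DRBD configuration. A minimal sketch (the global_common.conf used later in this article sets protocol C, the usual choice for HA clusters where data loss on failover is unacceptable):

common {
        protocol C;        #A = asynchronous, B = semi-synchronous, C = fully synchronous
}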

II. Environment Preparation (do the same on both nodes; only node1 is shown)

1. Operating System and Hosts

   CentOS 6.5, x86_64 platform

   node1.shuishui.com    172.16.7.100

   node2.shuishui.com    172.16.7.200

2. Change Each Host's Hostname so That It Matches the Output of uname -n


[root@node1 ~]# uname -n
node1.shuishui.com
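On CentOS 6, a hostname set with the hostname command does not survive a reboot, so also write it to /etc/sysconfig/network. A minimal sketch for node1 (run the equivalent on node2 with its own name):

[root@node1 ~]# hostname node1.shuishui.com
[root@node1 ~]# sed -i 's/^HOSTNAME=.*/HOSTNAME=node1.shuishui.com/' /etc/sysconfig/network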

3. Configure Mutual Name Resolution


[root@node1 ~]# vim /etc/hosts
172.16.7.100 node1.shuishui.com node1
172.16.7.200 node2.shuishui.com node2

4. Time Synchronization

[root@node1 ~]# ntpdate 172.16.0.1
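ntpdate is a one-shot sync; to keep the clocks from drifting apart you can, for example, rerun it from cron (a sketch, assuming 172.16.0.1 remains reachable as the time server):

[root@node1 ~]# crontab -e
*/10 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null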

5. Configure Two-Way SSH Trust

[root@node1 ~]# ssh-keygen -t rsa -P ''
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2
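For the trust to be two-way, generate a key and copy it in the other direction on node2 as well:

[root@node2 ~]# ssh-keygen -t rsa -P ''
[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1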

6. Required Software

corosync      #installed directly with yum
pacemaker     #installed directly with yum
crmsh-1.2.6-4.el6.x86_64.rpm             #configuration interface for pacemaker
pssh-2.3.1-2.el6.x86_64.rpm              #dependency of crmsh
mariadb-10.0.10-linux-x86_64.tar.gz      #MariaDB binary tarball
drbd-8.4.3-33.el6.x86_64.rpm        #DRBD management tools
drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm     #DRBD kernel module

7. Disk Preparation

   For DRBD, prepare one disk of the same size on each of node1 and node2; here it is /dev/sdb. If /dev/sda has enough free space, creating a partition on it works just as well (a sketch follows).
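A sketch of carving out such a partition with fdisk (the partition name /dev/sda3 is only an example; if you go this route, point the disk directive in the resource file below at the partition instead of /dev/sdb):

[root@node1 ~]# fdisk /dev/sda     #n to create the partition, w to write the table
[root@node1 ~]# partx -a /dev/sda  #make the kernel re-read the partition table (or reboot)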

III. Configuring corosync

1. Install the Packages

I placed all the packages listed in step 6 above under /root, so everything can be installed directly with yum:

[root@node1 ~]# yum -y install corosync
[root@node1 ~]# yum -y install pacemaker
[root@node1 ~]# yum -y install *.rpm

DRBD consists of two parts: the kernel module and the userspace management tools. The DRBD kernel module has been merged into the mainline Linux kernel since 2.6.33, so if your kernel is newer than that you only need to install the management tools; otherwise you must install both the kernel module package and the management tools, and their version numbers must match.

   The DRBD versions available for CentOS 5 are 8.0, 8.2, and 8.3, packaged as drbd, drbd82, and drbd83 respectively, with matching kernel module packages kmod-drbd, kmod-drbd82, and kmod-drbd83. For CentOS 6 the available version is 8.4, packaged as drbd and drbd-kmdl. When choosing packages, keep two things in mind: the drbd and drbd-kmdl versions must match each other, and the drbd-kmdl version must match the running kernel. Features and configuration differ slightly between versions. Our platform is x86_64 running CentOS 6.5, so we need both the kernel module and the management tools; here we use the latest 8.4 release (drbd-8.4.3-33.el6.x86_64.rpm and drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm), downloadable from ftp://rpmfind.net/linux/atrpms/
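Before installing, confirm that the running kernel matches the drbd-kmdl package; on the CentOS 6.5 hosts used here the check looks like this:

[root@node1 ~]# uname -r
2.6.32-431.el6.x86_64    #matches drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm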

2. Configure corosync

[root@node1 ~]# cd /etc/corosync/
[root@node1 corosync]# cp corosync.conf.example corosync.conf

(1) Edit the corosync configuration file, adding the service and aisexec sections

compatibility: whitetank
totem {
        version: 2
        secauth: off        #security authentication
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 172.16.7.0        #network address to bind to
                mcastaddr: 230.100.100.7       #multicast address for heartbeat messages
                mcastport: 5405                #multicast port
                ttl: 1
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes            #write to a log file
        to_syslog: no
        logfile: /var/log/cluster/corosync.log    #create the cluster directory manually if it does not exist
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}
service {
        ver: 0
        name: pacemaker            #have corosync start pacemaker automatically at startup
}
aisexec {                          #enable corosync's AIS functionality and set the user/group it runs as
        user: root
        group: root
}

(2) Generate the authentication key

For corosync, inter-node communication requires authentication, hence a shared key; once generated, it is saved automatically in the current directory as authkey with mode 400. In my post "corosync+pacemaker实现web集群高可用" (http://nmshuishui.blog.51cto.com/1850554/1399811) I generated the key from true random numbers; the entropy pool can run low, which makes generation painfully slow, so this time I use pseudo-random numbers instead. This method is less secure, so use it with caution:

[root@node1 corosync]# mv /dev/random /dev/h
[root@node1 corosync]# ln /dev/urandom /dev/random
[root@node1 corosync]# corosync-keygen
[root@node1 corosync]# rm /dev/random
[root@node1 corosync]# mv /dev/h /dev/random

(3) Copy corosync.conf and the generated authkey to node2

[root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/

3. Start corosync and Verify the Configuration

   For the detailed procedure, see "corosync+pacemaker实现web集群高可用": http://nmshuishui.blog.51cto.com/1850554/1399811 (a few of the standard log checks are also sketched after the startup output below).

[root@node1 ~]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@node1 ~]# ssh node2 "service corosync start"
Starting Corosync Cluster Engine (corosync): [  OK  ]
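A few standard sanity checks against the corosync log (the path follows the logfile setting above); these mirror the verification steps in the post linked earlier:

[root@node1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
[root@node1 ~]# grep TOTEM /var/log/cluster/corosync.log          #totem/membership initialization
[root@node1 ~]# grep pcmk_startup /var/log/cluster/corosync.log   #pacemaker plugin startup
[root@node1 ~]# grep ERROR: /var/log/cluster/corosync.log         #should ideally print nothing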

4. Check the Cluster Status

[root@node1 ~]# crm status
Last updated: Wed Apr 23 14:44:11 2014
Last change: Wed Apr 23 14:44:07 2014 via crmd on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ node1.shuishui.com node2.shuishui.com ]  #node1 and node2 are both online

IV. Configuring DRBD

1. Configure /etc/drbd.d/global_common.conf

global {
        usage-count no;    #whether to let LINBIT collect DRBD usage statistics
        # minor-count dialog-refresh disable-ip-verification
}

common {
        protocol C;

        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                #wfc-timeout 120;
                #degr-wfc-timeout 120;
        }

        disk {
                on-io-error detach;          #on I/O error, detach the backing device
                #fencing resource-only;
        }

        net {
                cram-hmac-alg "sha1";         #authentication algorithm: sha1
                shared-secret "mydrbdlab";    #shared secret used as the key
        }

        syncer {
                rate 500M;                    #synchronization rate
        }
}


2. Define a Resource

[root@node1 drbd.d]# vim mariadb.res
resource mariadb {
        on node1.shuishui.com {
        device  /dev/drbd0;
        disk    /dev/sdb;
        address 172.16.7.100:7789;
        meta-disk internal;
        }
        on node2.shuishui.com {
        device  /dev/drbd0;
        disk    /dev/sdb;
        address 172.16.7.200:7789;
        meta-disk internal;
        }
}

3. Sync the Configuration Files to node2

   The resource file from step 2 must be identical on both nodes, so simply copy the configured files to the other node over ssh:

[root@node1 drbd.d]# scp /etc/drbd.d/* node2:/etc/drbd.d/
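As a quick sanity check, drbdadm dump prints the resource as DRBD actually parsed it; running it on both nodes should produce identical definitions:

[root@node1 drbd.d]# drbdadm dump mariadb
[root@node1 drbd.d]# ssh node2 "drbdadm dump mariadb"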

4. Initialize the Defined Resource on Both Nodes and Start the Service (only node1 shown)

(1) Initialize the resource (run on both nodes)

[root@node1 ~]# drbdadm create-md mariadb

This step prints the error below; it is harmless and can be ignored:

Writing meta data...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory

(2) Start the service (run on both nodes)

[root@node1 ~]# service drbd start

(3) Check the startup status

[root@node1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:20970844

   You can also check with the drbd-overview command:

[root@node1 ~]# drbd-overview
  0:mariadb/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----

The output shows that both nodes are still in the Secondary state, so one of them must be promoted to Primary.

(4) Promote node1 to Primary

[root@node1 ~]# drbdadm primary --force mariadb

   Running drbd-overview again now shows that data synchronization has started:

[root@node1 ~]# drbd-overview
  0:mariadb/0  SyncSource Primary/Secondary UpToDate/Inconsistent C r---n-
    [================>...] sync'ed: 88.8% (2304/20476)M
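To follow the synchronization in real time, you can for example watch /proc/drbd:

[root@node1 ~]# watch -n1 'cat /proc/drbd'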

(5) When synchronization finishes, check the status again

[root@node1 ~]# drbd-overview
  0:mariadb/0  Connected Primary/Secondary UpToDate/UpToDate C r-----

   Both nodes are now in real-time sync (UpToDate/UpToDate), and the Primary/Secondary roles are established.

5. Create the File System

   The file system can only be mounted on the Primary node, so the DRBD device can be formatted only after a Primary has been designated:

[root@node1 ~]# mke2fs -t ext4 -L DRBD /dev/drbd0    #ext4, to match fstype="ext4" in the cluster resource defined later
[root@node1 ~]# mkdir /mnt/drbd
[root@node1 ~]# mount /dev/drbd0 /mnt/drbd/
[root@node1 ~]# ls /mnt/drbd/
lost+found                     #mounted successfully

6. Switching the Primary and Secondary Roles

   In a Primary/Secondary DRBD setup, only one node can be Primary at any given moment. To swap the roles of the two nodes, first demote the current Primary to Secondary, then promote the original Secondary to Primary:

(1) Before switching, copy a file into /mnt/drbd

[root@node1 drbd]# cp /etc/fstab .
[root@node1 drbd]# ls
fstab  lost+found

(2) Demote node1

   When demoting, always unmount first and demote second:

[root@node1 ~]# umount /mnt/drbd/             #unmount first
[root@node1 ~]# drbdadm secondary mariadb     #then demote
[root@node1 ~]# drbd-overview                 #demotion succeeded
  0:mariadb/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----

(3) Promote node2

[root@node2 ~]# drbdadm primary mariadb    #promote node2
[root@node2 ~]# drbd-overview              #node2 is now the Primary
  0:mariadb/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node2 ~]# mkdir /mnt/drbd            #create the mount point and mount
[root@node2 ~]# mount /dev/drbd0 /mnt/drbd

(4) Verify that the file copied earlier on the old Primary is present

[root@node2 ~]# ls /mnt/drbd
fstab  lost+found            #it is there; everything is fine

   This completes the DRBD configuration.

V. Installing and Configuring MySQL

1. Create the mysql User and Group (on both node1 and node2)

[root@node1 local]# groupadd -g 306 mysql
[root@node1 local]# useradd -u 306 -g mysql -s /sbin/nologin -M mysql

2. Unpack MySQL (on both node1 and node2)

[root@node1 ~]# tar xf mariadb-10.0.10-linux-x86_64.tar.gz -C /usr/local/
[root@node1 ~]# cd /usr/local/
[root@node1 local]# ln -sv mariadb-10.0.10-linux-x86_64/ mysql
`mysql' -> `mariadb-10.0.10-linux-x86_64/'
[root@node1 local]# chown -R mysql.mysql mysql/*

3. Make node1 the DRBD Primary and Mount the Device

[root@node1 ~]# drbd-overview
  0:mariadb/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node1 ~]# mkdir /mydata
[root@node1 ~]# mount /dev/drbd0 /mydata/
[root@node1 ~]# cd /mydata/
[root@node1 mydata]# mkdir data
[root@node1 mydata]# chown -R  mysql.mysql /mydata/data/
[root@node1 mydata]# mkdir binlogs
[root@node1 mydata]# chown -R mysql.mysql binlogs/
[root@node1 mydata]# ll
total 24
drwxr-xr-x 2 mysql mysql  4096 Apr 23 21:37 binlogs
drwxr-xr-x 2 mysql mysql  4096 Apr 23 21:37 data
drwx------ 2 root  root  16384 Apr 23 16:26 lost+found


4. Provide the MySQL Configuration File

[root@node1 ~]# cp /usr/local/mysql/support-files/my-large.cnf /etc/my.cnf
[root@node1 ~]# vim /etc/my.cnf
#add this line
datadir = /mydata/data
#change the binary log path
log-bin=/mydata/binlogs/master-bin

5. Initialize MySQL

[root@node1 data]# /usr/local/mysql/scripts/mysql_install_db --datadir=/mydata/data/ --basedir=/usr/local/mysql --user=mysql

6. Provide the Service Script

[root@node1 ~]# cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld

7. Start and Test MySQL

[root@node1 ~]# service mysqld start
Starting MySQL. SUCCESS!
[root@node1 ~]# /usr/local/mysql/bin/mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, SkySQL Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+

8. Copy the Configuration File and Service Script to node2

[root@node1 ~]# scp /etc/my.cnf node2:/etc/
my.cnf                                                100% 4940     4.8KB/s   00:00 
[root@node1 ~]# scp /etc/rc.d/init.d/mysqld node2:/etc/rc.d/init.d/
mysqld                                                100%   11KB  11.4KB/s   00:00

9. Stop MySQL and Disable It at Boot

[root@node1 ~]# service mysqld stop
Shutting down MySQL. SUCCESS!
[root@node1 data]# chkconfig mysqld off

10. Make node2 the Primary, Then Mount and Test

[root@node1 ~]# umount /mydata/
[root@node1 ~]# drbdadm secondary mariadb
[root@node1 ~]# drbd-overview
  0:mariadb/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
======================================================================
[root@node2 ~]# drbdadm primary mariadb
[root@node2 ~]# drbd-overview
  0:mariadb/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node2 ~]# mkdir /mydata/
[root@node2 ~]# mount /dev/drbd0 /mydata/
[root@node2 ~]# ll /mydata/
total 24
drwxr-xr-x 2 mysql mysql  4096 Apr 23 21:46 binlogs
drwxr-xr-x 5 mysql mysql  4096 Apr 23 21:46 data
drwx------ 2 root  root  16384 Apr 23 16:26 lost+found

11. Start and Test MySQL on node2

[root@node2 mydata]# service mysqld start
Starting MySQL.. SUCCESS!
[root@node2 mydata]# /usr/local/mysql/bin/mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, SkySQL Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.01 sec)

12. Stop MySQL on node2 and Disable It at Boot

[root@node2 ~]# service mysqld stop
Shutting down MySQL. SUCCESS!
[root@node2 ~]# chkconfig mysqld off

VI. Configuring the HA Cluster Resources

1. Stop the DRBD Service and Disable It at Boot; the CRM Will Manage It from Now On

[root@node1 ~]# service drbd stop
[root@node1 ~]# chkconfig drbd off
[root@node1 ~]# ssh node2 "service drbd stop"
[root@node1 ~]# ssh node2 "chkconfig drbd off"

2. Set Global Properties: the No-Quorum Policy and Disabling STONITH

   The reasons for these settings are explained in detail in my previous HA post.

[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit

3. Configure DRBD as a Cluster Resource

(1) Check the DRBD provider

   The RA for drbd is provided by linbit under the OCF class, at /usr/lib/ocf/resource.d/linbit/drbd. The RA and its metadata can be listed with the following commands:

crm(live)ra# classes
lsb
ocf / heartbeat linbit pacemaker
service
stonith
crm(live)ra# list ocf linbit
drbd

(2) Configure the drbd resource

drbd must run on both nodes simultaneously, but only one node (in the primary/secondary model) may be Master while the other is Slave. It is therefore a special kind of cluster resource: a multi-state clone. Its instances are divided into Master and Slave roles, and both nodes must be in the Slave state when the service first starts.

crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mariadb op monitor role=Master interval=50s timeout=30s op monitor role=Slave interval=60s timeout=30s op start timeout=240s interval=0 op stop timeout=100s interval=0
crm(live)configure#
crm(live)configure# master MS_mysqldrbd mysqldrbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure#
crm(live)configure# show mysqldrbd
primitive mysqldrbd ocf:linbit:drbd     params drbd_resource="mariadb"     op monitor role="Master" interval="50s" timeout="30s"     op monitor role="Slave" interval="60s" timeout="30s"     op start timeout="240s" interval="0"     op stop timeout="100s" interval="0"
crm(live)configure# show MS_mysqldrbd
ms MS_mysqldrbd mysqldrbd     meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# commit

   Check the current cluster status:

[root@node1 ~]# crm status
Last updated: Wed Apr 23 18:20:46 2014
Last change: Wed Apr 23 18:15:38 2014 via cibadmin on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ node1.shuishui.com node2.shuishui.com ]
 Master/Slave Set: MS_mysqldrbd [mysqldrbd]
     Masters: [ node1.shuishui.com ]
     Slaves: [ node2.shuishui.com ]


   The output above shows that the drbd Primary is currently node1.shuishui.com and the Secondary is node2.shuishui.com. You can also verify on node2 that it has become the Slave for the mariadb resource with the following command:

drbdadm role mariadb
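On node2 the command should print something like the following (drbdadm shows the local role first, then the peer's):

[root@node2 ~]# drbdadm role mariadb
Secondary/Primary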

4. Configure the File System Resource

   Create a cluster service that auto-mounts the mariadb resource on the Primary node:

   The Master node of MS_mysqldrbd is the Primary node of the drbd resource mariadb; on that node the device /dev/drbd0 can be mounted and used, and for use by a cluster service it must be mounted automatically. Here the mariadb resource serves as the shared file system backing the MySQL server's data directory, and it needs to be mounted at /mydata (this directory must already exist on both nodes).


   Furthermore, this auto-mount resource must run on the drbd Master node, and it may start only after drbd has promoted that node to Primary. We therefore also need a colocation constraint and an order constraint between the two resources.

crm(live)configure# primitive mysqlstore ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mydata" fstype="ext4" op monitor interval=40s timeout=40s op start timeout=60s interval=0 op stop timeout=60s interval=0
crm(live)configure#
crm(live)configure# verify
crm(live)configure# colocation mysqlstore_with_MS_mysqldrbd inf: mysqlstore MS_mysqldrbd:Master               #colocation constraint: the file system always runs where the drbd Master is
crm(live)configure# order mysqlstore_after_MS_mysqldrbd mandatory: MS_mysqldrbd:promote mysqlstore:start        #promote drbd first, then mount
crm(live)configure# verify
crm(live)configure# commit

   Check the resource status at this point:

[root@node1 ~]# crm status
======================================================
Last updated: Wed Apr 23 19:07:27 2014
Last change: Wed Apr 23 18:55:52 2014 via cibadmin on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
=======================================================
Online: [ node1.shuishui.com node2.shuishui.com ]
-------------------------------------------------------
 Master/Slave Set: MS_mysqldrbd [mysqldrbd]
     Masters: [ node1.shuishui.com ]
     Slaves: [ node2.shuishui.com ]
 mysqlstore (ocf::heartbeat:Filesystem):    Started node1.shuishui.com

5. Configure the MySQL Resource

crm(live)configure# primitive mysqld lsb:mysqld op monitor interval=20s timeout=20s on-fail=restart
crm(live)configure#
crm(live)configure# colocation mysqld_with_mysqlstore inf: mysqld mysqlstore
crm(live)configure#
crm(live)configure# verify
crm(live)configure#
crm(live)configure# order mysqlstore_before_mysqld inf: mysqlstore:start mysqld:start
crm(live)configure#
crm(live)configure# verify
crm(live)configure#
crm(live)configure# commit

   Check the resource status at this point:

[root@node1 ~]# crm status
========================================================
Last updated: Wed Apr 23 22:10:46 2014
Last change: Wed Apr 23 20:52:58 2014 via cibadmin on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
4 Resources configured
========================================================
Online: [ node1.shuishui.com node2.shuishui.com ]
 Master/Slave Set: MS_mysqldrbd [mysqldrbd]
     Masters: [ node1.shuishui.com ]
     Slaves: [ node2.shuishui.com ]
 mysqlstore (ocf::heartbeat:Filesystem):    Started node1.shuishui.com
 mysqld (lsb:mysqld):   Started node1.shuishui.com

   Test that MySQL can be logged into normally:

[root@node1 ~]# /usr/local/mysql/bin/mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, SkySQL Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+

6. Configure the VIP Resource

crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=172.16.7.1 cidr_netmask=16 op monitor interval=20s timeout=20s on-fail=restart
crm(live)configure#
crm(live)configure# colocation vip_with_mysqld inf: vip mysqld
crm(live)configure#
crm(live)configure# order vip_before_mysqld inf: vip mysqld
crm(live)configure#
crm(live)configure# verify
crm(live)configure# commit

   Check the resource status at this point:

[root@node1 ~]# crm status
=======================================================
Last updated: Wed Apr 23 22:33:13 2014
Last change: Wed Apr 23 22:31:13 2014 via cibadmin on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
5 Resources configured
=======================================================
Online: [ node1.shuishui.com node2.shuishui.com ]
 Master/Slave Set: MS_mysqldrbd [mysqldrbd]
     Masters: [ node1.shuishui.com ]
     Slaves: [ node2.shuishui.com ]
 mysqlstore (ocf::heartbeat:Filesystem):    Started node1.shuishui.com
 mysqld (lsb:mysqld):   Started node1.shuishui.com
 vip    (ocf::heartbeat:IPaddr):    Started node1.shuishui.com

7. Finally, Display the Complete Configuration

node node1.shuishui.com
node node2.shuishui.com
primitive mysqld lsb:mysqld         op monitor interval="20s" timeout="20s" on-fail="restart"
primitive mysqldrbd ocf:linbit:drbd         params drbd_resource="mariadb"         op monitor role="Master" interval="50s" timeout="30s"         op monitor role="Slave" interval="60s" timeout="30s"         op start timeout="240s" interval="0"         op stop timeout="100s" interval="0"
primitive mysqlstore ocf:heartbeat:Filesystem         params device="/dev/drbd0" directory="/mydata" fstype="ext4"         op monitor interval="40s" timeout="40s"         op start timeout="60s" interval="0"         op stop timeout="60s" interval="0"
primitive vip ocf:heartbeat:IPaddr         params ip="172.16.7.1" cidr_netmask="16"         op monitor interval="20s" timeout="20s" on-fail="restart"
ms MS_mysqldrbd mysqldrbd         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation mysqld_with_mysqlstore inf: mysqld mysqlstore
colocation mysqlstore_with_MS_mysqldrbd inf: mysqlstore MS_mysqldrbd:Master
colocation vip_with_mysqld inf: vip mysqld
order mysqlstore_after_MS_mysqldrbd inf: MS_mysqldrbd:promote mysqlstore:start
order mysqlstore_before_mysqld inf: mysqlstore:start mysqld:start
order vip_before_mysqld inf: vip mysqld
property $id="cib-bootstrap-options"         dc-version="1.1.10-14.el6-368c726"         cluster-infrastructure="classic openais (with plugin)"         expected-quorum-votes="2"         stonith-enabled="false"         no-quorum-policy="ignore"

VII. Testing the MySQL HA Cluster

1. Grant Remote Access to a Network Range and User

MariaDB [(none)]>
MariaDB [(none)]> grant all on *.* to 'test'@'172.16.%.%' identified by 'test';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

2. Test from a Remote Client

   Log in to the MySQL server remotely through the virtual IP 172.16.7.1; the client's IP is 172.16.7.10.

[root@node1 ~]# mysql -u test -h 172.16.7.1 -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 6
Server version: 10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, SkySQL Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

3. Simulate a Failure

[root@node1 ~]# crm node standby
[root@node1 ~]# crm status
Last updated: Wed Apr 23 22:51:10 2014
Last change: Wed Apr 23 22:51:02 2014 via crm_attribute on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
5 Resources configured
Node node1.shuishui.com: standby
Online: [ node2.shuishui.com ]
 Master/Slave Set: MS_mysqldrbd [mysqldrbd]
     Masters: [ node2.shuishui.com ]          #node2 has automatically become Master and all resources have moved to it
     Stopped: [ node1.shuishui.com ]
 mysqlstore (ocf::heartbeat:Filesystem):    Started node2.shuishui.com
 mysqld (lsb:mysqld):   Started node2.shuishui.com
 vip    (ocf::heartbeat:IPaddr):    Started node2.shuishui.com

4. Log in Again from the Remote Client via the VIP 172.16.7.1

[root@node1 ~]# mysql -u test -h 172.16.7.1 -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, SkySQL Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.05 sec)

   The remote client reaches the MySQL server without any trouble, completely unaware that the services have failed over to node2. To rejoin node1 to the cluster afterwards, see the sketch below.
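To bring node1 back after the test, take it out of standby (a sketch; whether resources migrate back depends on resource-stickiness, which this configuration leaves at its default):

[root@node1 ~]# crm node online
[root@node1 ~]# crm status       #node1 shows as Online again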


   The DRBD-based MySQL high-availability (HA) cluster is up and running!





This article originally appeared on the "nmshuishui的博客" blog; please retain this attribution: http://nmshuishui.blog.51cto.com/1850554/1401410
