Building a Highly Available MariaDB Cluster with DRBD and corosync

I. Introduction to DRBD

   DRBD (Distributed Replicated Block Device) is a distributed block-device replication system built from a kernel module and a set of management scripts, and is used to construct highly available clusters. It works by mirroring an entire block device over the network; you can think of it as a kind of network RAID-1. It lets you maintain a real-time replica of a local block device on a remote machine.

     1. Download the rpm packages

      The DRBD versions available for CentOS 5 are 8.0, 8.2, and 8.3, shipped as the rpm packages drbd, drbd82, and drbd83, with the matching kernel modules kmod-drbd, kmod-drbd82, and kmod-drbd83. For CentOS 6 the available version is 8.4, shipped as the packages drbd and drbd-kmdl. When choosing packages, keep two things in mind: the versions of drbd and drbd-kmdl must match each other, and the drbd-kmdl version must match the kernel version of the running system. The individual versions differ slightly in features and configuration. Our test platform is x86_64 running CentOS 6.5, so we need to install both the kernel module and the userland tools; here we use the latest 8.4 release (drbd-8.4.3-33.el6.x86_64.rpm and drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm).

2. Preparation

   Hostname and IP address resolution must work correctly on both nodes: each node must be able to resolve the other's name to the right address.
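   For example, /etc/hosts on both nodes could contain entries like the following (the names and addresses match the ones used later in this article):

  192.168.1.200   node1.zero1.com   node1
  192.168.1.201   node2.zero1.com   node2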

3. Installation. The drbd packages have no unmet dependencies, so they can be installed directly with rpm:

  [root@node1 ~]# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm
  [root@node2 ~]# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm

Then restart the rsyslog service on both nodes.
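On CentOS 6 that is simply:

  [root@node1 ~]# service rsyslog restart
  [root@node2 ~]# service rsyslog restart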

4. Configuration overview

   DRBD's main configuration file is /etc/drbd.conf. For ease of management, the configuration is usually split across several files kept under /etc/drbd.d, with the main file merely pulling these fragments in via "include" directives. Typically /etc/drbd.d holds global_common.conf plus one file per resource ending in .res: global_common.conf defines the global and common sections, and each .res file defines a single resource.

   The global section may appear only once, and if the entire configuration is kept in a single file rather than split up, it must sit at the very top of that file. Only a handful of parameters can currently be set in the global section: minor-count, dialog-refresh, disable-ip-verification, and usage-count.

   The common section defines parameters that every resource inherits as defaults; any parameter usable in a resource definition may also be set here. The common section is optional, but putting parameters shared by multiple resources into common is recommended, as it keeps the configuration simple.

   The resource sections define the DRBD resources themselves, each usually in its own .res file under /etc/drbd.d. A resource must be given a name, which may consist of any non-whitespace ASCII characters. Each resource definition must contain at least two host (on) subsections naming the nodes the resource is bound to; all other parameters can be inherited from the common section or from DRBD's defaults and need not be repeated.

5. The configuration file /etc/drbd.d/global_common.conf

  global {
          usage-count no;   # opt out of LINBIT's online usage statistics
          # minor-count dialog-refresh disable-ip-verification
  }
  common {   # defaults inherited by every resource
          handlers {   # commands executed when particular events occur
                  # These are EXAMPLE handlers only.
                  # They may have severe implications,
                  # like hard resetting the node under certain circumstances.
                  # Be careful when choosing your poison.
                  # run when the node is primary, degraded, and its data are inconsistent
                  pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                  # run on a primary that has lost the split-brain auto-recovery and must give up its data
                  pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                  # run when a local I/O error occurs
                  local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                  # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                  # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                  # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                  # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                  # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
          }
          startup {   # may be left at the defaults
                  # wfc-timeout: how long to wait for the peer to come online
                  # degr-wfc-timeout: wait timeout when the cluster starts degraded
                  # outdated-wfc-timeout: wait timeout when the peer's data are outdated
                  # wait-after-sb: wait time after a split brain
          }
          options {   # may be left at the defaults
                  # cpu-mask on-no-data-accessible
          }
          disk {
                  # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
                  # disk-drain md-flushes resync-rate resync-after al-extents
                  # c-plan-ahead c-delay-target c-fill-target c-max-rate
                  # c-min-rate disk-timeout
                  # what to do on a local I/O error: pass_on = degrade the node;
                  # call-local-io-error = run the local-io-error handler; detach = drop the backing disk
                  on-io-error detach;
                  resync-rate 1000M;   # resync bandwidth cap (replaces the 8.3-era syncer{ rate } section, which no longer exists in 8.4)
          }
          net {
                  protocol C;                          # replication protocol; C = fully synchronous
                  cram-hmac-alg "sha1";                # HMAC algorithm used to authenticate the peer
                  shared-secret "kjasdbiu2178uwhbj";   # shared secret for peer authentication
                  # protocol timeout max-epoch-size max-buffers unplug-watermark
                  # connect-int ping-int sndbuf-size rcvbuf-size ko-count
                  # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
                  # after-sb-1pri after-sb-2pri always-asbp rr-conflict
                  # ping-timeout data-integrity-alg tcp-cork on-congestion
          }
  }

6. Prepare the disk devices. Both nodes need a backing device, ideally of the same size, and ideally with the same partition number.

  [root@node2 ~]# fdisk /dev/sda
  WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
           switch off the mode (command 'c') and change display units to
           sectors (command 'u').
  Command (m for help): n
  Command action
     e   extended
     p   primary partition (1-4)
  p
  Partition number (1-4): 3
  First cylinder (7859-15665, default 7859):
  Using default value 7859
  Last cylinder, +cylinders or +size{K,M,G} (7859-15665, default 15665): +10G
  Command (m for help): w
  The partition table has been altered!
  Calling ioctl() to re-read partition table.
  WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
  The kernel still uses the old table. The new table will be used at
  the next reboot or after you run partprobe(8) or kpartx(8)
  Syncing disks.
  [root@node2 ~]# kpartx -af /dev/sda
  device-mapper: reload ioctl on sda1 failed: Invalid argument
  create/reload failed on sda1
  device-mapper: reload ioctl on sda2 failed: Invalid argument
  create/reload failed on sda2
  device-mapper: reload ioctl on sda3 failed: Invalid argument
  create/reload failed on sda3
  [root@node2 ~]# partx -a /dev/sda
  BLKPG: Device or resource busy
  error adding partition 1
  BLKPG: Device or resource busy
  error adding partition 2
  BLKPG: Device or resource busy
  error adding partition 3

7. Prepare the resource configuration file, e.g. drbd.res

  resource drbd {
          on node1.zero1.com {                  # "on" introduces a host section, named after the node
                  device    /dev/drbd0;         # the DRBD device node
                  disk      /dev/sda3;          # the backing disk partition
                  address   192.168.1.200:7789; # the socket DRBD listens on
                  meta-disk internal;           # where the metadata lives; internal = on the backing disk itself
          }
          on node2.zero1.com {
                  device    /dev/drbd0;
                  disk      /dev/sda3;
                  address   192.168.1.201:7789;
                  meta-disk internal;
          }
  }

8. Copy the configuration to the other node

  [root@node1 drbd.d]# scp * node2:/etc/drbd.d/

VII. Initialization and Testing

1. Create the resource metadata

  [root@node1 drbd.d]# drbdadm create-md drbd
  Writing meta data...
  initializing activity log
  NOT initializing bitmap
  lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
  New drbd meta data block successfully created.
  lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
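The metadata must exist on both sides, so the same command is also run on node2 (the output is assumed to be equivalent):

  [root@node2 ~]# drbdadm create-md drbd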

2. Start the service

  [root@node2 ~]# service drbd start
  Starting DRBD resources: [
       create res: drbd
     prepare disk: drbd
      adjust disk: drbd
       adjust net: drbd
  ]

The service must be started on both nodes at roughly the same time.
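The matching command on node1 is assumed to be identical:

  [root@node1 ~]# service drbd start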

   3. Check the status

  [root@node2 ~]# cat /proc/drbd
  version: 8.4.3 (api:1/proto:86-101)
  GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
   0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
      ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:10489084

The same information is available through the drbd-overview command:

  [root@node2 ~]# drbd-overview
   0:drbd/0 Connected Secondary/Secondary Inconsistent/Inconsistent C r-----

The output shows that both nodes are currently in the Secondary role and their data are still Inconsistent.

   4. Promote a node

  [root@node1 drbd.d]# drbdadm primary --force drbd
  [root@node1 drbd.d]# drbd-overview
   0:drbd/0 SyncSource Primary/Secondary UpToDate/Inconsistent C r-----
          [>....................] sync'ed:  4.8% (9760/10240)M

   The initial full synchronization is now running; check again once it has finished:

  [root@node1 ~]# cat /proc/drbd
  version: 8.4.3 (api:1/proto:86-101)
  GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
   0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
      ns:10490880 nr:0 dw:0 dr:10496952 al:0 bm:641 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

   5. Create the filesystem

   The filesystem can only be mounted on the Primary node, so the DRBD device can only be formatted after a Primary has been designated:

  [root@node1 ~]# mke2fs -t ext4 /dev/drbd0
  [root@node1 ~]# mkdir /drbd
  [root@node1 ~]# mount /dev/drbd0 /drbd/
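A quick sanity check that the device is mounted (a suggested verification, not in the original):

  [root@node1 ~]# df -hT /drbd   # should list /dev/drbd0 mounted as ext4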

   6. Switching the Primary and Secondary roles

   In a Primary/Secondary DRBD setup, only one node can be Primary at any moment. To swap the roles of the two nodes, the current Primary must first be demoted to Secondary before the former Secondary can be promoted to Primary:

On the primary node:

  [root@node1 ~]# cp /etc/fstab /drbd/
  [root@node1 ~]# umount /drbd/
  [root@node1 ~]# drbdadm secondary drbd

On the secondary node:

  [root@node2 ~]# drbdadm primary drbd
  [root@node2 ~]# mkdir /drbd
  [root@node2 ~]# mount /dev/drbd0 /drbd/
  [root@node2 ~]# ll /drbd/
  total 20
  -rw-r--r-- 1 root root   921 Apr 19 2014 fstab
  drwx------ 2 root root 16384 Apr 19 2014 lost+found

The data written on node1 are now visible on node2.

  [root@node2 ~]# cat /proc/drbd
  version: 8.4.3 (api:1/proto:86-101)
  GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
   0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
      ns:24 nr:10788112 dw:10788136 dr:1029 al:2 bm:642 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

As shown, node2 is now the Primary.

VIII. Installing and Configuring corosync

   1. Install corosync
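The original glosses over this step; the following is a minimal sketch for CentOS 6 running pacemaker as a corosync plugin, which matches the "classic openais (with plugin)" stack shown later (package names, the bindnetaddr value, and paths are assumptions to adapt to your environment):

  [root@node1 ~]# yum install -y corosync pacemaker
  [root@node1 ~]# cd /etc/corosync
  [root@node1 corosync]# cp corosync.conf.example corosync.conf
  # in corosync.conf: set bindnetaddr to the cluster network (e.g. 192.168.1.0)
  # and add a service section so corosync starts pacemaker as a plugin:
  #   service {
  #           ver:  0
  #           name: pacemaker
  #   }
  [root@node1 corosync]# corosync-keygen                           # creates /etc/corosync/authkey from /dev/random
  [root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/
  [root@node1 ~]# service corosync start
  [root@node2 ~]# service corosync start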

   2. Install crmsh

  [root@node1 ~]# yum install -y crmsh-1.2.6-4.el6.x86_64.rpm pssh-2.3.1-2.el6.x86_64.rpm
  [root@node2 ~]# yum install -y crmsh-1.2.6-4.el6.x86_64.rpm pssh-2.3.1-2.el6.x86_64.rpm
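Before defining resources, confirm that both nodes are online and set the two cluster properties that appear in the final configuration below; with no STONITH device and only two nodes, stonith must be disabled and quorum loss ignored:

  [root@node1 ~]# crm status                 # both node1.zero1.com and node2.zero1.com should show as Online
  [root@node1 ~]# crm configure property stonith-enabled=false
  [root@node1 ~]# crm configure property no-quorum-policy=ignore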

   3. Define the DRBD resource

  [root@node1 ~]# crm
  crm(live)# configure
  crm(live)configure# primitive mariadbdrbd ocf:linbit:drbd params drbd_resource=drbd op monitor role=Master interval=10s timeout=20s op monitor role=Slave interval=20s timeout=20s op start timeout=240s op stop timeout=120s

Then define the master/slave resource on top of it:

  crm(live)configure# ms ms_mariadbdrbd mariadbdrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

   4. Define the filesystem resource and its constraints

  crm(live)configure# primitive mariadbstore ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mydata" fstype="ext4" op monitor interval=40s timeout=40s op start timeout=60s op stop timeout=60s
  crm(live)configure# colocation mariadbstore_with_ms_mariadbdrbd inf: mariadbstore ms_mariadbdrbd:Master
  crm(live)configure# order ms_mariadbrbd_before_mariadbstore mandatory: ms_mariadbdrbd:promote mariadbstore:start
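The next step assumes MariaDB is already installed on both nodes with an identical /etc/my.cnf pointing the datadir at the shared mount, and that init never starts mysqld itself (the cluster manages it). A sketch of that preparation (the package name and paths are assumptions):

  # on both nodes
  [root@node1 ~]# yum install -y MariaDB-server   # or any install that provides the /etc/init.d/mysqld script the lsb resource expects
  [root@node1 ~]# chkconfig mysqld off            # the cluster, not init, starts and stops mysqld
  # in /etc/my.cnf on both nodes:
  #   datadir = /mydata/data
  # initialize the datadir once, on whichever node currently has /mydata mounted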

   5. Add the VIP and MariaDB resources and their constraints

  crm(live)configure# primitive madbvip ocf:heartbeat:IPaddr2 params ip="192.168.1.240" op monitor interval=20s timeout=20s on-fail=restart
  crm(live)configure# primitive maserver lsb:mysqld op monitor interval=20s timeout=20s on-fail=restart
  crm(live)configure# verify

Define the constraints:

  crm(live)configure# colocation maserver_with_mariadbstore inf: maserver mariadbstore
  crm(live)configure# order mariadbstore_before_maserver mandatory: mariadbstore:start maserver:start
  crm(live)configure# verify
  crm(live)configure# colocation madbvip_with_maserver inf: madbvip maserver
  crm(live)configure# order madbvip_before_masever mandatory: madbvip maserver
  crm(live)configure# verify
  crm(live)configure# commit

   6. Review all the defined resources

  node node1.zero1.com
  node node2.zero1.com
  primitive madbvip ocf:heartbeat:IPaddr2 \
      params ip="192.168.1.240" \
      op monitor interval="20s" timeout="20s" on-fail="restart"
  primitive mariadbdrbd ocf:linbit:drbd \
      params drbd_resource="drbd" \
      op monitor role="Master" interval="30s" timeout="20s" \
      op monitor role="Slave" interval="60s" timeout="20s" \
      op start timeout="240s" interval="0" \
      op stop interval="0s" timeout="100s"
  primitive mariadbstore ocf:heartbeat:Filesystem \
      params device="/dev/drbd0" directory="/mydata" fstype="ext4" \
      op monitor interval="40s" timeout="40s" \
      op start timeout="60s" interval="0" \
      op stop timeout="60s" interval="0"
  primitive maserver lsb:mysqld \
      op monitor interval="20s" timeout="20s" on-fail="restart"
  ms ms_mariadbdrbd mariadbdrbd \
      meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
  colocation madbvip_with_maserver inf: madbvip maserver
  colocation mariadbstore_with_ms_mariadbdrbd inf: mariadbstore ms_mariadbdrbd:Master
  colocation maserver_with_mariadbstore inf: maserver mariadbstore
  order madbvip_before_masever inf: madbvip maserver
  order mariadbstore_before_maserver inf: mariadbstore:start maserver:start
  order ms_mariadbrbd_before_mariadbstore inf: ms_mariadbdrbd:promote mariadbstore:start
  property $id="cib-bootstrap-options" \
      dc-version="1.1.10-14.el6_5.2-368c726" \
      cluster-infrastructure="classic openais (with plugin)" \
      expected-quorum-votes="2" \
      stonith-enabled="false" \
      no-quorum-policy="ignore"

IX. Testing

   1. Check the running state

  [root@node1 ~]# crm status
  Last updated: Wed Apr 23 16:24:11 2014
  Last change: Wed Apr 23 16:21:50 2014 via cibadmin on node1.zero1.com
  Stack: classic openais (with plugin)
  Current DC: node1.zero1.com - partition with quorum
  Version: 1.1.10-14.el6_5.2-368c726
  2 Nodes configured, 2 expected votes
  5 Resources configured
  Online: [ node1.zero1.com node2.zero1.com ]
   Master/Slave Set: ms_mariadbdrbd [mariadbdrbd]
       Masters: [ node1.zero1.com ]
       Slaves: [ node2.zero1.com ]
   mariadbstore   (ocf::heartbeat:Filesystem):    Started node1.zero1.com
   madbvip        (ocf::heartbeat:IPaddr2):       Started node1.zero1.com
   maserver       (lsb:mysqld):                   Started node1.zero1.com

   2. Manually fail over to the other node

  [root@node1 ~]# crm node standby node1.zero1.com
  [root@node1 ~]# crm status
  Last updated: Wed Apr 23 16:26:05 2014
  Last change: Wed Apr 23 16:25:34 2014 via crm_attribute on node1.zero1.com
  Stack: classic openais (with plugin)
  Current DC: node1.zero1.com - partition with quorum
  Version: 1.1.10-14.el6_5.2-368c726
  2 Nodes configured, 2 expected votes
  5 Resources configured
  Node node1.zero1.com: standby
  Online: [ node2.zero1.com ]
   Master/Slave Set: ms_mariadbdrbd [mariadbdrbd]
       Masters: [ node2.zero1.com ]
       Stopped: [ node1.zero1.com ]
   mariadbstore   (ocf::heartbeat:Filesystem):    Started node2.zero1.com
   madbvip        (ocf::heartbeat:IPaddr2):       Started node2.zero1.com
   maserver       (lsb:mysqld):                   Started node2.zero1.com

All the resources are now running on node2.
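As a final check, a client should be able to reach MariaDB through the VIP regardless of which node is active (a hypothetical test; the account and password are assumptions):

  [root@node1 ~]# mysql -h 192.168.1.240 -u root -p -e 'SELECT @@hostname;'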
