Automatic failover for an LVS high-availability cluster with keepalived


Let's get straight to work.

keepalived configuration on dr1:

/etc/keepalived/keepalived.conf


global_defs {
        router_id LVS1                          # LVS router ID; should be unique within the network
}
vrrp_sync_group test {                          # VRRP sync group
        group {
                loadbalance
        }
}
vrrp_instance loadbalance {
        state MASTER                            # Initial state, MASTER or BACKUP; must be uppercase
        interface eth0                          # Interface that serves external traffic
        lvs_sync_daemon_interface eth0          # Interface used by the LVS sync daemon
        virtual_router_id 51                    # Virtual router ID; must match on master and backup
        priority 180                            # Priority; the higher the value, the higher the priority
        advert_int 1                            # VRRP advertisement interval (seconds)
        authentication {                        # Authentication type and password
                auth_type PASS
                auth_pass 1111
        }
        virtual_ipaddress {
                192.168.56.200
        }
}
virtual_server 192.168.56.200 80 {
        delay_loop 6                            # Health-check interval (seconds)
        lb_algo rr                              # Load-balancing scheduling algorithm (round robin)
        lb_kind DR                              # Forwarding method (Direct Routing)
        #persistence_timeout 20                 # Session persistence timeout; useful for forums, etc.
        protocol TCP                            # Protocol
        real_server 192.168.56.105 80 {
                weight 3                        # Real server weight
                TCP_CHECK {
                        connect_timeout 3
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
        real_server 192.168.56.106 80 {
                weight 3
                TCP_CHECK {
                        connect_timeout 3
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
}


keepalived configuration on dr2 (only the router_id, state, and priority differ from dr1):

/etc/keepalived/keepalived.conf

global_defs {
        router_id LVS2
}
vrrp_sync_group test {
        group {
                loadbalance
        }
}
vrrp_instance loadbalance {
        state BACKUP
        interface eth0
        lvs_sync_daemon_interface eth0
        virtual_router_id 51
        priority 150
        advert_int 1
        authentication {
                auth_type PASS
                auth_pass 1111
        }
        virtual_ipaddress {
                192.168.56.200
        }
}
virtual_server 192.168.56.200 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        #persistence_timeout 20
        protocol TCP
        real_server 192.168.56.105 80 {
                weight 3
                TCP_CHECK {
                        connect_timeout 3
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
        real_server 192.168.56.106 80 {
                weight 3
                TCP_CHECK {
                        connect_timeout 3
                        nb_get_retry 3
                        delay_before_retry 3
                        connect_port 80
                }
        }
}



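This walkthrough assumes that ipvsadm and keepalived are already installed on both directors. On a CentOS/RHEL-style system (an assumption here; adjust for your distribution) that could be done with:

yum install -y ipvsadm keepalived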
Start keepalived on dr1:

keepalived -f /etc/keepalived/keepalived.conf

Check the status:


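A quick way to verify that dr1 has come up as MASTER (a sketch; the VIP and interface are the ones from the config above):

ip addr show eth0 | grep 192.168.56.200   # the VIP should be bound on eth0 of the MASTER
ipvsadm -ln                               # the virtual server and both real servers should be listed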


Then start keepalived on dr2:

keepalived -f /etc/keepalived/keepalived.conf

Check the status:

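While dr1 is healthy, dr2 stays in BACKUP state and should not hold the VIP. A quick check (assuming logs go to /var/log/messages, which is distribution-dependent):

ip addr show eth0 | grep 192.168.56.200        # should return nothing on the BACKUP
grep -i 'keepalived' /var/log/messages | tail  # should show the instance entering BACKUP STATE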



Run the following script on realserver1 and realserver2 respectively:

/home/lhb/sh/rs.sh


#!/bin/bash
# Bind the VIP to a loopback alias and suppress ARP for it, so the real
# servers can accept DR-forwarded packets without answering ARP for the VIP.
vip=192.168.56.200
ifconfig lo:0 $vip netmask 255.255.255.255
route add -host $vip dev lo:0
route -n
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
#sysctl -p   # view the sysctl changes; optional


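One way to run the script and confirm the result on each real server (a sketch, using the path from above):

chmod +x /home/lhb/sh/rs.sh
/home/lhb/sh/rs.sh
ifconfig lo:0        # lo:0 should now carry 192.168.56.200 with a /32 netmask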



Then open a client and access the VIP:

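Any client on the 192.168.56.0/24 network will do; with curl, for example (a sketch), repeated requests should be answered alternately by the two real servers because of the rr scheduler:

curl http://192.168.56.200/
curl http://192.168.56.200/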


Run ipvsadm -ln on dr1:

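The counters depend on traffic, but with both real servers healthy the table should look roughly like this:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.56.200:80 rr
  -> 192.168.56.105:80            Route   3      0          0
  -> 192.168.56.106:80            Route   3      0          0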



Then stop keepalived on dr1:

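Since keepalived was started directly from the binary rather than as a service, the simplest way to simulate the failure is to kill the process (an assumption; use the service equivalent if you run it as a service):

killall keepalived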


At this point we keep accessing the VIP, and access remains normal:



Access works normally, which shows that the web service has not stopped.

From this we can infer that dr2 has already taken over the service. Now check the status on dr2:

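On dr2 the takeover can be confirmed like this (a sketch; the log path is assumed to be /var/log/messages):

ip addr show eth0 | grep 192.168.56.200                # the VIP should now be bound on dr2
grep -i 'Entering MASTER STATE' /var/log/messages | tail -n 1
ipvsadm -ln                                            # keepalived has built the same IPVS table on dr2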


This shows that when the failure occurred, the service was automatically transferred from dr1 to dr2.

Then, once dr1 has been repaired, we run keepalived -f /etc/keepalived/keepalived.conf on it again.



Accessing the VIP at this point, the service is still reachable.

Go back to dr2 and check the status:

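Because dr1's priority (180) is higher than dr2's (150) and preemption is not disabled, dr1 takes the VIP back once it rejoins. On dr2 this can be confirmed with (a sketch):

ip addr show eth0 | grep 192.168.56.200    # should return nothing again; dr2 is back in BACKUP state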


This concludes the demonstration of keepalived providing automatic master/backup failover for LVS.


This article is reposted from the birdinroom 51CTO blog. Original link: http://blog.51cto.com/birdinroom/1402004. Please contact the original author if you wish to reprint it.
