Linux Cluster Architecture (Part 2)

Table of Contents

8. LVS DR Mode Setup

9. keepalived + LVS

10. Further Reading

8. LVS DR Mode Setup

1. Lab environment

Four machines:

client: 10.0.1.50

Director node: (ens32 10.0.1.55, VIP on ens32:0 10.0.1.58)

Real server1: (ens32 10.0.1.56 vip lo:0 10.0.1.58)

Real server2: (ens32 10.0.1.57 vip lo:0 10.0.1.58)

2. Installation

// The two real servers need a web service installed; this was done earlier, so it is skipped here.
// Install the ipvsadm package on the director (see the LVS NAT section for reference).
[root@lvs-dr ~]# yum -y install ipvsadm

3. Configure the script on the director

[root@lvs-dr1 ~]# vim /usr/local/sbin/lvs-dr.sh
#!/bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/usr/sbin/ipvsadm
vip=10.0.1.58
rs1=10.0.1.56
rs2=10.0.1.57
ifconfig ens32:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev ens32:0
$ipv -C
$ipv -A -t $vip:80 -s rr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 3
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1

// Grant the script 755 permission
[root@lvs-dr1 ~]# chmod 755 /usr/local/sbin/lvs-dr.sh
// Execute the script
[root@lvs-dr1 ~]# /usr/local/sbin/lvs-dr.sh
// Check the status
[root@lvs-dr1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.0.1.58:80 rr
-> 10.0.1.56:80 Route 3 0 0
-> 10.0.1.57:80 Route 1 0 0
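
Optionally, ipvsadm can also display live connection entries and traffic counters for the virtual service (a quick sketch; the counters stay at zero until clients start connecting):

// List the connections currently tracked by LVS
[root@lvs-dr1 ~]# ipvsadm -L -n -c
// Per-virtual-server and per-real-server packet/byte counters
[root@lvs-dr1 ~]# ipvsadm -L -n --stats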

4. Configure the script on the two real servers

[root@lvs-backend1 ~]# vim /usr/local/sbin/lvs-dr-rs.sh
#!/bin/bash
vip=10.0.1.58
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce //赋予755权限,然后执行
[root@lvs-backend1 ~]# chmod 755 /usr/local/sbin/lvs-dr-rs.sh //执行
[root@lvs-backend1 ~]# /usr/local/sbin/lvs-dr-rs.sh
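
A quick sanity check on each real server (a small sketch; it only re-reads the values the script above has just set) confirms the VIP is bound to lo and the ARP settings are in place:

// The VIP should be bound to lo:0
[root@lvs-backend1 ~]# ip addr show lo | grep 10.0.1.58
// Expected values: arp_ignore = 1, arp_announce = 2
[root@lvs-backend1 ~]# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce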

5. Testing

// The rr scheduling algorithm is currently in use; test from the client 10.0.1.50
Last login: Mon Jul 23 14:47:55 2018
[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a2:07:b1 brd ff:ff:ff:ff:ff:ff
inet 10.0.1.50/24 brd 10.0.1.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fea2:7b1/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]#
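
Rather than running curl by hand, a short loop on the client summarizes the distribution (a minimal sketch using the VIP from this setup; with rr scheduling the two real servers should each answer roughly half of the requests):

[root@localhost ~]# for i in $(seq 1 10); do curl -s 10.0.1.58; done | sort | uniq -c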

9. keepalived + LVS

LVS provides load balancing but has no built-in health checking: if a real server fails, LVS keeps forwarding requests to it, and those requests simply fail. Keepalived adds health checking and, at the same time, provides high availability for LVS itself, removing the director as a single point of failure. In fact, keepalived was originally written for LVS.
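
To see what keepalived automates, here is a naive hand-rolled health check (a hypothetical sketch, not part of the original setup; it assumes the VIP and real server addresses used in this article and would have to be run periodically on the director):

#!/bin/bash
# naive health-check loop: probe each real server over HTTP and
# add or remove it from the LVS table accordingly
vip=10.0.1.58
for rs in 10.0.1.56 10.0.1.57; do
    if curl -s -o /dev/null --connect-timeout 2 http://$rs/; then
        # RS answers on port 80: make sure it is (back) in the LVS table
        ipvsadm -a -t $vip:80 -r $rs:80 -g -w 1 2>/dev/null
    else
        # RS is down: stop forwarding traffic to it
        ipvsadm -d -t $vip:80 -r $rs:80 2>/dev/null
    fi
done

Keepalived's TCP_CHECK block (configured below) performs this kind of probe on a timer (delay_loop), and its VRRP instance additionally moves the VIP to a backup director if the active one fails.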

1. Lab environment

Four nodes:

Keepalived1 + lvs1(Director1):10.0.1.55

Keepalived2 + lvs2(Director2):10.0.1.59

Real server1:10.0.1.56

Real server2:10.0.1.57

VIP: 10.0.1.58

2. Software installation

// Install on both keepalived + LVS nodes
[root@localhost ~]# yum install ipvsadm keepalived -y
// The two real servers need nginx; it was installed in the earlier environment, so this step is skipped here.

3. Configuration scripts

// Create the script on both real server nodes
[root@lvs-backend1 ~]# vim /usr/local/sbin/lvs-dr-rs.sh
#!/bin/bash
vip=10.0.1.58
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce //赋予755权限,然后执行
[root@lvs-backend1 ~]# chmod 755 /usr/local/sbin/lvs-dr-rs.sh //执行
[root@lvs-backend1 ~]# /usr/local/sbin/lvs-dr-rs.sh //两台keepalived节点配置
//master节点配置文件
[root@lvs-dr1 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    # on the backup server this is BACKUP
    state MASTER
    interface ens32
    virtual_router_id 51
    # on the backup server this is 90
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass aminglinux
    }
    virtual_ipaddress {
        10.0.1.58
    }
}
virtual_server 10.0.1.58 80 {
    # query the real servers' status every 10 seconds
    delay_loop 10
    # LVS scheduling algorithm
    lb_algo wrr
    # DR mode
    lb_kind DR
    # connections from the same IP go to the same real server for 60 seconds;
    # commented out in this lab, otherwise the rr effect is not visible
    #persistence_timeout 60
    # check the real servers' status over TCP
    protocol TCP
    real_server 10.0.1.56 80 {
        # weight
        weight 1
        TCP_CHECK {
            # 10-second connect timeout
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 10.0.1.57 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

// Backup node configuration file
[root@lvs-dr2 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    # BACKUP on the backup server
    state BACKUP
    interface ens32
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass aminglinux
    }
    virtual_ipaddress {
        10.0.1.58
    }
}
virtual_server 10.0.1.58 80 {
    # query the real servers' status every 10 seconds
    delay_loop 10
    # LVS scheduling algorithm
    lb_algo rr
    # DR mode
    lb_kind DR
    # connections from the same IP go to the same real server for 60 seconds
    #persistence_timeout 60
    # check the real servers' status over TCP
    protocol TCP
    real_server 10.0.1.56 80 {
        # weight
        weight 1
        TCP_CHECK {
            # 10-second connect timeout
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 10.0.1.57 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

4. Enable IP forwarding on both keepalived nodes

[root@lvs-dr1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
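
This echo does not survive a reboot; to make forwarding persistent (a small sketch, to be run on both director nodes), add it to /etc/sysctl.conf:

[root@lvs-dr1 ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@lvs-dr1 ~]# sysctl -p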

5. Start keepalived on both nodes

[root@lvs-dr1 ~]# systemctl start keepalived.service
[root@lvs-dr2 ~]# systemctl start keepalived.service
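
Keepalived now builds the IPVS rules from its virtual_server section, so the lvs-dr.sh script from section 8 is not needed on these two nodes. A quick verification (a sketch; the listing should resemble the ipvsadm output shown earlier):

// The virtual server 10.0.1.58:80 and both real servers should be listed
[root@lvs-dr1 ~]# ipvsadm -L -n
// The VIP should be present on the master only
[root@lvs-dr1 ~]# ip addr show ens32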

6. Testing

// Test 1: manually stop nginx on node 10.0.1.56, then test access from the client

// On 10.0.1.56
[root@lvs-backend1 ~]# /usr/local/nginx/sbin/nginx -s stop
[root@lvs-backend1 ~]# lsof -i :80

// Test from the 10.0.1.50 client
Last login: Mon Jul 23 14:49:10 2018 from 10.0.1.229
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
// Result as expected: requests never reach node 10.0.1.56; only node 10.0.1.57's content is returned.
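
The keepalived health check should also have removed the failed real server from the IPVS table on the active director; a quick way to confirm (a sketch, run on whichever director currently holds the VIP):

// 10.0.1.56:80 should no longer appear under 10.0.1.58:80
[root@lvs-dr1 ~]# ipvsadm -L -n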
// Test 2: manually restart nginx on node 10.0.1.56, then test access from the client again
// On 10.0.1.56
[root@lvs-backend1 ~]# lsof -i :80
[root@lvs-backend1 ~]# /usr/local/nginx/sbin/nginx
[root@lvs-backend1 ~]# lsof -i :80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 2969 root 6u IPv4 48805 0t0 TCP *:http (LISTEN)
nginx 2970 nginx 6u IPv4 48805 0t0 TCP *:http (LISTEN)
nginx 2971 nginx 6u IPv4 48805 0t0 TCP *:http (LISTEN)
// Test on the 10.0.1.50 client
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
// Result as expected: content from node 10.0.1.56 and node 10.0.1.57 is returned alternately, following the rr scheduling algorithm.

// Test keepalived's HA failover
// Check with ip addr: the VIP 10.0.1.58 is currently on the master
[root@lvs-dr1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:85:24:8c brd ff:ff:ff:ff:ff:ff
inet 10.0.1.55/24 brd 10.0.1.255 scope global ens32
valid_lft forever preferred_lft forever
inet 10.0.1.58/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe85:248c/64 scope link
valid_lft forever preferred_lft forever
// Stop keepalived on the master
[root@lvs-dr1 ~]# systemctl stop keepalived.service
[root@lvs-dr1 ~]#

// On dr2, the VIP has been taken over
[root@lvs-dr2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:dd:53:4e brd ff:ff:ff:ff:ff:ff
inet 10.0.1.59/24 brd 10.0.1.255 scope global ens32
valid_lft forever preferred_lft forever
inet 10.0.1.58/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::c388:e67a:4ac3:6566/64 scope link
valid_lft forever preferred_lft forever

// Test on the 10.0.1.50 client
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
[root@localhost ~]# curl 10.0.1.58
I am Lvs-backend1!!!
[root@localhost ~]# curl 10.0.1.58
I am lvs-backend2!!!
// The back-end website remains reachable through the VIP, confirming keepalived's failover behavior.

// Restart keepalived on the master
[root@lvs-dr1 ~]# systemctl start keepalived.service
[root@lvs-dr1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:85:24:8c brd ff:ff:ff:ff:ff:ff
inet 10.0.1.55/24 brd 10.0.1.255 scope global ens32
valid_lft forever preferred_lft forever
inet 10.0.1.58/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe85:248c/64 scope link
valid_lft forever preferred_lft forever
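// The VIP 10.0.1.58 has moved back to the master: with keepalived's default preemption, the higher priority (100 vs 90) wins once the master's keepalived is running again.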

10. Further Reading

Comparison of heartbeat and keepalived

http://blog.csdn.net/yunhua_lee/article/details/9788433

How DRBD works and how to configure it

http://502245466.blog.51cto.com/7559397/1298945

mysql+keepalived

http://lizhenliang.blog.51cto.com/7876557/1362313

The three LVS modes explained in detail

http://www.it165.net/admin/html/201401/2248.html

LVS scheduling algorithms

http://www.aminglinux.com/bbs/thread-7407-1-1.html

About arp_ignore and arp_announce

http://www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html

How LVS works (principles)

http://blog.csdn.net/pi9nc/article/details/23380589

haproxy+keepalived

http://blog.csdn.net/xrt95050/article/details/40926255

Comparison of nginx, LVS, and haproxy

http://www.csdn.net/article/2014-07-24/2820837

Custom check scripts in keepalived (vrrp_script)

http://my.oschina.net/hncscwc/blog/158746

How to run LVS DR mode with only one public IP

http://storysky.blog.51cto.com/628458/338726
