Final Test
First, start confd with the same command as before, but drop the -onetime flag and leave it running in the background as a long-lived daemon, so that whenever the registration directory in etcd changes, the haproxy configuration file is regenerated in near real time.
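A sketch of what the long-running invocation could look like, assuming the same etcd backend flags as the earlier one-shot run (the etcd node address and the 10-second interval are assumptions):

nohup confd -interval 10 -backend etcd -node http://127.0.0.1:4001 &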
Then, on the two slaves, start two nginx containers on one machine and a single container on the other, to simulate the two domains a.abc.com and b.abc.com described above.
docker run -P -v `pwd`/html:/var/www/html -d dockerfile/nginx
This publishes all of the image's exposed ports (80 and 443) and mounts the current html directory into the container. In that html directory, create a 1.html file containing the container ID, internal IP, and external (host) IP for testing.
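A hypothetical way to generate such a page on a slave right after starting a container (the container lookup, inspect template, and hard-coded host IP are illustrative assumptions):

# grab the short ID and internal IP of the most recently started container
CONTAINER_ID=$(docker ps -lq)
INTERNAL_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' $CONTAINER_ID)
# write the test page into the mounted html directory (host IP hard-coded here)
echo "I am $CONTAINER_ID on $INTERNAL_IP (Host: 10.211.55.12)" > html/1.html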
Once the containers on both slaves have been started this way, register them via etcdctl on the master:
etcdctl set /services/web/a.abc.com/server1/ip 10.211.55.12
etcdctl set /services/web/a.abc.com/server1/port 49154
etcdctl set /services/web/a.abc.com/server2/ip 10.211.55.12
etcdctl set /services/web/a.abc.com/server2/port 49156
etcdctl set /services/web/b.abc.com/server1/ip 10.211.55.13
etcdctl set /services/web/b.abc.com/server1/port 49154
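The host ports above (49154, 49156) are whatever Docker assigned when publishing port 80 with -P; they can be looked up on each slave before being written into etcd, and the registered keys can then be checked from the master (the container ID below is only an example):

docker port a80b37f78259 80          # prints e.g. 0.0.0.0:49154
etcdctl ls --recursive /services/web # verify the registered keys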
confd checks the watched directory for changes at each interval and reports them in its log:
INFO /home/babydragon/haproxy/haproxy.cfg has md5sum c8fb4ae9c10086b9f94cd11d0edecec1 should be 048c844d73c062014c0fd77d9548c47d
2015-02-09T11:42:00+08:00 master confd[3781]: INFO Target config /home/babydragon/haproxy/haproxy.cfg out of sync
2015-02-09T11:42:00+08:00 master confd[3781]: INFO Target config /home/babydragon/haproxy/haproxy.cfg has been updated
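Optionally, the regenerated file can be syntax-checked before it is used (this assumes a haproxy binary is available where the file is generated; in this setup haproxy itself runs in a container, so the same check could be run inside that container instead):

haproxy -c -f /home/babydragon/haproxy/haproxy.cfg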
The haproxy configuration is then updated accordingly:
acl is_a.abc.com hdr(host) -i a.abc.com
acl is_b.abc.com hdr(host) -i b.abc.com
use_backend a.abc.com_cluster if is_a.abc.com
use_backend b.abc.com_cluster if is_b.abc.com

backend a.abc.com_cluster
    cookie SERVERID insert indirect nocache
    server server1 10.211.55.12:49154 cookie server1 check
    server server2 10.211.55.12:49156 cookie server2 check

backend b.abc.com_cluster
    cookie SERVERID insert indirect nocache
    server server1 10.211.55.13:49154 cookie server1 check
Restart the haproxy container (automatic reloading of haproxy.cfg is not configured; the file is only read at startup) and check the status page: both backends are now in effect.
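A sketch of the restart, assuming the container was named haproxy when it was started (the name is an assumption):

docker restart haproxy

With the new configuration loaded, simulate a couple of requests with curl: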
curl -H "Host: a.abc.com" http://10.211.55.11:49154/1.html
I am a80b37f78259 on 172.17.0.4 (Host: 10.211.55.12)
curl -H "Host: a.abc.com" http://10.211.55.11:49154/1.html
I am 209b20bab7ce on 172.17.0.3 (Host: 10.211.55.12)
Because load balancing is configured as round robin, the two requests landed on different containers: haproxy is correctly distributing requests across both containers.
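Note that the generated backends also insert a SERVERID cookie for session stickiness, so a client that returns the cookie stays pinned to one container. This can be observed with curl by reusing a cookie jar (the file name is arbitrary):

curl -c cookies.txt -b cookies.txt -H "Host: a.abc.com" http://10.211.55.11:49154/1.html
curl -c cookies.txt -b cookies.txt -H "Host: a.abc.com" http://10.211.55.11:49154/1.html
# both requests should now report the same container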
Reposted from: https://coolex.info/blog/485.html