1 Multi-Harbor High Availability Overview
Shared backend storage is a fairly standard approach: multiple Harbor instances share the same backend storage, so any image persisted to storage by one instance can be read by the others. A front-end load balancer such as Keepalived distributes requests across the instances, providing load balancing and avoiding a single point of failure. The architecture is shown below:
Design notes:
Shared storage: Harbor's backend storage currently supports AWS S3, OpenStack Swift, Ceph, etc.; this lab uses NFS.
Shared sessions: by default Harbor stores sessions in Redis. Redis can be split out so that sessions are shared across instances, and the external Redis can itself be made highly available with Redis Sentinel or Redis Cluster; this lab uses a single Redis instance.
Database high availability: multiple MySQL instances cannot share one set of MySQL data files, so Harbor's database is split out and all instances share one external database. The external MySQL can itself be made highly available with MySQL Cluster or similar; this lab uses a single MySQL instance.
2 Deployment
2.1 Preparation
| Node | IP address | Notes |
| -------- | ------------ | --------------------------------------------------------------- |
| docker01 | 172.24.8.111 | Docker harbor node01 |
| docker02 | 172.24.8.112 | Docker harbor node02 |
| docker03 | 172.24.8.113 | MySQL + Redis node |
| docker04 | 172.24.8.114 | Docker client, used to test the registry |
| nfsslb | 172.24.8.71 | Shared NFS storage node; Keepalived node, VIP: 172.24.8.200/32 |
| slb02 | 172.24.8.72 | Keepalived node, VIP: 172.24.8.200/32 |
Architecture diagram:
Prerequisites:
- Install docker and docker-compose (see 《009.Docker Compose基础使用》);
- Set up NTP time synchronization (recommended);
- Open or disable the firewall and SELinux as needed;
- Add a hosts entry on the nfsslb and slb02 nodes: echo "172.24.8.200 reg.harbor.com" >> /etc/hosts
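The hosts-entry command above appends unconditionally, so re-running it leaves duplicate lines in /etc/hosts. As a minimal sketch (the `add_host_entry` helper name is mine, not from the source), an idempotent variant could look like:

```shell
#!/bin/sh
# add_host_entry FILE IP NAME: append "IP NAME" to FILE only if NAME is
# not already present, so the command can be re-run safely.
add_host_entry() {
    file=$1; ip=$2; name=$3
    grep -qw "$name" "$file" 2>/dev/null || echo "$ip $name" >> "$file"
}

# Demo against a scratch file; the real target would be /etc/hosts:
tmp=$(mktemp)
add_host_entry "$tmp" 172.24.8.200 reg.harbor.com
add_host_entry "$tmp" 172.24.8.200 reg.harbor.com   # second call is a no-op
cat "$tmp"
```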
2.2 Create the NFS share
[root@nfsslb ~]# yum -y install nfs-utils*
[root@nfsslb ~]# mkdir /myimages #shared directory for images
[root@nfsslb ~]# mkdir /mydatabase #directory for database data
[root@nfsslb ~]# echo -e "/dev/vg01/lv01 /myimages ext4 defaults 0 0\n/dev/vg01/lv02 /mydatabase ext4 defaults 0 0">> /etc/fstab
[root@nfsslb ~]# mount -a
[root@nfsslb ~]# vi /etc/exports
/myimages 172.24.8.0/24(rw,no_root_squash)
/mydatabase 172.24.8.0/24(rw,no_root_squash)
[root@nfsslb ~]# systemctl start nfs.service
[root@nfsslb ~]# systemctl enable nfs.service
Note: the NFS server node uses dedicated LVM volumes as the NFS export directories and configures the corresponding shares; for more NFS configuration see 《004.NFS配置实例》.
2.3 Mount the NFS share
root@docker01:~# apt-get -y install nfs-common
root@docker02:~# apt-get -y install nfs-common
root@docker03:~# apt-get -y install nfs-common
root@docker01:~# mkdir /data
root@docker02:~# mkdir /data
root@docker03:~# mkdir /database
root@docker01:~# echo "172.24.8.71:/myimages /data nfs defaults,_netdev 0 0">> /etc/fstab
root@docker02:~# echo "172.24.8.71:/myimages /data nfs defaults,_netdev 0 0">> /etc/fstab
root@docker03:~# echo "172.24.8.71:/mydatabase /database nfs defaults,_netdev 0 0">> /etc/fstab
root@docker01:~# mount -a
root@docker02:~# mount -a
root@docker03:~# mount -a
root@docker03:~# mkdir -p /database/mysql
root@docker03:~# mkdir -p /database/redis
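The fstab entries above all follow the same `server:export mountpoint nfs defaults,_netdev 0 0` pattern (`_netdev` delays mounting until the network is up). As an illustrative sketch (the helper name is mine), they can be generated rather than typed by hand:

```shell
#!/bin/sh
# nfs_fstab_line SERVER EXPORT MOUNTPOINT: print an fstab entry for an
# NFS mount, with _netdev so the mount waits for the network at boot.
nfs_fstab_line() {
    printf '%s:%s %s nfs defaults,_netdev 0 0\n' "$1" "$2" "$3"
}

# The entries used on docker01/02 and on docker03:
nfs_fstab_line 172.24.8.71 /myimages /data
nfs_fstab_line 172.24.8.71 /mydatabase /database
```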
2.4 Deploy the external MySQL and Redis
root@docker03:~# mkdir docker_compose/
root@docker03:~# cd docker_compose/
root@docker03:~/docker_compose# vi docker-compose.yml
version: '3'
services:
  mysql-server:
    hostname: mysql-server
    restart: always
    container_name: mysql-server
    image: mysql:5.7
    volumes:
      - /database/mysql:/var/lib/mysql
    command: --character-set-server=utf8
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: x19901123
#    logging:
#      driver: "syslog"
#      options:
#        syslog-address: "tcp://172.24.8.112:1514"
#        tag: "mysql"
  redis:
    hostname: redis-server
    container_name: redis-server
    restart: always
    image: redis:3
    volumes:
      - /database/redis:/data
    ports:
      - '6379:6379'
#    logging:
#      driver: "syslog"
#      options:
#        syslog-address: "tcp://172.24.8.112:1514"
#        tag: "redis"
Note: the log container is a Harbor service, so while Harbor is not yet deployed the logging settings must stay commented out; once Harbor is up, uncomment them and run `docker-compose up -d` again.
root@docker03:~/docker_compose# docker-compose up -d
root@docker03:~/docker_compose# docker-compose ps #confirm the containers are up
root@docker03:~/docker_compose# netstat -tlunp #confirm the ports are listening
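Before pointing Harbor at the external backends, it helps to confirm that 3306 and 6379 actually accept connections from the Harbor nodes. This is an illustrative sketch, not from the source (the function name is mine; it shells out to bash for its `/dev/tcp` feature, so no extra tools are needed on the node):

```shell
#!/bin/sh
# wait_for_port HOST PORT [TRIES]: poll a TCP port once per second until
# it accepts a connection, or give up after TRIES attempts (default 30).
wait_for_port() {
    host=$1; port=$2; tries=${3:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        # bash's /dev/tcp pseudo-device opens a TCP connection
        if bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# e.g. on a Harbor node, before running ./install.sh:
# wait_for_port 172.24.8.113 3306 && wait_for_port 172.24.8.113 6379 && echo "backends up"
```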
2.5 Download Harbor
root@docker01:~# wget https://storage.googleapis.com/harbor-releases/harbor-offline-installer-v1.5.4.tgz
root@docker01:~# tar xvf harbor-offline-installer-v1.5.4.tgz
Note: repeat the steps above on docker02.
2.6 Import the registry schema
root@docker01:~# apt-get -y install mysql-client
root@docker01:~# cd harbor/ha/
root@docker01:~/harbor/ha# ll
root@docker01:~/harbor/ha# mysql -h172.24.8.113 -uroot -p
mysql> set session sql_mode='STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'; #the sql_mode must be changed
mysql> source ./registry.sql #import the registry tables into the external database
mysql> exit
Note: the import only needs to be done once.
2.7 Adjust the Harbor configuration
root@docker01:~/harbor/ha# cd /root/harbor/
root@docker01:~/harbor# vi harbor.cfg #edit the Harbor configuration file
hostname = 172.24.8.111
db_host = 172.24.8.113
db_password = x19901123
db_port = 3306
db_user = root
redis_url = 172.24.8.113:6379
root@docker01:~/harbor# vi prepare
empty_subj = "/C=/ST=/L=/O=/CN=/"
Change it to:
empty_subj = "/C=US/ST=California/L=Palo Alto/O=VMware, Inc./OU=Harbor/CN=notarysigner"
root@docker01:~/harbor# ./prepare #load the configuration
Note: apply the same configuration on docker02.
Because an external MySQL and Redis are used, the components that talk to the database are UI and jobservice (per the architecture diagram), so they must be adjusted accordingly; running the prepare command automatically syncs the database parameters into ./common/config/ui/env and ./common/config/adminserver/env.
root@docker01:~/harbor# cat ./common/config/ui/env #verify
_REDIS_URL=172.24.8.113:6379
root@docker01:~/harbor# cat ./common/config/adminserver/env | grep MYSQL #verify
MYSQL_HOST=172.24.8.113
MYSQL_PORT=3306
MYSQL_USR=root
MYSQL_PWD=x19901123
MYSQL_DATABASE=registry
2.8 Deploy with docker-compose
root@docker01:~/harbor# cp docker-compose.yml docker-compose.yml.bak
root@docker01:~/harbor# cp ha/docker-compose.yml .
root@docker01:~/harbor# vi docker-compose.yml
log:
  ports:
    - 1514:10514 #the log service must also serve the external redis and mysql, so this is the only change needed
root@docker01:~/harbor# ./install.sh
Note: because Redis and MySQL are deployed externally, the redis and mysql services, and the other services' dependencies on them, must be removed or commented out in docker-compose.yml; the official Harbor release already ships a modified docker-compose file in the ha directory.
Deploy Harbor on docker02 by repeating 2.5-2.8.
2.9 Rebuild the external Redis and MySQL
Uncomment the logging-related entries.
root@docker03:~/docker_compose# docker-compose up -d
root@docker03:~/docker_compose# docker-compose ps #confirm the containers are up
root@docker03:~/docker_compose# netstat -tlunp #confirm the ports are listening
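For reference, an uncommented logging stanza (shown here for the mysql-server service; the redis one is identical apart from the tag) looks like:

```yaml
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://172.24.8.112:1514"
        tag: "mysql"
```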
2.10 Install Keepalived
[root@nfsslb ~]# yum -y install gcc gcc-c++ make kernel-devel kernel-tools kernel-tools-libs kernel libnl libnl-devel libnfnetlink-devel openssl-devel
[root@nfsslb ~]# cd /tmp/
[root@nfsslb tmp]# tar -zxvf keepalived-2.0.8.tar.gz
[root@nfsslb tmp]# cd keepalived-2.0.8/
[root@nfsslb keepalived-2.0.8]# ./configure --sysconf=/etc --prefix=/usr/local/keepalived
[root@nfsslb keepalived-2.0.8]# make && make install
Note: repeat the steps above on slb02.
2.11 Configure Keepalived
[root@nfsslb ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
root@docker01:~# scp harbor/ha/sample/active_active/keepalived_active_active.conf root@172.24.8.71:/etc/keepalived/keepalived.conf
root@docker01:~# scp harbor/ha/sample/active_active/check.sh root@172.24.8.71:/usr/local/bin/check.sh
root@docker01:~# scp harbor/ha/sample/active_active/check.sh root@172.24.8.72:/usr/local/bin/check.sh
[root@nfsslb ~]# chmod u+x /usr/local/bin/check.sh
[root@slb02 ~]# chmod u+x /usr/local/bin/check.sh
[root@nfsslb ~]# vi /etc/keepalived/keepalived.conf
global_defs {
    router_id haborlb
}
vrrp_sync_group VG1 {
    group {
        VI_1
    }
}
vrrp_instance VI_1 {
    interface eth0
    track_interface {
        eth0
    }
    state MASTER
    virtual_router_id 51
    priority 10
    virtual_ipaddress {
        172.24.8.200
    }
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass d0cker
    }
}
virtual_server 172.24.8.200 80 {
    delay_loop 15
    lb_algo rr
    lb_kind DR
    protocol TCP
    nat_mask 255.255.255.0
    persistence_timeout 10
    real_server 172.24.8.111 80 {
        weight 10
        MISC_CHECK {
            misc_path "/usr/local/bin/check.sh 172.24.8.111"
            misc_timeout 5
        }
    }
    real_server 172.24.8.112 80 {
        weight 10
        MISC_CHECK {
            misc_path "/usr/local/bin/check.sh 172.24.8.112"
            misc_timeout 5
        }
    }
}
[root@nfsslb ~]# scp /etc/keepalived/keepalived.conf root@172.24.8.72:/etc/keepalived/keepalived.conf #copy the Keepalived configuration to slb02
[root@slb02 ~]# vi /etc/keepalived/keepalived.conf
state BACKUP
priority 8
Note: Harbor officially provides the Keepalived configuration file and health-check script, which can be used directly;
set slb02 to BACKUP with a priority lower than the MASTER's, and leave everything else at the defaults.
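Harbor's shipped check.sh is used as-is above. Purely as an illustration of what a MISC_CHECK probe does (this sketch is mine, not the shipped script, and assumes curl is available on the slb nodes), a minimal health check might look like:

```shell
#!/bin/bash
# check_node NODE: exit 0 if the node's Harbor front end answers HTTP on
# port 80 within the timeout, nonzero otherwise. While the probe fails,
# keepalived's MISC_CHECK removes that real server from the pool.
check_node() {
    curl -fs -o /dev/null --connect-timeout 3 --max-time 5 "http://$1/"
}

# keepalived invokes the shipped script as: /usr/local/bin/check.sh 172.24.8.111
```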
2.12 Configure LVS on the slb nodes
[root@nfsslb ~]# yum -y install ipvsadm
[root@nfsslb ~]# vi ipvsadm.sh
#!/bin/sh
#****************************************************************#
# ScriptName: ipvsadm.sh
# Author: xhy
# Create Date: 2018-10-28 02:40
# Modify Author: xhy
# Modify Date: 2018-10-28 02:40
# Version:
#***************************************************************#
sudo ifconfig eth0:0 172.24.8.200 broadcast 172.24.8.200 netmask 255.255.255.255 up
sudo route add -host 172.24.8.200 dev eth0:0
sudo echo "1" > /proc/sys/net/ipv4/ip_forward
sudo ipvsadm -C
sudo ipvsadm -A -t 172.24.8.200:80 -s rr
sudo ipvsadm -a -t 172.24.8.200:80 -r 172.24.8.111:80 -g
sudo ipvsadm -a -t 172.24.8.200:80 -r 172.24.8.112:80 -g
sudo ipvsadm
sudo sysctl -p
[root@nfsslb ~]# chmod u+x ipvsadm.sh
[root@nfsslb ~]# echo "source /root/ipvsadm.sh" >> /etc/rc.local #run at boot
[root@nfsslb ~]# ./ipvsadm.sh
Command explanation:
ipvsadm -A -t 172.24.8.200:80 -s rr -p 600
Adds a virtual server with address 172.24.8.200 to the kernel's virtual server table, with service port 80, round-robin (rr) scheduling, and a 600-second persistence timeout for each Real Server.
ipvsadm -a -t 172.24.8.200:80 -r 172.24.8.111:80 -g
Adds a Real Server record to the virtual server at 172.24.8.200, with the virtual server working in direct-routing (DR) mode.
Note: apply the same configuration on slb02; for more on LVS see https://www.cnblogs.com/itzgr/category/1367969.html.
2.13 Configure the VIP on the Harbor nodes
root@docker01:~# vi /etc/init.d/lvsrs
#!/bin/bash
# description:Script to start LVS DR real server.
#
. /etc/rc.d/init.d/functions
VIP=172.24.8.200
#change to the appropriate VIP
case "$1" in
start)
#Start the LVS-DR real server on this machine and suppress ARP replies for the VIP.
echo "Start LVS of Real Server!"
/sbin/ifconfig lo down
/sbin/ifconfig lo up
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
/sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev lo:0
sudo sysctl -p
;;
stop)
#Stop the LVS-DR real server loopback device(s).
echo "Close LVS Director Server!"
/sbin/ifconfig lo:0 down
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
sudo sysctl -p
;;
status)
# Status of LVS-DR real server.
islothere=`/sbin/ifconfig lo:0 | grep $VIP`
isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
if [ ! "$islothere" -o ! "$isrothere" ];then
# Either the route or the lo:0 device
# not found.
echo "LVS-DR real server Stopped!"
else
echo "LVS-DR real server Running..."
fi
;;
*)
# Invalid entry.
echo "$0: Usage: $0 {start|status|stop}"
exit 1
;;
esac
root@docker01:~# chmod u+x /etc/init.d/lvsrs
root@docker02:~# chmod u+x /etc/init.d/lvsrs
2.14 Start the services
root@docker01:~# service lvsrs start
root@docker02:~# service lvsrs start
[root@nfsslb ~]# systemctl start keepalived.service
[root@nfsslb ~]# systemctl enable keepalived.service
[root@slb02 ~]# systemctl start keepalived.service
[root@slb02 ~]# systemctl enable keepalived.service
2.15 Verify
root@docker01:~# ip addr #verify that the VIP is active on docker01/02 and the slb nodes
3 Testing and Verification
root@docker04:~# vi /etc/hosts
172.24.8.200 reg.harbor.com
root@docker04:~# vi /etc/docker/daemon.json
{
"insecure-registries": ["http://reg.harbor.com"]
}
root@docker04:~# systemctl restart docker.service
If the registry uses a certificate issued by a trusted CA, disable the corresponding insecure-registries setting in daemon.json instead.
root@docker04:~# docker login reg.harbor.com #log in to the registry
Username: admin
Password: Harbor12345
Note: a public project allows anonymous pulls, but pushing always requires login; a private project requires login for both pull and push.
root@docker04:~# docker pull hello-world
root@docker04:~# docker tag hello-world:latest reg.harbor.com/library/hello-world:xhy
root@docker04:~# docker push reg.harbor.com/library/hello-world:xhy
Note: the new tag must reference an existing project, and the user must have the corresponding permissions.
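The tag above follows Harbor's registry/project/image:tag naming convention. A tiny sketch (the helper name is mine) of how the pieces compose:

```shell
#!/bin/sh
# registry_tag REGISTRY PROJECT IMAGE TAG: print the fully qualified image
# reference expected by a Harbor registry (registry/project/image:tag).
registry_tag() {
    printf '%s/%s/%s:%s\n' "$1" "$2" "$3" "$4"
}

registry_tag reg.harbor.com library hello-world xhy
```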
Browse to https://reg.harbor.com and log in with the default account admin/Harbor12345.
Reference: https://www.cnblogs.com/breezey/p/9444231.html