- 1. Multi-master deployment: adding master02
- 2. Load balancer deployment + keepalived high availability (192.168.80.14/15)
- 3. Update the kubeconfig files on the worker nodes
- 4. Operations on the master01 node
Continuing from the previous post: single-master deployment.
1. Multi-master deployment: adding master02
1.1 Copy the certificates, configuration files, and service unit files from master01 to master02
scp -r /opt/etcd/ root@192.168.80.16:/opt/
scp -r /opt/kubernetes/ root@192.168.80.16:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.80.16:/usr/lib/systemd/system/
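A quick check (a sketch; the directory layout follows the single-master post) that everything arrived on master02:
ls /opt/etcd/ /opt/kubernetes/
ls /usr/lib/systemd/system/ | grep kube    # the three unit files should be listed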
1.2 Modify the IPs in the kube-apiserver configuration file
vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.80.11:2379,https://192.168.80.12:2379,https://192.168.80.13:2379 \
--bind-address=192.168.80.16 \          # change to master02's IP
--secure-port=6443 \
--advertise-address=192.168.80.16 \     # change to master02's IP
......
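The same two fields can also be changed non-interactively; a minimal sketch, assuming master01's address is 192.168.80.11 as in the etcd-servers list above:
cd /opt/kubernetes/cfg/
sed -i 's/bind-address=192.168.80.11/bind-address=192.168.80.16/' kube-apiserver
sed -i 's/advertise-address=192.168.80.11/advertise-address=192.168.80.16/' kube-apiserver
grep -E 'bind-address|advertise-address' kube-apiserver    # both lines should now show 192.168.80.16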
1.3 Start each service on master02 and enable it at boot
systemctl enable --now kube-apiserver.service
systemctl enable --now kube-controller-manager.service
systemctl enable --now kube-scheduler.service
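A quick status check (a sketch) to confirm all three control-plane services came up:
systemctl is-active kube-apiserver kube-controller-manager kube-scheduler    # each line should print "active"
netstat -natp | grep 6443    # the apiserver should be listening on the secure port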
1.4 Check the node status
ln -s /opt/kubernetes/bin/* /usr/local/bin/
kubectl get nodes
kubectl get nodes -o wide    # -o wide: print extra columns; for Pods, this includes the name of the Node each Pod runs on
// The node status that master02 shows here is only the information read from etcd; the nodes have not actually established a communication channel with master02 yet, so a VIP is needed to associate the nodes with both master nodes.
2. Load balancer deployment + keepalived high availability (192.168.80.14/15)
2.1 Configure the official nginx online yum repository
cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF
yum install nginx -y
2.2 Modify the nginx configuration file to add layer-4 reverse-proxy load balancing
## Point at the two k8s master node IPs on port 6443
vim /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}

# add the following stream block
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.80.11:6443;
        server 192.168.80.16:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
......
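The stream block is only parsed if nginx was built with the stream module; the nginx.org package normally ships with it, which can be confirmed with (a sketch):
nginx -V 2>&1 | grep -o with-stream    # any output means layer-4 proxying is available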
2.3 Check the configuration syntax and start the nginx service
1. Check the configuration file syntax
nginx -t
2. Start nginx and confirm port 6443 is being listened on
systemctl enable --now nginx
netstat -natp | grep nginx
2.4 Deploy the keepalived service
yum install keepalived -y
2.5 Modify the keepalived configuration file (and write an extra health-check script)
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   # addresses that receive notification mail
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address for notification mail
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER      # NGINX_MASTER on the lb01 node, NGINX_BACKUP on lb02
}

# add a script that is executed periodically
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    # path of the script that checks whether nginx is alive
}

vrrp_instance VI_1 {
    state MASTER                # MASTER on the lb01 node, BACKUP on lb02
    interface ens33             # name of the network interface
    virtual_router_id 51        # VRID; must be identical on both nodes
    priority 100                # 100 on the lb01 node, 90 on lb02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.100          # the VIP
    }
    track_script {
        check_nginx             # reference the script defined in vrrp_script
    }
}
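As the comments above note, only three values differ on lb02; a sketch of the lines to change there, everything else staying identical:
router_id NGINX_BACKUP    # in global_defs
state BACKUP              # in vrrp_instance VI_1
priority 90               # in vrrp_instance VI_1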
==========================================================
## Create the nginx status-check script
vim /etc/nginx/check_nginx.sh
#!/bin/bash
# egrep -cv "grep|$$" counts the nginx processes while filtering out
# the grep itself and the current shell's PID ($$)
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
chmod +x /etc/nginx/check_nginx.sh
2.6 Start the keepalived service (nginx must be started first, then keepalived)
systemctl enable --now nginx
systemctl enable --now keepalived
ip a    # check that the VIP has been created
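A quick failover test (a sketch): stopping nginx on lb01 should make the check script stop keepalived, letting the VIP drift to lb02.
systemctl stop nginx     # on lb01
ip a show ens33          # on lb01: 192.168.80.100 should be gone
ip a show ens33          # on lb02: the VIP should now be bound here
systemctl start nginx && systemctl start keepalived    # restore lb01: nginx first, then keepalived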
3. Update the kubeconfig files on the worker nodes
// Change the server entry in bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig to point at the VIP
cd /opt/kubernetes/cfg/
vim bootstrap.kubeconfig
server: https://192.168.80.100:6443
vim kubelet.kubeconfig
server: https://192.168.80.100:6443
vim kube-proxy.kubeconfig
server: https://192.168.80.100:6443
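The three edits can also be done in one pass; a sketch, assuming the old server lines pointed at master01 (192.168.80.11):
sed -i 's/192.168.80.11:6443/192.168.80.100:6443/' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
grep server: *.kubeconfig    # all three files should now show the VIP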
// Restart the kubelet and kube-proxy services
systemctl restart kubelet.service
systemctl restart kube-proxy.service
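On lb01, the stream access log configured earlier should now show the nodes reaching the apiservers through the VIP (a sketch):
tail /var/log/nginx/k8s-access.log    # each entry pairs a node address with one of the two upstream masters on port 6443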
4. Operations on the master01 node
1. Create a test Pod
kubectl create deployment nginx-test --image=nginx
2. Check the Pod's status information
kubectl get pod
kubectl get pods -o wide
3. On a node in the corresponding network segment, the Pod can be accessed directly with a browser or the curl command
curl 172.17.47.2
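The Pod IP comes from the kubectl get pods -o wide output above; with the stock nginx image, the default welcome page should come back:
curl -s 172.17.47.2 | grep '<title>'    # expect: <title>Welcome to nginx!</title>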
4. At this point, viewing the nginx logs from master01 fails with a permission error: kubectl logs is proxied through the kubelet's nodes/proxy subresource, and the request reaches the kubelet without credentials it recognizes, so it is treated as the user system:anonymous, which has no RBAC permission for that resource
kubectl logs nginx-test-7d965f56df-q8qlp
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) (nginx-test-7d965f56df-q8qlp)
5. On master01, grant the cluster-admin role to the user system:anonymous
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
6. View the nginx logs again
kubectl logs nginx-test-7d965f56df-q8qlp