K8S-Demo Cluster Practice: Deploying the kube-proxy Component in ipvs Mode
- kube-proxy runs on all worker nodes. It watches the apiserver for changes to Services and Endpoints, and creates routing rules to provide service IPs and load balancing
I. Create and distribute the kube-proxy kubeconfig file
[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# kubectl config set-cluster k8s-demo \
--certificate-authority=/opt/install/cert/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
[root@master1 kubeconfig]# kubectl config set-credentials k8s-demo-kube-proxy \
--client-certificate=/opt/install/cert/kube-proxy.pem \
--client-key=/opt/install/cert/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
[root@master1 kubeconfig]# kubectl config set-context default \
--cluster=k8s-demo \
--user=k8s-demo-kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
[root@master1 kubeconfig]# for node_name in ${ALL_NAMES[@]}
do
echo ">>> ${node_name}"
scp kube-proxy.kubeconfig root@${node_name}:/opt/k8s/etc/
done
II. Create the kube-proxy configuration file
1. Create the template file kube-proxy-config.yaml.template
[root@master1 kubeconfig]# cat > kube-proxy-config.yaml.template <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/opt/k8s/etc/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
metricsBindAddress: ##NODE_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##NODE_NAME##
mode: "ipvs"
portRange: ""
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF
- hostnameOverride: must be identical to the value used by kubelet; otherwise kube-proxy cannot find its Node after starting and will not create any ipvs rules
- clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic inside the cluster from traffic outside it; only when --cluster-cidr or --masquerade-all is specified will kube-proxy SNAT requests to Service IPs
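For reference, assuming a node named node1 with IP 192.168.0.11 and a cluster CIDR of 172.30.0.0/16 (illustrative values only, not necessarily this cluster's), the rendered kube-proxy-config.yaml would look like:

```yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/opt/k8s/etc/kube-proxy.kubeconfig"
  qps: 100
bindAddress: 192.168.0.11
healthzBindAddress: 192.168.0.11:10256
metricsBindAddress: 192.168.0.11:10249
enableProfiling: true
clusterCIDR: 172.30.0.0/16      # example value of ${CLUSTER_CIDR}
hostnameOverride: node1         # must match the kubelet's node name
mode: "ipvs"
portRange: ""
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
```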
2. Create and distribute the kube-proxy configuration files
[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# for (( i=0; i < ${#ALL_NAMES[@]}; i++ ))
do
echo ">>> ${ALL_NAMES[i]}"
sed -e "s/##NODE_NAME##/${ALL_NAMES[i]}/" -e "s/##NODE_IP##/${ALL_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${ALL_NAMES[i]}.yaml
scp kube-proxy-config-${ALL_NAMES[i]}.yaml root@${ALL_NAMES[i]}:/opt/k8s/etc/kube-proxy-config.yaml
done
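The sed substitution above can be sanity-checked locally before distributing. A minimal self-contained sketch (the template, node name, and IP here are illustrative, not the real cluster's values):

```shell
# Demonstrate the ##NODE_NAME## / ##NODE_IP## placeholder substitution
# on a tiny throwaway template.
cat > /tmp/demo-kube-proxy.template <<'EOF'
bindAddress: ##NODE_IP##
hostnameOverride: ##NODE_NAME##
EOF
# Render it for one hypothetical node, exactly as in the loop above.
sed -e "s/##NODE_NAME##/node1/" -e "s/##NODE_IP##/192.168.0.11/" \
    /tmp/demo-kube-proxy.template > /tmp/demo-kube-proxy.yaml
cat /tmp/demo-kube-proxy.yaml
```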
III. Create and distribute the kube-proxy systemd unit
[root@master1 ~]# cd /opt/install/service
[root@master1 service]# cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
--config=/opt/k8s/etc/kube-proxy-config.yaml \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
[root@master1 service]# for node_name in ${ALL_NAMES[@]}
do
echo ">>> ${node_name}"
scp kube-proxy.service root@${node_name}:/etc/systemd/system/
done
IV. Start and verify the kube-proxy service on each node
- Create the RBAC binding; the CN in the kube-proxy.pem certificate is k8s-demo-kube-proxy
[root@master1 ~]# kubectl create clusterrolebinding k8s-demo-cluster-proxy-binding --clusterrole=system:node-proxier --user=k8s-demo-kube-proxy
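The kubectl command above is equivalent to applying the following ClusterRoleBinding manifest (shown for reference):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-demo-cluster-proxy-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier    # built-in role granting the API access kube-proxy needs
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-demo-kube-proxy    # must match the CN in kube-proxy.pem
```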
[root@master1 ~]# for node_ip in ${ALL_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "modprobe ip_vs_rr"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
done
[root@master1 ~]# for node_ip in ${ALL_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl status kube-proxy|grep Active"
ssh root@${node_ip} "ss -lnpt | grep kube-proxy"
done
- If the status is not active (running), check the logs to find the cause:
[root@node1 ~]# journalctl -u kube-proxy
[root@master1 ~]# for node_ip in ${ALL_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
done
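On a healthy node, the ipvsadm output should show at least the kubernetes Service VIP (the first IP of the service CIDR) using the rr scheduler and forwarding to the apiserver endpoints. An illustrative example (the addresses are placeholders, not this cluster's actual values):

```
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.0.11:6443            Masq    1      0          0
```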
- Get it running first; learn k8s through hands-on practice, and understanding comes naturally as experience accumulates
- Share what you have understood: sow your own field of blessings, and you reap your own good fortune
- Pursue simplicity so things are easy to understand; the context of knowledge, such as versions and dates, is part of the knowledge itself
- Comments and questions are welcome; I generally reply and update the document on weekends
- Jason@vip.qq.com 2021-1-21