Binary Installation of a Multi-Master Kubernetes Cluster
1. Environment Planning
1.1 Lab Environment Planning
K8s cluster role | IP | Hostname | Installed components |
---|---|---|---|
Control node | 192.168.40.180 | k8s-master1 | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx |
Control node | 192.168.40.181 | k8s-master2 | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx |
Control node | 192.168.40.182 | k8s-master3 | apiserver, controller-manager, scheduler, etcd, docker |
Worker node | 192.168.40.183 | k8s-node1 | kubelet, kube-proxy, docker, calico, coredns |
VIP | 192.168.40.199 | | |
Lab environment:
- OS: CentOS 7.6
- Specs: 4 GiB RAM / 4 vCPU / 100 GB disk
- Network: VMware NAT mode
k8s network planning:
- k8s version: v1.20.7
- Pod CIDR: 10.0.0.0/16
- Service CIDR: 10.255.0.0/16
1.2 kubeadm vs. Binary Installation
1.2.1 Installing with kubeadm
1) kubeadm is the official open-source tool for standing up a kubernetes cluster quickly, and is currently the convenient, recommended approach. The two commands kubeadm init and kubeadm join create a kubernetes cluster quickly. When kubeadm initializes k8s, every component runs as a pod and therefore recovers from failures on its own.
2) kubeadm is a tool that builds the cluster for you; in effect a script installs the cluster automatically. It simplifies deployment, but the automation hides many details, so you get little exposure to the individual components. Without a solid grasp of the k8s architecture, problems become hard to troubleshoot.
3) kubeadm suits scenarios where k8s is deployed frequently or a high degree of automation is required.
1.2.2 Binary Installation
Binary: download each component's binary package from the official site. Installing by hand also builds a more complete understanding of kubernetes.
Both kubeadm and binary installs are suitable for production and run stably there; which to choose can be evaluated per project.
1.3 Multi-Master k8s Architecture
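In outline, per the component table in 1.1: nginx on k8s-master1 and k8s-master2 load-balances TCP traffic across the three kube-apiservers, and keepalived floats the VIP 192.168.40.199 between those two hosts for failover; the worker node and kubectl reach the control plane through the VIP.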
1.4 Node Initialization
1) Configure a static IP address
# Give the VMs or physical machines static IP addresses so the address does not change after a reboot. Using the master1 host as an example:
~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.40.180 # adjust per the lab plan
NETMASK=255.255.255.0
GATEWAY=192.168.40.2
DNS1=223.5.5.5
# Restart the network
~]# systemctl restart network
# Test network connectivity
~]# ping baidu.com
PING baidu.com (39.156.69.79) 56(84) bytes of data.
64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=1 ttl=128 time=63.2 ms
64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=2 ttl=128 time=47.3 ms
2) Configure the hostname
hostnamectl set-hostname <hostname> && bash
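For the hosts in this plan, that is:
hostnamectl set-hostname k8s-master1 && bash   # on 192.168.40.180
hostnamectl set-hostname k8s-master2 && bash   # on 192.168.40.181
hostnamectl set-hostname k8s-master3 && bash   # on 192.168.40.182
hostnamectl set-hostname k8s-node1 && bash     # on 192.168.40.183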
3) Configure the hosts file
# On all machines
cat >> /etc/hosts << EOF
192.168.40.180 k8s-master1
192.168.40.181 k8s-master2
192.168.40.182 k8s-master3
192.168.40.183 k8s-node1
EOF
# Test
~]# ping k8s-master1
PING k8s-master1 (192.168.40.180) 56(84) bytes of data.
64 bytes from k8s-master1 (192.168.40.180): icmp_seq=1 ttl=64 time=0.015 ms
64 bytes from k8s-master1 (192.168.40.180): icmp_seq=2 ttl=64 time=0.047 ms
4) Configure passwordless SSH between hosts
# Generate an SSH key pair; press Enter through every prompt and set no passphrase
ssh-keygen -t rsa
# Install the local SSH public key into the matching account on each remote host
ssh-copy-id -i .ssh/id_rsa.pub k8s-master1
ssh-copy-id -i .ssh/id_rsa.pub k8s-master2
ssh-copy-id -i .ssh/id_rsa.pub k8s-master3
ssh-copy-id -i .ssh/id_rsa.pub k8s-node1
5) Stop and disable firewalld
systemctl stop firewalld ; systemctl disable firewalld
6) Disable SELinux
# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Verify
getenforce
7) Disable swap
# Disable temporarily
swapoff -a
# Disable permanently: comment out the swap line in /etc/fstab
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Note: on a cloned VM, also delete the UUID line
8) Tune kernel parameters
# 1. Load the br_netfilter module
modprobe br_netfilter
# 2. Verify the module loaded
lsmod |grep br_netfilter
# 3. Set the kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# 4. Apply the new kernel parameters
sysctl -p /etc/sysctl.d/k8s.conf
Q1: What does sysctl do?
# Configures kernel parameters at runtime
-p Load settings from the specified file; if none is given, /etc/sysctl.conf is used
Q2: Why run modprobe br_netfilter?
After editing /etc/sysctl.d/k8s.conf to add these three parameters:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# sysctl -p /etc/sysctl.d/k8s.conf then fails with:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
# Fix:
modprobe br_netfilter
Q3: Why enable the net.bridge.bridge-nf-call-iptables kernel parameter?
# After installing Docker on CentOS, docker info prints these warnings:
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
# Fix:
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Q4: Why set net.ipv4.ip_forward = 1?
If kubeadm fails during k8s initialization with an error about this parameter, ip_forward is not enabled and must be turned on.
# net.ipv4.ip_forward controls packet forwarding:
1) For security, Linux disables packet forwarding by default. Forwarding means that when a host has more than one NIC and one of them receives a packet, the host sends the packet out another NIC according to the packet's destination IP, and that NIC forwards it on per the routing table; this is normally a router's job.
2) To give a Linux system routing/forwarding capability, set the kernel parameter net.ipv4.ip_forward. It reports the system's current support for forwarding: 0 means IP forwarding is disabled; 1 means it is enabled.
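To check the current value without editing anything:
~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
~]# cat /proc/sys/net/ipv4/ip_forward
1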
9) Configure the Aliyun repo mirror
# Back up
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Download the new CentOS-Base.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Rebuild the cache
yum clean all && yum makecache
10) Configure time synchronization
# Install the ntpdate command
yum install ntpdate -y
# Sync against a public time source
ntpdate cn.pool.ntp.org
# Make the time sync a scheduled (hourly) cron job
crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
# Restart the crond service
service crond restart
11) Install iptables
# Install iptables
yum install iptables-services -y
# Stop and disable iptables
service iptables stop && systemctl disable iptables
# Flush the firewall rules
iptables -F
12) Enable IPVS
Without IPVS, kube-proxy falls back to iptables for packet forwarding, which is less efficient; the official docs therefore recommend enabling IPVS.
# Create the ipvs.modules file
~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  # only load the module if modinfo can find it
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
# Run the script
~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp 13079 0
nf_nat 26787 1 ip_vs_ftp
ip_vs_sed 12519 0
ip_vs_nq 12516 0
ip_vs_sh 12688 0
ip_vs_dh 12688 0
ip_vs_lblcr 12922 0
ip_vs_lblc 12819 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs_wlc 12519 0
ip_vs_lc 12516 0
ip_vs 141092 22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack 133387 2 ip_vs,nf_nat
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
13) Install base packages
~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet rsync
14) Install docker-ce
~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
~]# yum install docker-ce docker-ce-cli containerd.io -y
~]# systemctl start docker && systemctl enable docker.service && systemctl status docker
15) Configure Docker registry mirrors
# Note: switch the Docker cgroup driver to systemd (the default is cgroupfs); the kubelet in this deployment uses systemd, and the two must match
~]# tee /etc/docker/daemon.json << 'EOF'
{
"registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
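Verify that the cgroup driver really switched to systemd (output trimmed):
~]# docker info | grep -i cgroup
 Cgroup Driver: systemd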
2. Deploying the etcd Cluster
2.1 Prepare the Certificate Tooling
Run on master1, master2, and master3:
# 1. Create the etcd working directories for config and certificate files
[root@k8s-master1 ~]# mkdir -p /etc/etcd /etc/etcd/ssl
[root@k8s-master2 ~]# mkdir -p /etc/etcd /etc/etcd/ssl
[root@k8s-master3 ~]# mkdir -p /etc/etcd /etc/etcd/ssl
# 2. Install the cfssl certificate-signing tools on master1
[root@k8s-master1 ~]# mkdir /data/work -p && cd /data/work
[root@k8s-master1 work]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master1 work]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master1 work]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master1 work]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@k8s-master1 work]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master1 work]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master1 work]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
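As a quick sanity check that the tools are on the PATH (the R1.2 binaries report version 1.2.0):
[root@k8s-master1 work]# cfssl version
Version: 1.2.0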
2.2 Generate the Certificates
1) Create the CA certificate signing request (CSR) file
[root@k8s-master1 work]# vim ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "k8s",
"OU": "system"
}
],
"ca": {
"expiry": "87600h"
}
}
[root@k8s-master1 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2021/07/07 13:24:29 [INFO] generating a new CA key and certificate from CSR
2021/07/07 13:24:29 [INFO] generate received request
2021/07/07 13:24:29 [INFO] received CSR
2021/07/07 13:24:29 [INFO] generating key: rsa-2048
2021/07/07 13:24:29 [INFO] encoded CSR
2021/07/07 13:24:29 [INFO] signed certificate with serial number 363113506681999715134277718708382917741339875304
[root@k8s-master1 work]# ll
total 16
-rw-r--r-- 1 root root 997 Jul 7 13:24 ca.csr
-rw-r--r-- 1 root root 252 Jul 7 13:24 ca-csr.json
-rw------- 1 root root 1675 Jul 7 13:24 ca-key.pem
-rw-r--r-- 1 root root 1346 Jul 7 13:24 ca.pem
Field reference for the ca-csr.json request file:
# CN field
Common Name. kube-apiserver extracts this field from a certificate as the requesting User Name; browsers use it to check whether a website is legitimate. For an SSL certificate it is usually the site's domain name; for a code-signing certificate, the applicant organization; for a client certificate, the applicant's name.
# O field
Organization. kube-apiserver extracts this field as the Group the requesting user belongs to. For an SSL certificate it is usually the site's domain name; for a code-signing certificate, the applicant organization; for a client certificate, the applicant's organization.
# L field
City
# ST field
Province/state
# C field
Two-letter country code only, e.g. CN for China
2) Generate the CA certificate
[root@k8s-master1 work]# vim ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
3) Generate the etcd certificate
# etcd CSR; change the hosts IPs to your own etcd node IPs, and reserve a few extras for future scale-out
[root@k8s-master1 work]# vim etcd-csr.json
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.40.180",
"192.168.40.181",
"192.168.40.182",
"192.168.40.199"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "k8s",
"OU": "system"
}]
}
[root@k8s-master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@k8s-master1 work]# ls etcd*.pem
etcd-key.pem etcd.pem
2.3 Deploy the etcd Cluster
etcd downloads: https://github.com/etcd-io/etcd/releases/
1) Download and upload the package
# Upload etcd-v3.4.13-linux-amd64.tar.gz to /data/work
[root@k8s-master1 work]# pwd
/data/work
[root@k8s-master1 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@k8s-master1 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
# Copy etcd and etcdctl to the other machines
[root@k8s-master1 work]# scp -r etcd-v3.4.13-linux-amd64/etcd* k8s-master2:/usr/local/bin/
[root@k8s-master1 work]# scp -r etcd-v3.4.13-linux-amd64/etcd* k8s-master3:/usr/local/bin/
2) Create the configuration files
[root@k8s-master1 work]# vim /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.40.180:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.40.180:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.40.180:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.40.180:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.40.180:2380,etcd2=https://192.168.40.181:2380,etcd3=https://192.168.40.182:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-master2 ~]# vim /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.40.181:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.40.181:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.40.181:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.40.181:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.40.180:2380,etcd2=https://192.168.40.181:2380,etcd3=https://192.168.40.182:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-master3 ~]# vim /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.40.182:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.40.182:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.40.182:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.40.182:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.40.180:2380,etcd2=https://192.168.40.181:2380,etcd3=https://192.168.40.182:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Configuration notes:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer listen address for cluster traffic
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
ETCD_INITIAL_CLUSTER: addresses of the cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one
3) Create the systemd unit file
[root@k8s-master1 work]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-client-cert-auth \
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
# Copy to the other nodes
[root@k8s-master1 work]# for i in k8s-master2 k8s-master3;do rsync -vaz /usr/lib/systemd/system/etcd.service $i:/usr/lib/systemd/system/;done
4) Copy the certificates
[root@k8s-master1 work]# cp ca*.pem etcd*.pem /etc/etcd/ssl/
# Copy the certificates to the other nodes
[root@k8s-master1 work]# for i in k8s-master2 k8s-master3;do rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/;done
5) Create the data directories
[root@k8s-master1 work]# mkdir -p /var/lib/etcd/default.etcd
[root@k8s-master2 work]# mkdir -p /var/lib/etcd/default.etcd
[root@k8s-master3 work]# mkdir -p /var/lib/etcd/default.etcd
6) Start etcd
# Start etcd on k8s-master1 first; it will hang in the starting state until etcd on k8s-master2 is started, after which the k8s-master1 member comes up normally
[root@k8s-master1 work]# systemctl daemon-reload && systemctl enable etcd && systemctl start etcd && systemctl status etcd
[root@k8s-master2 work]# systemctl daemon-reload && systemctl enable etcd && systemctl start etcd && systemctl status etcd
[root@k8s-master3 work]# systemctl daemon-reload && systemctl enable etcd && systemctl start etcd && systemctl status etcd
7) Check the cluster status
[root@k8s-master1 work]# export ETCDCTL_API=3
[root@k8s-master1 work]# /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.40.180:2379,https://192.168.40.181:2379,https://192.168.40.182:2379 endpoint health
+-----------------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.40.180:2379 | true | 11.456419ms | |
| https://192.168.40.182:2379 | true | 12.759217ms | |
| https://192.168.40.181:2379 | true | 21.141684ms | |
+-----------------------------+--------+-------------+-------+
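For another view of the cluster, etcdctl can also list the members; the member IDs will differ per environment, but etcd1, etcd2, and etcd3 should all show as started:
[root@k8s-master1 work]# /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.40.180:2379 member list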
3. Deploying the Kubernetes Components
3.1 Download the Packages
Download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/
# Upload kubernetes-server-linux-amd64.tar.gz to /data/work on k8s-master1:
[root@k8s-master1 work]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master1 work]# cd kubernetes/server/bin/
[root@k8s-master1 bin]# ll
total 986524
-rwxr-xr-x 1 root root 46678016 May 12 20:51 apiextensions-apiserver
-rwxr-xr-x 1 root root 39215104 May 12 20:51 kubeadm
-rwxr-xr-x 1 root root 44675072 May 12 20:51 kube-aggregator
-rwxr-xr-x 1 root root 118210560 May 12 20:51 kube-apiserver
-rw-r--r-- 1 root root 8 May 12 20:50 kube-apiserver.docker_tag
-rw------- 1 root root 123026944 May 12 20:50 kube-apiserver.tar
-rwxr-xr-x 1 root root 112746496 May 12 20:51 kube-controller-manager
-rw-r--r-- 1 root root 8 May 12 20:50 kube-controller-manager.docker_tag
-rw------- 1 root root 117562880 May 12 20:50 kube-controller-manager.tar
-rwxr-xr-x 1 root root 40226816 May 12 20:51 kubectl
-rwxr-xr-x 1 root root 114097256 May 12 20:51 kubelet
-rwxr-xr-x 1 root root 39481344 May 12 20:51 kube-proxy
-rw-r--r-- 1 root root 8 May 12 20:50 kube-proxy.docker_tag
-rw------- 1 root root 120374784 May 12 20:50 kube-proxy.tar
-rwxr-xr-x 1 root root 43716608 May 12 20:51 kube-scheduler
-rw-r--r-- 1 root root 8 May 12 20:50 kube-scheduler.docker_tag
-rw------- 1 root root 48532992 May 12 20:50 kube-scheduler.tar
-rwxr-xr-x 1 root root 1634304 May 12 20:51 mounter
[root@k8s-master1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
# Copy kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl to master2 and master3
[root@k8s-master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master2:/usr/local/bin/
[root@k8s-master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master3:/usr/local/bin/
# Copy kubelet and kube-proxy to node1
[root@k8s-master1 bin]# scp kubelet kube-proxy k8s-node1:/usr/local/bin/
# Create the working directories
[root@k8s-master1 work]# mkdir -p /etc/kubernetes/
[root@k8s-master1 work]# mkdir -p /etc/kubernetes/ssl
[root@k8s-master1 work]# mkdir /var/log/kubernetes
3.2 Deploy the apiserver
3.2.1 The TLS Bootstrapping Mechanism
The TLS bootstrapping mechanism:
1) Once the master apiserver enables TLS authentication, each node's kubelet must present a valid certificate signed by the apiserver's CA to communicate with the apiserver. With many nodes, issuing these client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privileged user, and the apiserver signs the kubelet's certificate dynamically.
2) A bootstrap is a program found in many systems, e.g. the Linux bootstrap; it is generally pre-configured and loaded at power-on or system start, and can be used to bring up a specified environment.
3) The Kubernetes kubelet can likewise load such a configuration file on startup; its content looks like this:
apiVersion: v1
clusters: null
contexts:
- context:
cluster: kubernetes
user: kubelet-bootstrap
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
user: {}
The TLS bootstrapping flow in detail:
# What TLS provides
TLS encrypts the communication and defeats man-in-the-middle eavesdropping. If the certificate is not trusted, no connection to the apiserver can be established at all, let alone any question of whether the request is authorized.
# What RBAC provides
With communication secured by TLS, authorization is handled by RBAC (other models such as ABAC can be used). RBAC declares which APIs a user or group (subject) may call. Paired with TLS, the apiserver in fact reads the client certificate's CN field as the username and the O field as the group.
# Summary
1) To talk to the apiserver you must use a certificate signed by the apiserver CA; only then is trust established and a TLS connection possible.
2) The certificate's CN and O fields supply the user and group that RBAC needs.
The kubelet's first start
# The problem
TLS bootstrapping exists so the kubelet can ask the apiserver for a certificate and then use it to connect; but on the very first start there is no certificate yet, so how does the kubelet connect to the apiserver?
# The flow
The apiserver configuration points at a token.csv file containing a preset user. That user's token, together with the apiserver CA certificate, is written into the bootstrap.kubeconfig used by the kubelet. On the first request, the kubelet uses the CA material in bootstrap.kubeconfig to establish trusted TLS communication with the apiserver, and presents the user token from bootstrap.kubeconfig to declare its RBAC identity.
token.csv format: token,username,UID,group
3940fd7fbb391d1b4d861ad17a1f0613,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
On first start the kubelet may report 401 Unauthorized against the apiserver. This is because, by default, the kubelet declares its identity with the preset token in bootstrap.kubeconfig and then creates a CSR; but remember that, unless we do something about it, this user has no permissions at all, including the permission to create CSR requests. A ClusterRoleBinding must therefore be created that binds the preset user kubelet-bootstrap to the built-in ClusterRole system:node-bootstrapper, allowing it to submit CSR requests.
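That binding is created when the kubelet is deployed later; for reference, it looks like the following (kubelet-bootstrap matches the username in token.csv, and system:node-bootstrapper is a built-in ClusterRole):
[root@k8s-master1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap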
3.2.2 Generate the Certificates
1) Create the token.csv file
# Format: token,username,UID,group
[root@k8s-master1 work]# cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
[root@k8s-master1 work]# cat token.csv
b0937520a8a36f99ea6bc95e67d77740,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
2) Create the CSR file
# Note: if the hosts field is non-empty, it must list every IP or domain name authorized to use this certificate. Since the certificate will be used by the whole kubernetes master cluster, include the IPs of all master nodes as well as the first IP of the service network (generally the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.255.0.1).
[root@k8s-master1 work]# vim kube-apiserver-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.40.180",
"192.168.40.181",
"192.168.40.182",
"192.168.40.183",
"192.168.40.199",
"10.255.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "k8s",
"OU": "system"
}
]
}
# Generate the certificate
[root@k8s-master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
[root@k8s-master1 work]# ll kube-apiserver*
-rw-r--r-- 1 root root 1269 Jul 7 15:40 kube-apiserver.csr
-rw-r--r-- 1 root root 522 Jul 7 15:38 kube-apiserver-csr.json
-rw------- 1 root root 1679 Jul 7 15:40 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1635 Jul 7 15:40 kube-apiserver.pem
3.2.3 Deploy the apiserver
1) Create the apiserver configuration file
[root@k8s-master1 work]# vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.40.180 \
--secure-port=6443 \
--advertise-address=192.168.40.180 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.255.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.40.180:2379,https://192.168.40.181:2379,https://192.168.40.182:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"
Parameter notes:
--logtostderr: log to standard error
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to cluster members
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission-control plugins
--authorization-mode: authorization modes; enables RBAC plus node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default NodePort allocation range for Services
--kubelet-client-xxx: client certificates the apiserver uses to reach kubelets
--tls-xxx-file: apiserver HTTPS certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit logging
2) Create the systemd unit file
[root@k8s-master1 work]# vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
3) Copy the files
# Local copies on master1
[root@k8s-master1 work]# cp ca*.pem /etc/kubernetes/ssl
[root@k8s-master1 work]# cp kube-apiserver*.pem /etc/kubernetes/ssl/
[root@k8s-master1 work]# cp token.csv /etc/kubernetes/
[root@k8s-master1 work]# cp kube-apiserver.conf /etc/kubernetes/
[root@k8s-master1 work]# cp kube-apiserver.service /usr/lib/systemd/system/
# Copy token.csv to the other nodes
[root@k8s-master1 work]# rsync -vaz token.csv k8s-master2:/etc/kubernetes/
[root@k8s-master1 work]# rsync -vaz token.csv k8s-master3:/etc/kubernetes/
# Copy the certificates to the other nodes
[root@k8s-master1 work]# rsync -vaz kube-apiserver*.pem k8s-master2:/etc/kubernetes/ssl/
[root@k8s-master1 work]# rsync -vaz kube-apiserver*.pem k8s-master3:/etc/kubernetes/ssl/
[root@k8s-master1 work]# rsync -vaz ca*.pem k8s-master2:/etc/kubernetes/ssl/
[root@k8s-master1 work]# rsync -vaz ca*.pem k8s-master3:/etc/kubernetes/ssl/
# Copy the kube-apiserver config and unit files to the other nodes
[root@k8s-master1 work]# rsync -vaz kube-apiserver.conf k8s-master2:/etc/kubernetes/
[root@k8s-master1 work]# rsync -vaz kube-apiserver.conf k8s-master3:/etc/kubernetes/
[root@k8s-master1 work]# rsync -vaz kube-apiserver.service k8s-master2:/usr/lib/systemd/system/
[root@k8s-master1 work]# rsync -vaz kube-apiserver.service k8s-master3:/usr/lib/systemd/system/
4) Adjust kube-apiserver.conf on k8s-master2 and k8s-master3
[root@k8s-master2 ~]# vim /etc/kubernetes/kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.40.181 \
--secure-port=6443 \
--advertise-address=192.168.40.181 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.255.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.40.180:2379,https://192.168.40.181:2379,https://192.168.40.182:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"
[root@k8s-master3 ~]# vim /etc/kubernetes/kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.40.182 \
--secure-port=6443 \
--advertise-address=192.168.40.182 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.255.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.40.180:2379,https://192.168.40.181:2379,https://192.168.40.182:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"
5) Start the apiserver
[root@k8s-master1 work]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl status kube-apiserver
[root@k8s-master2 work]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl status kube-apiserver
[root@k8s-master3 work]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl status kube-apiserver
6) Test
# A 401 here is the expected state; the request is not yet authenticated
[root@k8s-master3 ~]# curl --insecure https://192.168.40.180:6443/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
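As a sketch of the difference authentication makes, replaying the request with the bootstrap token from token.csv as a Bearer token should get past authentication; any remaining failure is then an RBAC authorization decision rather than a 401:
[root@k8s-master3 ~]# curl --insecure -H "Authorization: Bearer b0937520a8a36f99ea6bc95e67d77740" https://192.168.40.180:6443/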
3.3 Deploy kubectl
3.3.1 How kubectl Works
1) kubectl is the client tool for operating on k8s resources: create, delete, update, get, and so on.
2) How does kubectl know which cluster to connect to? It needs a file such as /etc/kubernetes/admin.conf: kubectl accesses k8s resources according to that file, which records the target cluster and the certificates to use.
3) You can set the KUBECONFIG environment variable; kubectl then loads it automatically to decide which cluster's k8s resources to manage:
~]# export KUBECONFIG=/etc/kubernetes/admin.conf
4) Alternatively, use the method kubeadm prints after initializing a cluster: copy the file into place so that kubectl loads /root/.kube/config when it runs:
~]# cp /etc/kubernetes/admin.conf /root/.kube/config
5) If KUBECONFIG is set, it takes precedence; without it, kubectl falls back to the /root/.kube/config file to decide which cluster to manage.
3.3.2 Deploy kubectl
1) Create the CSR file
[root@k8s-master1 work]# vim admin-csr.json
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "system:masters",
"OU": "system"
}
]
}
Notes on this certificate request:
1) kube-apiserver authorizes client requests (from kubelet, kube-proxy, Pods, etc.) with RBAC. kube-apiserver predefines some RoleBindings for RBAC use; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, granting permission to call every kube-apiserver API. O sets this certificate's Group to system:masters: when the certificate is used against kube-apiserver, authentication passes because the certificate is CA-signed, and because the certificate's group is the pre-authorized system:masters, the holder is granted access to all APIs.
2) The admin certificate is what the administrator's kubeconfig will later be generated from. RBAC is now the generally recommended way to control roles and permissions in kubernetes; kubernetes takes the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly system:masters, or the later kubectl create clusterrolebinding fails.
3) With O set to system:masters, the cluster's built-in cluster-admin ClusterRoleBinding binds the system:masters group to the cluster-admin ClusterRole.
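Once the kubeconfig below is in place, the built-in binding mentioned above can be inspected directly; the GROUPS column should show system:masters bound to the cluster-admin role:
[root@k8s-master1 work]# kubectl get clusterrolebinding cluster-admin -o wide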
2) Generate the certificate
[root@k8s-master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@k8s-master1 work]# ll admin*
-rw-r--r-- 1 root root 1005 Jul 7 16:41 admin.csr
-rw-r--r-- 1 root root 238 Jul 7 16:41 admin-csr.json
-rw------- 1 root root 1675 Jul 7 16:41 admin-key.pem
-rw-r--r-- 1 root root 1391 Jul 7 16:41 admin.pem
[root@k8s-master1 work]# cp admin*.pem /etc/kubernetes/ssl/
3) Configure the security context
Create the kubeconfig file: a kubeconfig is kubectl's configuration file; it contains all the information needed to reach the apiserver, such as the apiserver address, the CA certificate, and the client's own certificate.
# 1. Set the cluster parameters
[root@k8s-master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.40.180:6443 --kubeconfig=kube.config
[root@k8s-master1 work]# vim kube.config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUDVxU29ZZW55bUsvYzRnVzR6Sk1VT2tTRitnd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qRXdOekEzTURVeE9UQXdXaGNOTXpFd056QTFNRFV4T1RBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1LMjNJWnJxNWRXVDdZQmlySmI5Rk1HYmNMaERPcDkKdDZLR05KUFV2YWVJaDlMaEFnWEtZSTJtTmRiTFVnZjVxZklOeGpKVXNhb2tZcHlpeVdNUFpIbWE0ZjV6bVFacwo4NTNiWmkySmtsT3paZllXclo1bmNFcHh3Z2hnWkNVMlovQTJFcDFvYTdManM0b3hWaDk0VnNjNTBvMjVWaTBEClF2ZWM5Qmg0QlRkSzRSSlhjYkpxeDlDSW5pUzFSV1p2eXB5YkJqdkNxZW9UM0xFK1FTcHNBSzJPZXhuM2NWdDkKZGdoeWoxbEhpTjNaejcwQUVVV09SaXhwSHFjTS9WYWJOTFd5amJscUpJM0x2UDRKOFhOUTMxMHBKbXBMT0ZENQpvYkNmVmY5R0FjUDZnTHhBNnRBSldZV0pKeWF1R1hqNDBJL010ZzdtbWhQNEFjT1BsamFMc0NVQ0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRklXTUIrUDNYZXM3WXdhTEZHYXl5YTQzZi9Tek1COEdBMVVkSXdRWU1CYUFGSVdNQitQM1hlczdZd2FMRkdheQp5YTQzZi9Tek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjFsV25ib1B4ZEdYSDVNM3ZYZ1ZOVEdYUURrME1hCnh4UWw4UlZmam1tQVo0OU10cncvZ3Y1UUFRS3paTUU0a2YycXJlWmVaOThVckhPMmNVd08rRXpoeW56ZVRTV0YKZUFLVDl0RjQ1OWQ0L0hnSnFLQVVHWUxnczEzVVlmemdTUDNpK3hDakI5eFZNc0RtdmpVYkVFSlJHSnpBc1Frdwp3MHp5bHBOWTFRU0xnL3hmTmhQZUNRTXY5NGZaWXlBNTFucXdtZHk0bFpFcmUrS3NsS1lsNDlQak9ONmhuUkl1CkMvZG9jM1VvbjV5L1UwNTVpVHdjQTd2U3Jsd0lzNGRoQ2gzSTByR3dLMVY4ekFCVWZHYjJVTVJ3V05WRkVsNzgKUFBhWTdJek1SdkdLbGFYdzB5amh3NmRkbUxVbVVPR0pmRmlzUzdsQWQvZDFRWVZwRU9rMktQSVQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.40.180:6443
name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
# 2. Set the client credentials
[root@k8s-master1 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
# 3. Set the context
[root@k8s-master1 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
# 4. Switch to the context
[root@k8s-master1 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
[root@k8s-master1 work]# mkdir ~/.kube -p
[root@k8s-master1 work]# cp kube.config ~/.kube/config
# 5. Grant the kubernetes certificate access to the kubelet API
[root@k8s-master1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
# 6. Inspect the kubeconfig
[root@k8s-master1 work]# vim ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUDVxU29ZZW55bUsvYzRnVzR6Sk1VT2tTRitnd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qRXdOekEzTURVeE9UQXdXaGNOTXpFd056QTFNRFV4T1RBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1LMjNJWnJxNWRXVDdZQmlySmI5Rk1HYmNMaERPcDkKdDZLR05KUFV2YWVJaDlMaEFnWEtZSTJtTmRiTFVnZjVxZklOeGpKVXNhb2tZcHlpeVdNUFpIbWE0ZjV6bVFacwo4NTNiWmkySmtsT3paZllXclo1bmNFcHh3Z2hnWkNVMlovQTJFcDFvYTdManM0b3hWaDk0VnNjNTBvMjVWaTBEClF2ZWM5Qmg0QlRkSzRSSlhjYkpxeDlDSW5pUzFSV1p2eXB5YkJqdkNxZW9UM0xFK1FTcHNBSzJPZXhuM2NWdDkKZGdoeWoxbEhpTjNaejcwQUVVV09SaXhwSHFjTS9WYWJOTFd5amJscUpJM0x2UDRKOFhOUTMxMHBKbXBMT0ZENQpvYkNmVmY5R0FjUDZnTHhBNnRBSldZV0pKeWF1R1hqNDBJL010ZzdtbWhQNEFjT1BsamFMc0NVQ0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRklXTUIrUDNYZXM3WXdhTEZHYXl5YTQzZi9Tek1COEdBMVVkSXdRWU1CYUFGSVdNQitQM1hlczdZd2FMRkdheQp5YTQzZi9Tek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjFsV25ib1B4ZEdYSDVNM3ZYZ1ZOVEdYUURrME1hCnh4UWw4UlZmam1tQVo0OU10cncvZ3Y1UUFRS3paTUU0a2YycXJlWmVaOThVckhPMmNVd08rRXpoeW56ZVRTV0YKZUFLVDl0RjQ1OWQ0L0hnSnFLQVVHWUxnczEzVVlmemdTUDNpK3hDakI5eFZNc0RtdmpVYkVFSlJHSnpBc1Frdwp3MHp5bHBOWTFRU0xnL3hmTmhQZUNRTXY5NGZaWXlBNTFucXdtZHk0bFpFcmUrS3NsS1lsNDlQak9ONmhuUkl1CkMvZG9jM1VvbjV5L1UwNTVpVHdjQTd2U3Jsd0lzNGRoQ2gzSTByR3dLMVY4ekFCVWZHYjJVTVJ3V05WRkVsNzgKUFBhWTdJek1SdkdLbGFYdzB5amh3NmRkbUxVbVVPR0pmRmlzUzdsQWQvZDFRWVZwRU9rMktQSVQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.40.180:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: admin
name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVZUhEcUsvbWRYdlNIYTBIZTMvSXpwN0U2SUtZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qRXdOekEzTURnek5qQXdXaGNOTXpFd056QTFNRGd6TmpBd1dqQm5NUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVJjd0ZRWURWUVFLRXc1egplWE4wWlcwNmJXRnpkR1Z5Y3pFUE1BMEdBMVVFQ3hNR2MzbHpkR1Z0TVE0d0RBWURWUVFERXdWaFpHMXBiakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTldlZ1NTL1ZYRldHSUxmNGI0VmdqNjYKaFBSMEVwYjNxVlRUTXJaQzZiaG1Lai9JNFpiaVJQMFZjWkJqNnFBM3RKSUE4M1dHcGNsQUJUaG5veUVTWitnZApxUFJsY0xoRFpGclhSaTlKRGp6ZlRZUXZWVENYSUl0bHlJYkwwSkFQK2hQTUZQTGNxVitnVkhpc3ZlWmxIMStpCklmSXgyaGtqbjRrT1RmZVM2UUFBNjE0Wm5ocTJsTG1MdTU5RFZZbUFad1RFaTkwZC9aS2pXajY5aXZoTDA1WE4KYUl4a1RwWnhTRVBuV3lFSlh3eXRTakwwZGduajZGcXRTR1hIMGxrYXlUZ3pQQlJBMkRVMFJxY2dscUV3UmJRVQpHWUdralZhMlhtdEdDRk5oWWttVWFZdEhJVFN1cXk4U0FvWG9sUmlyRVB4RFUyc1lmc3JmakthemtONUlrWXNDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCVHd4b2J0QzJ1THBzSkRyaE1HY2hkaApkVVJRYkRBZkJnTlZIU01FR0RBV2dCU0ZqQWZqOTEzck8yTUdpeFJtc3NtdU4zLzBzekFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQWZJU29aREpWbVVSY0hSWmVpaXNjRFRVUC9iaFMwaUhTZlBDUENkOVhyUy9MbmprV2JPZUUKM09PdTJlUW82bHlCZkU1ZFA4VlFmcGxiMm41bTcrWi9zUktENmNQU241UUgxNld5bjdYUXJxWlB5QitQTGJvKwo2cVQydUFycVJlOUFLRE5rMVNkdEVmYWQ0dDlRUUlOVzlzV1EyVE5rWjhtMXBRdUgvbkZ2QVk5U0Vqczl4bU1RCitEZk5mZDhmOFdsbmREdW9JZnIvYjE0ZzJmOXBobzBuTk1vMEluOE5IUUlLQ0M1aWRicFhCYm5tYy9WWXlPN1QKbm16M2loOXZIRXV2b29qU0lxNm1FNDFlWGxYUVZzWFdNUUlESUgwY2M4ZWRRRmNnU01PV3VjbURDSFdnd1hWMApPZVJGNUtxQXR5RGIzWlFvS3R3c3NtSE92SkxMUC9mb2xBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBMVo2QkpMOVZjVllZZ3QvaHZoV0NQcnFFOUhRU2x2ZXBWTk15dGtMcHVHWXFQOGpoCmx1SkUvUlZ4a0dQcW9EZTBrZ0R6ZFlhbHlVQUZPR2VqSVJKbjZCMm85R1Z3dUVOa1d0ZEdMMGtPUE45TmhDOVYKTUpjZ2kyWEloc3ZRa0EvNkU4d1U4dHlwWDZCVWVLeTk1bVVmWDZJaDhqSGFHU09maVE1Tjk1THBBQURyWGhtZQpHcmFVdVl1N24wTlZpWUJuQk1TTDNSMzlrcU5hUHIySytFdlRsYzFvakdST2xuRklRK2RiSVFsZkRLMUtNdlIyCkNlUG9XcTFJWmNmU1dSckpPRE04RkVEWU5UUkdweUNXb1RCRnRCUVpnYVNOVnJaZWEwWUlVMkZpU1pScGkwY2gKTks2ckx4SUNoZWlWR0tzUS9FTlRheGgreXQrTXByT1Eza2lSaXdJREFRQUJBb0lCQUJKWkMrU1pIb0Nla1hwawpPbUoyUEhxZzBKeWlmNXBCNldSa3czMU9ILzc3bjNOZEVLdENBZ1R1MjVNNFVjV3pJeXBMTko0S2s2REdnK3hGClVvaWJxUnNSdVJwTXdETERieEl5WFUvZ2FYMm0vR1IzSUUwTkhmbDdJNDhZWUhDUFByNkdqK0lRTytmL3dHR2gKREtxR1V2eUcwMzJXOUpHbU1xUzErdEpoNXV0ZUFLS3g5M0FVVTlWd0lGY3J1Q0NNQnBLaUxrZUh2YThnQ1Q5NApYQjVRVnI0aVRKOHZCdTBRTjZsZUJrS21WcnBMazQ1c051QSsvcytFdEhFQUN1YzNxZFZHMko0ZzczaXREMGIzCm51T0tIK2FrU0U4bFFQS1htK3lmSkg1dmxhSGI3SDVyTEFYQzlRWEVjelFGRzJDNUJrbVEvZHV5Mk9sWXR6QzEKRlRNZHVCa0NnWUVBOUx6ZHo2c0JVd0RoNURzMkdYY3dBWjJpU2Z6dFhwV2RhRnRKMTNTSzczTmRHVWNmZ28rcQpIdmFHOWVKMDNZVFJpWkxaUVRVcC92RG9FQzA0amhUTlRiYW54YTloZTg4dnRGWHVpWDhUY2FVbVpBOE9VRmdWCnY1SWI4MlI0bmlnbm5jNUpZZ0c3WUthYmpHdElnN1NSbEFHekRSQUhtNVZyd3hnMndjcy9oV1VDZ1lFQTMzTUwKbDhUUE9xTW5YM01sM3QrWktCenNkYndVTCtQTGxDSTdBZ2FlM3RabGM0cHBsWldUSldqemNmNUtsZmVob3llOApBWGJvekR6bnBoeHVocjYwcStYdVZvOWV1RHpoZ25HZ1k2d01rb2kyWFFSMUNOWm1ISDdpejdBd3dZU2laamJNCmJZV3RXc2dRNkpraUV0cWsrSnZSa0NMcHYzL0lGSjl3VG5PTGhDOENnWUJiMjMrTmNHdkEwYlgzU2RvV1dNdmwKNzFwNFZyeHBJZExBMW5LeXNZVnNObXFkRURyZGNEcTBBR2ZMWmtIaTJ2VWlvOEZ6WGhiekgweWF0YjVpWmFCaApLTXR6d1UzZmdIWXhRNGVTaCtXdVpBUXl6Z3ZiVUJScG9OZG8xUzhJZlozUTl4cEg5TXAxamxNWHN6UzJhbEd4CnNhbVluNG1iZGN2S29BMzlVdUgybVFLQmdHVW1iQklJNnBJdHR5NFRMd3FFQjQzTUFoS04wRW1aZ2RlTjQwNVkKZHVTREF5dlpkVkJjaEY3RDhxZ2dwOXpaVzFkRExtMHZTZFRpb1M1bDRuYW1yNXk5R2pZZThvN09LSHRuT21iRQpSSEMwNkhDVkN2RzBORWNqL1VKdERMVWRlSEp5emZtcU1MNU9vTERhV1QxVnNxWkR5d3JIY3k3WUJsZW5rU3hDClM5N0xBb0dBZHdURmljeUc4ZDgwSTIzSVJlZC9BdzFQUklnRE1lVW5kaDMwUHRzb0tJR0hZQjdJQWd4aGFXbFgKRVJQTHhRbXFSVmptSG0zS1k2d2tlbllIUzA4WGZQVUhXV1RBQWpBeUZleGlNdmRjaCsyamFzSVlHR1BqMUxSeQpBTk84bFVoTGRlaCszU0JsejBMVXd6ZnVNM09PTHNDTGFUNWEvOWwyQ0J1TW81amtmR289Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
4) Check the cluster component status
[root@k8s-master1 work]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.40.180:6443
[root@k8s-master1 work]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
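controller-manager and scheduler report Unhealthy here simply because they have not been deployed yet (sections 3.4 and 3.5); at this stage only the three etcd members are expected to be Healthy.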
[root@k8s-master1 work]# kubectl get all --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.255.0.1 <none> 443/TCP 51m
5) Sync the kubectl config to the other nodes
[root@k8s-master2 ~]# mkdir /root/.kube/
[root@k8s-master3 ~]# mkdir /root/.kube/
[root@k8s-master1 work]# rsync -vaz /root/.kube/config k8s-master2:/root/.kube/
[root@k8s-master1 work]# rsync -vaz /root/.kube/config k8s-master3:/root/.kube/
3.3.3 kubectl Command Completion
Official kubectl cheat sheet: https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/
[root@k8s-master1 work]# yum install -y bash-completion
[root@k8s-master1 work]# source /usr/share/bash-completion/bash_completion
[root@k8s-master1 work]# source <(kubectl completion bash)
[root@k8s-master1 work]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@k8s-master1 work]# source '/root/.kube/completion.bash.inc'
[root@k8s-master1 work]# source $HOME/.bash_profile
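To make completion persist across login shells, append it to the profile, as the official cheat sheet suggests:
[root@k8s-master1 work]# echo 'source <(kubectl completion bash)' >> $HOME/.bash_profile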
3.4 Deploy kube-controller-manager
1) Create the CSR file
[root@k8s-master1 work]# vim kube-controller-manager-csr.json
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"192.168.40.180",
"192.168.40.181",
"192.168.40.182",
"192.168.40.199"
],
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "system:kube-controller-manager",
"OU": "system"
}
]
}
Notes on this certificate request:
hosts lists the IPs of all kube-controller-manager nodes;
CN is system:kube-controller-manager;
O is system:kube-controller-manager; the built-in kubernetes ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions its work requires
2) Generate the certificate
[root@k8s-master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@k8s-master1 work]# ll kube-controller-manager*
-rw-r--r-- 1 root root 1139 Jul 7 17:18 kube-controller-manager.csr
-rw-r--r-- 1 root root 419 Jul 7 17:17 kube-controller-manager-csr.json
-rw------- 1 root root 1679 Jul 7 17:18 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1505 Jul 7 17:18 kube-controller-manager.pem
3) Create the kube-controller-manager kubeconfig
# 1. Set the cluster parameters
[root@k8s-master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.40.180:6443 --kubeconfig=kube-controller-manager.kubeconfig
# 2. Set the client credentials
[root@k8s-master1 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
# 3. Set the context
[root@k8s-master1 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
# 4. Switch to the context
[root@k8s-master1 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
# 5. Inspect the kubeconfig
[root@k8s-master1 work]# cat kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUDVxU29ZZW55bUsvYzRnVzR6Sk1VT2tTRitnd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qRXdOekEzTURVeE9UQXdXaGNOTXpFd056QTFNRFV4T1RBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1LMjNJWnJxNWRXVDdZQmlySmI5Rk1HYmNMaERPcDkKdDZLR05KUFV2YWVJaDlMaEFnWEtZSTJtTmRiTFVnZjVxZklOeGpKVXNhb2tZcHlpeVdNUFpIbWE0ZjV6bVFacwo4NTNiWmkySmtsT3paZllXclo1bmNFcHh3Z2hnWkNVMlovQTJFcDFvYTdManM0b3hWaDk0VnNjNTBvMjVWaTBEClF2ZWM5Qmg0QlRkSzRSSlhjYkpxeDlDSW5pUzFSV1p2eXB5YkJqdkNxZW9UM0xFK1FTcHNBSzJPZXhuM2NWdDkKZGdoeWoxbEhpTjNaejcwQUVVV09SaXhwSHFjTS9WYWJOTFd5amJscUpJM0x2UDRKOFhOUTMxMHBKbXBMT0ZENQpvYkNmVmY5R0FjUDZnTHhBNnRBSldZV0pKeWF1R1hqNDBJL010ZzdtbWhQNEFjT1BsamFMc0NVQ0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRklXTUIrUDNYZXM3WXdhTEZHYXl5YTQzZi9Tek1COEdBMVVkSXdRWU1CYUFGSVdNQitQM1hlczdZd2FMRkdheQp5YTQzZi9Tek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjFsV25ib1B4ZEdYSDVNM3ZYZ1ZOVEdYUURrME1hCnh4UWw4UlZmam1tQVo0OU10cncvZ3Y1UUFRS3paTUU0a2YycXJlWmVaOThVckhPMmNVd08rRXpoeW56ZVRTV0YKZUFLVDl0RjQ1OWQ0L0hnSnFLQVVHWUxnczEzVVlmemdTUDNpK3hDakI5eFZNc0RtdmpVYkVFSlJHSnpBc1Frdwp3MHp5bHBOWTFRU0xnL3hmTmhQZUNRTXY5NGZaWXlBNTFucXdtZHk0bFpFcmUrS3NsS1lsNDlQak9ONmhuUkl1CkMvZG9jM1VvbjV5L1UwNTVpVHdjQTd2U3Jsd0lzNGRoQ2gzSTByR3dLMVY4ekFCVWZHYjJVTVJ3V05WRkVsNzgKUFBhWTdJek1SdkdLbGFYdzB5amh3NmRkbUxVbVVPR0pmRmlzUzdsQWQvZDFRWVZwRU9rMktQSVQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.40.180:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: system:kube-controller-manager
name: system:kube-controller-manager
current-context: system:kube-controller-manager
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVLakNDQXhLZ0F3SUJBZ0lVY1NoSUhTelVmMTdPeUtYYmJ0cUtnaTBTNWVJd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qRXdOekEzTURreE5EQXdXaGNOTXpFd056QTFNRGt4TkRBd1dqQ0JrREVMTUFrR0ExVUUKQmhNQ1EwNHhEakFNQmdOVkJBZ1RCVWgxWW1WcE1RNHdEQVlEVlFRSEV3VlhkV2hoYmpFbk1DVUdBMVVFQ2hNZQpjM2x6ZEdWdE9tdDFZbVV0WTI5dWRISnZiR3hsY2kxdFlXNWhaMlZ5TVE4d0RRWURWUVFMRXdaemVYTjBaVzB4Ckp6QWxCZ05WQkFNVEhuTjVjM1JsYlRwcmRXSmxMV052Ym5SeWIyeHNaWEl0YldGdVlXZGxjakNDQVNJd0RRWUoKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTCtmZzJxSktVeVovdWM3MFhla2d1NUxhL3BYVFk3YQo1Q1NDeFZmR2lqa3IyK3cyVlNUNXRLSHRuZzV5RDJrckQxdVFXam9neHk3c09jWGZMTmlaQUMzY1ErL2R6SVdHCit6dTYvN2kyRkkrVG1TSnZ2M2RoYTFqZnNCWnhSaDA3Y1htR0JsNFcxVUlSa0NET09TVGg1akExWnNoMVc0ODYKQStqcFVzdXdkSmpONWthNlhEeE0xT3hKUDA2dGVRUzhYaTM4ZExhaU5BOE5OR3VXOFF6bVJ1dEhkSFNGaTNPUgpEaElmTnJJZ3YrTFdiZlVvU1FTWnFzOVludVhpNU1ZTXlqek1GbFVsRDN3UERFckVJMEI5dFc5RGw5RHM1elVhCjhESEZoZDJyajNTamc1RWcxYXlSREkxR1ZBNS84U01qaXljUExHWnZQWWl5Z2xpb2VDc25XYlVDQXdFQUFhT0IKcVRDQnBqQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRgpCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGR2lEczVzSTBNTDVsd3Z0YStYY2l1WmQ5cHN3Ck1COEdBMVVkSXdRWU1CYUFGSVdNQitQM1hlczdZd2FMRkdheXlhNDNmL1N6TUNjR0ExVWRFUVFnTUI2SEJIOEEKQUFHSEJNQ29LTFNIQk1Db0tMV0hCTUNvS0xhSEJNQ29LTWN3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUpIVApiVFJqa01qM3pTbWY3ZGFCYkxTRlc3MS9leFdobG9vMVlXamw3UzNjeUFHTURXdGZZeEZSbC9lUjJsSVVDdU9ZCi9VWDRRWTBmcHJJMVMwN2IyY0JqSTJxenRKcTdXa3BWRFVhV05ZRmVaeDRFdTNlUlVOcG1GTnJKcWgwclN6RTYKREs4S2RGaDY1V3R1OTRua0xMSG5FcDVxWHhsdVZDY0g4S0NSNnBWL3lETTBWRzNnVStMY2pBNnk5UG1aUWxCVwpOSkxCekMzUE42K0dwaHMwaUtDL3cwOEw3MUtBWFpBaE9WWFhxZ3o2SUlsd0s2eGtTc0JJRmF6dnYxbTZ5U044CjlWSTdDUE9TMndOSjBxTE1iblEvajhPQkpRcStxS0t2ak1wMkErMVM1UjdwUWQwR2NuL1E3Wk8xeHQ3OGNtcE4Kc0pzUytaZG9QRzNGVDI2NG5lZz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdjUrRGFva3BUSm4rNXp2UmQ2U0M3a3RyK2xkTmp0cmtKSUxGVjhhS09TdmI3RFpWCkpQbTBvZTJlRG5JUGFTc1BXNUJhT2lESEx1dzV4ZDhzMkprQUxkeEQ3OTNNaFliN083ci91TFlVajVPWkltKy8KZDJGcldOK3dGbkZHSFR0eGVZWUdYaGJWUWhHUUlNNDVKT0htTURWbXlIVmJqem9ENk9sU3k3QjBtTTNtUnJwYwpQRXpVN0VrL1RxMTVCTHhlTGZ4MHRxSTBEdzAwYTVieERPWkc2MGQwZElXTGM1RU9FaDgyc2lDLzR0WnQ5U2hKCkJKbXF6MWllNWVMa3hnektQTXdXVlNVUGZBOE1Tc1FqUUgyMWIwT1gwT3puTlJyd01jV0YzYXVQZEtPRGtTRFYKckpFTWpVWlVEbi94SXlPTEp3OHNabTg5aUxLQ1dLaDRLeWRadFFJREFRQUJBb0lCQUdYM0FsM1pPS0dyUEFsZApPanY0elRieDZUWWY2SVJBazYrZDZsYW5yZnQ0RENGb1UreEY5MGxIQUpqZE5yZ1drcWg5YXBXTnhZK0JZY2laCjFlbzNsL0hQU0ZORjZjT1ByUFgrcm41aVhSUjlUTG9YVG9HKzAvbEpwaEI3Ry8wSUdYeTV4WCtobEw4QVMzbzUKWWd6dks2YXhjOHp0TGRoTDNiSzlIVEtINWJNOHlSZVdVcTJ0dHl0MmFtQTRPS05tK0gxdHJLN3RmZWsvQ1llVQpXeURoZlRXT21SNXc4MU51dUhMMm94dm1KMlMvcTluN3l1QzQ1TWMwOXkzWnk0clZ3bTZUNWFMdFNWUmxlVHJSCmZ5N09Fc2ZPUU8weDFaUCt4c0xrWWMvUncxQUlCT0cxM1AwY2ZFOGd6bWVVOEthWnJuWllnT1dXc2RDZEpRZmgKMThiTXRjMENnWUVBMWhJUDhla3dMWWFSdHdZVk9VSE1IazU0NzJyUElUNDlyWWNZTjRiMFFKT2paQ0M0RlhGVwppQkFiNzc4c0Q5dkFEVFNrWjRiL1hnd0ErWk01aDdnMW12RnFVeWF0R2E3am5uQjJ6Z09DNGszMmlyVEowUDNkCjJyRFprOGpVbkRuekl4c2NmVlhUd0VTYUNmaWswS2JBa0FQVWI4QjI4NFBqMTR5Q3BuSkt5TnNDZ1lFQTVTZmoKc2RGaFdCSHNYNTRYRHFWb3AvYm1sZXRBM1pMcGtSdEoyNFpPMC91NElQK1Fvc04wdnNVcnJIaFdwSW8wQWtNZQp3TmQrZDN5S2dobzlTMzF0ZnRwWURIOGhsNkpwQTgyNnBZblF1a0tLV1pIVitOaVRKcGgvNlZSZElHUmlDdDRxCkhjZk5CT2RSSEhrdTkwTThNT3lpbjdXd2xsNlVhWWR6eWlmLzVLOENnWUJuUkhsYXFySXFGQk94SmdjUkF2T3oKM3drcC9lMkR6T0cyRjBpUWFOTGxZQk5mRndXV21vRXl6QXFlQWl3QVRuTDhLOXZ2Y1VrNWxqTFdNclo3Q1ZzYQpyc0VxOGFwcGpGdVRzQTh2M0xQRDlmWXIvWUNxQi8yQkpQVWcvSzNMMjR5MTc3c3puemF5TnFYVWo1VDZicWJRCkVuamxuQVFGL3libmNZb0pQM05pSndLQmdRQ1d5bEhkYjg4amVkL1Y1NXg2aWNPOVN3M2V0d2hmQlU1bXF0TkYKL2pJZThmUHUydHpkRGNyandiRUVjOGRuekgxK3c1WVlCWFYxd09FUHpaNXA3MlkrNUFTdWJIVzVaeWk5VlFJdAo3ZXNJdGNKK1FDWFI4d21aaXg0WWR1ZzA2WGxPZDNTMVZnV0Y1WVVOUEh6NFBpajhkS3BxZDg5MGsxWUx2eE1sCmduNnpod0tCZ1FESUw3M1J3NVpzRFdPQXB3UCtXVjUrOExlWklZNzhoYXhZYWZINWEvNHBZd0JsTC9idE1Db2QKUUg0YXMrMWdzdFNybFFDZGVKRWU0UW0rMHU5YXdxZzQwZU1qZ0hmOEowQmhPZzJVUHVOdHFpaU9aN2tKQVBkVQovQ09JbnY4NnZQQmxncUFPWXdhZVl0b002WlBiWHZmaEt5UDE3OTc4MjRpbDFpeG9ISjhYUWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
4) Create the kube-controller-manager.conf configuration file
[root@k8s-master1 work]# vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
--secure-port=10252 \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.255.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=10.0.0.0/16 \
--experimental-cluster-signing-duration=87600h \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-use-rest-clients=true \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
5) Create the systemd unit file
[root@k8s-master1 work]# vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
6) Copy the files
[root@k8s-master1 work]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@k8s-master1 work]# cp kube-controller-manager.kubeconfig /etc/kubernetes/
[root@k8s-master1 work]# cp kube-controller-manager.conf /etc/kubernetes/
[root@k8s-master1 work]# cp kube-controller-manager.service /usr/lib/systemd/system/
[root@k8s-master1 work]# rsync -vaz kube-controller-manager*.pem k8s-master2:/etc/kubernetes/ssl/
[root@k8s-master1 work]# rsync -vaz kube-controller-manager*.pem k8s-master3:/etc/kubernetes/ssl/
[root@k8s-master1 work]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master2:/etc/kubernetes/
[root@k8s-master1 work]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-master3:/etc/kubernetes/
[root@k8s-master1 work]# rsync -vaz kube-controller-manager.service k8s-master2:/usr/lib/systemd/system/
[root@k8s-master1 work]# rsync -vaz kube-controller-manager.service k8s-master3:/usr/lib/systemd/system/
7) Start the service
[root@k8s-master1 work]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager
[root@k8s-master2 work]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager
[root@k8s-master3 work]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager
8) Verify
[root@k8s-master1 work]# ss -lntup|grep 10252
tcp LISTEN 0 128 127.0.0.1:10252 *:* users:(("kube-controller",pid=19640,fd=8))
[root@k8s-master2 work]# ss -lntup|grep 10252
tcp LISTEN 0 128 127.0.0.1:10252 *:* users:(("kube-controller",pid=19640,fd=8))
[root@k8s-master3 work]# ss -lntup|grep 10252
tcp LISTEN 0 128 127.0.0.1:10252 *:* users:(("kube-controller",pid=19640,fd=8))
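Because --leader-elect=true is set, only one of the three instances is active at a time. One way to see the current leader (a sketch; in v1.20 the election lock is held as a Lease object in kube-system, and the HOLDER column names the leading master):
[root@k8s-master1 work]# kubectl get lease -n kube-system kube-controller-manager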
3.5 Deploy kube-scheduler
1) Create the CSR file
[root@k8s-master1 work]# vim kube-scheduler-csr.json
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"192.168.40.180",
"192.168.40.181",
"192.168.40.182",
"192.168.40.199"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "system:kube-scheduler",
"OU": "system"
}
]
}
Notes on this certificate request:
hosts lists the IPs of all kube-scheduler nodes;
CN is system:kube-scheduler;
O is system:kube-scheduler; the built-in kubernetes ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions its work requires
2) Generate the certificate
[root@k8s-master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@k8s-master1 work]# ll kube-scheduler*
-rw-r--r-- 1 root root 1115 Jul 7 18:24 kube-scheduler.csr
-rw-r--r-- 1 root root 401 Jul 7 18:24 kube-scheduler-csr.json
-rw------- 1 root root 1679 Jul 7 18:24 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1480 Jul 7 18:24 kube-scheduler.pem
3) Create the kube-scheduler kubeconfig
# 1. Set the cluster parameters
[root@k8s-master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.40.180:6443 --kubeconfig=kube-scheduler.kubeconfig
# 2. Set the client credentials
[root@k8s-master1 work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
# 3. Set the context
[root@k8s-master1 work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
# 4. Switch to the context
[root@k8s-master1 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
# 5. Inspect the kubeconfig
[root@k8s-master1 work]# cat kube-scheduler.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUDVxU29ZZW55bUsvYzRnVzR6Sk1VT2tTRitnd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qRXdOekEzTURVeE9UQXdXaGNOTXpFd056QTFNRFV4T1RBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1LMjNJWnJxNWRXVDdZQmlySmI5Rk1HYmNMaERPcDkKdDZLR05KUFV2YWVJaDlMaEFnWEtZSTJtTmRiTFVnZjVxZklOeGpKVXNhb2tZcHlpeVdNUFpIbWE0ZjV6bVFacwo4NTNiWmkySmtsT3paZllXclo1bmNFcHh3Z2hnWkNVMlovQTJFcDFvYTdManM0b3hWaDk0VnNjNTBvMjVWaTBEClF2ZWM5Qmg0QlRkSzRSSlhjYkpxeDlDSW5pUzFSV1p2eXB5YkJqdkNxZW9UM0xFK1FTcHNBSzJPZXhuM2NWdDkKZGdoeWoxbEhpTjNaejcwQUVVV09SaXhwSHFjTS9WYWJOTFd5amJscUpJM0x2UDRKOFhOUTMxMHBKbXBMT0ZENQpvYkNmVmY5R0FjUDZnTHhBNnRBSldZV0pKeWF1R1hqNDBJL010ZzdtbWhQNEFjT1BsamFMc0NVQ0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRklXTUIrUDNYZXM3WXdhTEZHYXl5YTQzZi9Tek1COEdBMVVkSXdRWU1CYUFGSVdNQitQM1hlczdZd2FMRkdheQp5YTQzZi9Tek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjFsV25ib1B4ZEdYSDVNM3ZYZ1ZOVEdYUURrME1hCnh4UWw4UlZmam1tQVo0OU10cncvZ3Y1UUFRS3paTUU0a2YycXJlWmVaOThVckhPMmNVd08rRXpoeW56ZVRTV0YKZUFLVDl0RjQ1OWQ0L0hnSnFLQVVHWUxnczEzVVlmemdTUDNpK3hDakI5eFZNc0RtdmpVYkVFSlJHSnpBc1Frdwp3MHp5bHBOWTFRU0xnL3hmTmhQZUNRTXY5NGZaWXlBNTFucXdtZHk0bFpFcmUrS3NsS1lsNDlQak9ONmhuUkl1CkMvZG9jM1VvbjV5L1UwNTVpVHdjQTd2U3Jsd0lzNGRoQ2gzSTByR3dLMVY4ekFCVWZHYjJVTVJ3V05WRkVsNzgKUFBhWTdJek1SdkdLbGFYdzB5amh3NmRkbUxVbVVPR0pmRmlzUzdsQWQvZDFRWVZwRU9rMktQSVQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.40.180:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: system:kube-scheduler
name: system:kube-scheduler
current-context: system:kube-scheduler
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVGekNDQXYrZ0F3SUJBZ0lVWW9HTWpEazRUYmM1OGFENVpiV0lEUC9Ddi9Vd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qRXdOekEzTVRBeU1EQXdXaGNOTXpFd056QTFNVEF5TURBd1dqQitNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVI0d0hBWURWUVFLRXhWegplWE4wWlcwNmEzVmlaUzF6WTJobFpIVnNaWEl4RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVlTUJ3R0ExVUVBeE1WCmMzbHpkR1Z0T210MVltVXRjMk5vWldSMWJHVnlNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUIKQ2dLQ0FRRUF1alJrbU9KMmlPaWNsSmtSRmVsZGUzTjIyZ0ViTy9maHpya3lVT1RoU0ExM0hkc0NPajRkWkN3OAorOThPMHFudHlFbzh1MmhvZHE0MWtDZDZEVVBXYUV2L3RScC9XRUZWdWhQNkV3SG1FV0M1c0l4aE1QUy9reThaCkVDWjFzMmdkVTIzSU9wNlpNWmRDNGdTZ3pMc0pKMmd6N1RyZGI1ZGNrQnF2MEttTUZzL0diVm01ZlFpTnNUZWcKVUJ1TGZLUjBoZnZSNnRCelJWNFRKQzVDRDl1M0pMYnhXaEJkd1dJOUJZTmdUY0QzeFpZeWFRL2xYTTZYd01iYQpFVnI2ZDZUM3pPZkFjamR0WWxSeDRvMDlKQ25KZ0FNbDFUV1drUXJ0UTVDTnJ0MHR3TmJ5K1N5Tm5NZm94emhoClV2Z250SlZIRHJtZ0NIZmFoWVlYN0puRjRJTHllUUlEQVFBQm80R3BNSUdtTUE0R0ExVWREd0VCL3dRRUF3SUYKb0RBZEJnTlZIU1VFRmpBVUJnZ3JCZ0VGQlFjREFRWUlLd1lCQlFVSEF3SXdEQVlEVlIwVEFRSC9CQUl3QURBZApCZ05WSFE0RUZnUVU4UU8zZ1VZV1MyNDZ0NlppRXJzWXJydGYvTjB3SHdZRFZSMGpCQmd3Rm9BVWhZd0g0L2RkCjZ6dGpCb3NVWnJMSnJqZC85TE13SndZRFZSMFJCQ0F3SG9jRWZ3QUFBWWNFd0tnb3RJY0V3S2dvdFljRXdLZ28KdG9jRXdLZ294ekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBR2dZYkVPMUEvS0d3Mk5pR0hweUJobWJKTWVVdAp3am93VnFyNllibjEyYUoyUnZCYVk5TWhVVmdaZVhGSU1jL1NuTS93a3pNQzRwZ1UrY0JlclBOZWRJMm5zRk1nCm9TMk9oRUo2NUM5dFlEMmtoYTU0Wis2VTY0YkYweTkybGg4TStTRGpmQUwwVzJpcW91REprdEpyS28vNkQ0T1EKcURjK1IwcENTSmNodmN0bUdHb20xM3Z2ZmNIQkg1MFhaSW40cVcwQ3NmQTRlY3djUmVLajJKeTdxdk81YVFieQpxd09FZDloR1A0M0d0WUQxZUF2NUU1clRLclhEamREZW9sNVdCcW1zaUJKcEd2aHZNWWNoWm9xUlZuWURZZDBFCkNXQ2RTUCs3Y1BxK1lkT1RmRUV0anVwS2RMRWVjREtzTHhoYzlleVh5anNaZlJGR3RpTG55c3o5b1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdWpSa21PSjJpT2ljbEprUkZlbGRlM04yMmdFYk8vZmh6cmt5VU9UaFNBMTNIZHNDCk9qNGRaQ3c4Kzk4TzBxbnR5RW84dTJob2RxNDFrQ2Q2RFVQV2FFdi90UnAvV0VGVnVoUDZFd0htRVdDNXNJeGgKTVBTL2t5OFpFQ1oxczJnZFUyM0lPcDZaTVpkQzRnU2d6THNKSjJnejdUcmRiNWRja0JxdjBLbU1Gcy9HYlZtNQpmUWlOc1RlZ1VCdUxmS1IwaGZ2UjZ0QnpSVjRUSkM1Q0Q5dTNKTGJ4V2hCZHdXSTlCWU5nVGNEM3haWXlhUS9sClhNNlh3TWJhRVZyNmQ2VDN6T2ZBY2pkdFlsUng0bzA5SkNuSmdBTWwxVFdXa1FydFE1Q05ydDB0d05ieStTeU4Kbk1mb3h6aGhVdmdudEpWSERybWdDSGZhaFlZWDdKbkY0SUx5ZVFJREFRQUJBb0lCQUR4YkhUeDlNNFRmT1ZubApYNmRsbEZxZXE2aXdjUjU0RSthSkd5a2pkMjUraHR6VGo1NUhZZ21GV1dNZkExUC9wc2FrWVpreGw2TFloeDRwCjNhTU5HU09IZHVSQ0tZTDI4bzIxU2ZyOVE1RGdkSEFvb0p4WXlQd3hhUU5XSkJLNkxiOU1OM25nekxGSllYR1gKcEhPWU1MaG9TMlNiRHduTDIwSU9sR3lqZUhndjVVNGJsczR1K1NjanFNalNBazd2UGJVcERqbnBYWDZ5UWM0YgpuZ3NzVExDYUxpTjZpbmRodEtSbXR0OVpJK21Qd3g1SUZ0Zjh5Y0t1TGRuWjdZNkdnVG1GVlpkMS95U1FDS0JXCkE1QWF6YnZaV2lTN1JEN0lHOHo5cUp3a25kTm1DR3RJcHp4MzhJcW9ZbTdoSWU4SzBhTUJMYWRLQ1ZidEpnWXUKeG1wVmNvRUNnWUVBNTRBcU5taUNCWFV1WldNTXA2NU9DRUhGdDJSWFdFQjZHQWN4NDFNS0JKU2lTWWM0eG1qMApZRnZmcjR1b21pUVhzS2FDMG9ZWmRqTElEQjFYLzVEQmUvK0ZRTEVnMmVlMlNxMUhVRXdhMVhoM0ZDVmJDeFR0CklPTGRJMkQvTWlnaU9YTTFGZUlWZklWamNXZ1JGTEJXRkVOT2dneHRIWnMxZTZLY2loOEtuL0VDZ1lFQXpla1EKdnk1d0dlVFpZRi8waFVwY014MmFTb3lOUE9mbUZYUE84cXFlTVd6WVZNdk81eTgyYWpaMm8vSjYyd3lxK3FHWgozc29WSHR5VXNGVFZENDFKMlJuZHE5YmhwQVN2TGRIVkY3VkRQcDRUVkhYdWR5cU0vNHVTcHorazZOVTNRMzNoCktzSUZtVlUxOG1XY2gyUXArdjBJNUxBMVpCbURURGZPZmdqN2d3a0NnWUVBcjY2azJrdHZPTU1YNVp0SWRFd2sKTGNIMFVOdVdLWVFzNCtVNTUrRVJ2aTRxQnBEVzlrT2FDVEpQeThHNXZ0aGJIaFVQUE1MRnVkeUowaC9HczB6ZwplTUNPR0cwVG1DcHZQYmJJWXRpT21LZmwvbVRtOWI3NHdiZEl5TnVJYjBEajBDTnRDdUZiR3ZlRFl3SHR6SHlSCnBxajVnRm43eUxjTDNIcW9QMjJWTzVFQ2dZRUFxZjAyN2g0UVBkQklCT0F5cGJkMTFsMGgrMWw5WUVLeUdCTzcKVFdxOW5tQVZXQ3ZKYStIMk1razBPTFQ5NThqVmZvUGEyNnBKTldrMDl6MlJoMzFFOGc0QWl0U2pBeDA2NGNEUgpBdm1Kd2pBT0ZUUW00Z29telBFVTZTNEpubzRuU1hpcVl2bzZWUk9icmJsbE9BRGhCMnZONDczMDFlYWFGbG9jCkJzQ3pvc0VDZ1lBRU5ORVlFckU5SDZtVCs5bFVjUCtaWDU3UHo1YTBRZ2J4SjRYMVVvRUcxQTFqRXJPNnVvOUkKc1VibDBnVzhYb0lWc2tWZmFHVjZEdjI1VUhkYXRKNlZmck9reXh0dm5RSjBBZEpRSEVqSUhOVjBjYUc4SFhCQwpXZnVkQlVkK3VadEJvMjRCVVQzaXB0UHJ6cDZ4ajJSS2xxaXdvNGZudVBsKzhVelRubUc5aEE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
4) Create the configuration file kube-scheduler.conf
[root@k8s-master1 work]# vim kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
5) Create the systemd service file
[root@k8s-master1 work]# vim kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
6) Copy the relevant files
[root@k8s-master1 work]# cp kube-scheduler*.pem /etc/kubernetes/ssl/
[root@k8s-master1 work]# cp kube-scheduler.kubeconfig /etc/kubernetes/
[root@k8s-master1 work]# cp kube-scheduler.conf /etc/kubernetes/
[root@k8s-master1 work]# cp kube-scheduler.service /usr/lib/systemd/system/
[root@k8s-master1 work]# rsync -vaz kube-scheduler*.pem k8s-master2:/etc/kubernetes/ssl/
[root@k8s-master1 work]# rsync -vaz kube-scheduler*.pem k8s-master3:/etc/kubernetes/ssl/
[root@k8s-master1 work]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf k8s-master2:/etc/kubernetes/
[root@k8s-master1 work]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf k8s-master3:/etc/kubernetes/
[root@k8s-master1 work]# rsync -vaz kube-scheduler.service k8s-master2:/usr/lib/systemd/system/
[root@k8s-master1 work]# rsync -vaz kube-scheduler.service k8s-master3:/usr/lib/systemd/system/
7) Start the service
[root@k8s-master1 work]# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler
[root@k8s-master2 work]# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler
[root@k8s-master3 work]# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler
8) Verify the service
[root@k8s-master1 work]# ss -lntup|grep 10251
tcp LISTEN 0 128 127.0.0.1:10251 *:* users:(("kube-scheduler",pid=20169,fd=8))
[root@k8s-master2 work]# ss -lntup|grep 10251
tcp LISTEN 0 128 127.0.0.1:10251 *:* users:(("kube-scheduler",pid=20169,fd=8))
[root@k8s-master3 work]# ss -lntup|grep 10251
tcp LISTEN 0 128 127.0.0.1:10251 *:* users:(("kube-scheduler",pid=20169,fd=8))
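Beyond checking the listening port, the scheduler's insecure port also serves a health endpoint in v1.20; a quick probe (assuming curl is available; the expected response body is "ok"):
[root@k8s-master1 work]# curl http://127.0.0.1:10251/healthz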
3.6. Deploying the kubelet component
kubelet: the kubelet on each Node periodically calls the API Server's REST interface to report its own status; the API Server receives this information and updates the node status in etcd. The kubelet also watches Pod information through the API Server and manages the Pods on its node accordingly, e.g. creating, deleting, and updating them.
1) Create kubelet-bootstrap.kubeconfig
[root@k8s-master1 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
[root@k8s-master1 work]# echo $BOOTSTRAP_TOKEN
b0937520a8a36f99ea6bc95e67d77740
[root@k8s-master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.40.180:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
[root@k8s-master1 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
[root@k8s-master1 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
[root@k8s-master1 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
[root@k8s-master1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
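Before distributing it, you can sanity-check the generated bootstrap kubeconfig (an optional check, not part of the original steps; kubectl config view redacts the token by default):
[root@k8s-master1 work]# kubectl config view --kubeconfig=kubelet-bootstrap.kubeconfig
# should show cluster "kubernetes" pointing at https://192.168.40.180:6443 and a kubelet-bootstrap user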
2) Create the configuration file kubelet.json
# "cgroupDriver": "systemd" must match docker's cgroup driver (see the check after the file below).
# Replace "address" with the IP address of your own k8s-node1.
[root@k8s-master1 work]# vim kubelet.json
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/etc/kubernetes/ssl/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "192.168.40.183",
"port": 10250,
"readOnlyPort": 10255,
"cgroupDriver": "systemd",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"clusterDomain": "cluster.local.",
"clusterDNS": ["10.255.0.2"]
}
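Because "cgroupDriver" above must match docker's driver, verify it on k8s-node1 before starting the kubelet (a minimal check; the daemon.json snippet is one common way to switch docker to systemd):
[root@k8s-node1 ~]# docker info | grep -i "cgroup driver"
# expected: Cgroup Driver: systemd
# if it prints cgroupfs, set "exec-opts": ["native.cgroupdriver=systemd"] in /etc/docker/daemon.json and restart docker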
3) Create the systemd service file
[root@k8s-master1 work]# vim kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--config=/etc/kubernetes/kubelet.json \
--network-plugin=cni \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Parameter notes:
# Note:
--hostname-override: the node's display name, unique within the cluster
--network-plugin: enables CNI
--kubeconfig: an empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: the configuration parameter file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image for the infrastructure (pause) container that holds each Pod's network
4) Copy the relevant files
[root@k8s-node1 ~]# mkdir /etc/kubernetes/ssl -p
[root@k8s-master1 work]# scp kubelet-bootstrap.kubeconfig kubelet.json k8s-node1:/etc/kubernetes/
[root@k8s-master1 work]# scp ca.pem k8s-node1:/etc/kubernetes/ssl/
[root@k8s-master1 work]# scp kubelet.service k8s-node1:/usr/lib/systemd/system/
5) Start the kubelet service
[root@k8s-node1 ~]# mkdir /var/lib/kubelet
[root@k8s-node1 ~]# mkdir /var/log/kubernetes
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl enable kubelet
[root@k8s-node1 ~]# systemctl start kubelet
[root@k8s-node1 ~]# systemctl status kubelet
6) Approve the bootstrap CSR
[root@k8s-master1 work]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-iGnJAEavf-xOamgBSAHF-5dpg9_O_r0caZvPM80tcCM 55s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
[root@k8s-master1 work]# kubectl certificate approve node-csr-iGnJAEavf-xOamgBSAHF-5dpg9_O_r0caZvPM80tcCM
certificatesigningrequest.certificates.k8s.io/node-csr-iGnJAEavf-xOamgBSAHF-5dpg9_O_r0caZvPM80tcCM approved
[root@k8s-master1 work]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-iGnJAEavf-xOamgBSAHF-5dpg9_O_r0caZvPM80tcCM 84s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
[root@k8s-master1 work]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1   NotReady   <none>   17s   v1.20.7    # the network plugin is not installed yet
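If the node stays NotReady even after the network plugin is installed later, the kubelet log and node conditions are the first places to look (standard tooling, not a step from the original flow):
[root@k8s-node1 ~]# journalctl -u kubelet -f
[root@k8s-master1 work]# kubectl describe node k8s-node1   # see the Conditions and Events sections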
3.7. Deploying the kube-proxy component
1) Create the CSR request
[root@k8s-master1 work]# vim kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "k8s",
"OU": "system"
}
]
}
2) Generate the certificate
[root@k8s-master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@k8s-master1 work]# ll kube-proxy*
-rw-r--r-- 1 root root 1005 Jul 7 19:28 kube-proxy.csr
-rw-r--r-- 1 root root 211 Jul 7 19:27 kube-proxy-csr.json
-rw------- 1 root root 1675 Jul 7 19:28 kube-proxy-key.pem
-rw-r--r-- 1 root root 1391 Jul 7 19:28 kube-proxy.pem
3) Create the kubeconfig file
[root@k8s-master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.40.180:6443 --kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
4) Create the kube-proxy configuration file
[root@k8s-master1 work]# vim kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.40.183
clientConnection:
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/16   # the Pod network CIDR from the plan; kube-proxy uses it to tell cluster traffic from external traffic
healthzBindAddress: 192.168.40.183:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.40.183:10249
mode: "ipvs"
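mode: "ipvs" requires the IPVS kernel modules on the node; a minimal sketch for the CentOS 7 kernel (where the conntrack module is named nf_conntrack_ipv4; newer kernels use nf_conntrack):
[root@k8s-node1 ~]# for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $mod; done
[root@k8s-node1 ~]# lsmod | grep ip_vs   # verify the modules are loaded
If the modules are missing, kube-proxy falls back to iptables mode.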
5) Create the systemd service file
[root@k8s-master1 work]# vim kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
6) Copy the relevant files
[root@k8s-master1 work]# scp kube-proxy.kubeconfig kube-proxy.yaml k8s-node1:/etc/kubernetes/
[root@k8s-master1 work]# scp kube-proxy.service k8s-node1:/usr/lib/systemd/system/
7) Start the service
[root@k8s-node1 ~]# mkdir -p /var/lib/kube-proxy
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl enable kube-proxy
[root@k8s-node1 ~]# systemctl start kube-proxy
[root@k8s-node1 ~]# systemctl status kube-proxy
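To confirm kube-proxy is up and really using ipvs (ipvsadm is a separate package; install it with yum if you want to inspect the rules):
[root@k8s-node1 ~]# ss -lntup | grep kube-proxy   # 10249 (metrics) and 10256 (healthz) should be listening
[root@k8s-node1 ~]# ipvsadm -Ln                   # lists the virtual servers kube-proxy programmed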
3.8. Deploying the Calico component
[root@k8s-master1 work]# cat calico.yaml
# Calico Version v3.5.3
# https://docs.projectcalico.org/v3.5/releases#v3.5.3
# This manifest includes the following component versions:
# calico/node:v3.5.3
# calico/cni:v3.5.3
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the Calico backend to use.
calico_backend: "bird"
# Configure the MTU to use
veth_mtu: "1440"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.0",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "host-local",
"subnet": "usePodCidr"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}
---
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
annotations:
# This, along with the CriticalAddonsOnly toleration below,
# marks the pod as a critical add-on, ensuring it gets
# priority scheduling and that its resources are reserved
# if it ever gets evicted.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
initContainers:
# This container installs the Calico CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: quay.io/calico/cni:v3.5.3
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
containers:
# Runs calico/node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: quay.io/calico/node:v3.5.3
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
- name: IP_AUTODETECTION_METHOD
              value: "can-reach=192.168.40.131"   # use an address reachable from every node, e.g. the gateway 192.168.40.2 or a master IP
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "10.0.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
httpGet:
path: /liveness
port: 9099
host: localhost
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -bird-ready
- -felix-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
volumes:
# Used by calico/node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Create all the CustomResourceDefinitions needed for
# Calico policy and networking mode.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Used to discover Typhas.
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
# Calico stores some configuration information in node annotations.
- update
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Used by Calico for policy information.
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
# The CNI plugin patches pods/status.
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico monitors various CRDs for config.
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- clusterinformations
- hostendpoints
verbs:
- get
- list
- watch
# Calico must create and update some CRDs on startup.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create
- update
# Calico stores some configuration information on the node.
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
  # These permissions are only required for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- bgpconfigurations
- bgppeers
verbs:
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---
[root@k8s-master1 work]# kubectl apply -f calico.yaml
[root@k8s-master1 work]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-2lcth 1/1 Running 0 68s
[root@k8s-master1 work]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 34m v1.20.7
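If a node does not go Ready after the apply, check that the install-cni init container actually dropped the CNI config and binaries onto the node (paths taken from the DaemonSet above):
[root@k8s-node1 ~]# ls /etc/cni/net.d/   # 10-calico.conflist (the CNI_CONF_NAME above) should exist
[root@k8s-node1 ~]# ls /opt/cni/bin/     # the Calico CNI binaries installed by install-cni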
3.9. Deploying the CoreDNS component
[root@k8s-master1 work]# cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. Default is 1.
# 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
kubernetes.io/os: linux
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values: ["kube-dns"]
topologyKey: kubernetes.io/hostname
containers:
- name: coredns
image: coredns/coredns:1.7.0
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.255.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
[root@k8s-master1 work]# kubectl apply -f coredns.yaml
[root@k8s-master1 work]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-2lcth 1/1 Running 0 9m55s
coredns-7bf4bd64bd-cthgw 1/1 Running 0 67s
[root@k8s-master1 work]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.255.0.2 <none> 53/UDP,53/TCP,9153/TCP 86s
4. Cluster Testing
4.1. Test-deploying a Tomcat service
[root@k8s-master1 work]# cat tomcat.yaml
apiVersion: v1
kind: Pod
metadata:
name: demo-pod
namespace: default
labels:
app: myapp
env: dev
spec:
containers:
- name: tomcat-pod-java
ports:
- containerPort: 8080
image: tomcat:8.5-jre8-alpine
imagePullPolicy: IfNotPresent
- name: busybox
image: busybox:latest
command:
- "/bin/sh"
- "-c"
- "sleep 3600"
[root@k8s-master1 work]# cat tomcat-service.yaml
apiVersion: v1
kind: Service
metadata:
name: tomcat
spec:
type: NodePort
ports:
- port: 8080
nodePort: 30080
selector:
app: myapp
env: dev
[root@k8s-master1 work]# kubectl apply -f tomcat.yaml
pod/demo-pod created
[root@k8s-master1 work]# kubectl apply -f tomcat-service.yaml
service/tomcat created
[root@k8s-master1 work]# kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-pod 2/2 Running 0 2m12s
[root@k8s-master1 work]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.255.0.1 <none> 443/TCP 4h10m
tomcat NodePort 10.255.239.94 <none> 8080:30080/TCP 2m10s
Access the Tomcat welcome page in a browser at http://192.168.40.183:30080 (node IP + NodePort).
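The same check works from the command line; any node's IP serves the NodePort:
[root@k8s-master1 work]# curl -I http://192.168.40.183:30080
# any HTTP response header from Tomcat confirms the Service routes to the Pod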
4.2. Verifying that CoreDNS works
[root@k8s-master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (110.242.68.3): 56 data bytes
64 bytes from 110.242.68.3: seq=0 ttl=127 time=34.505 ms
64 bytes from 110.242.68.3: seq=1 ttl=127 time=34.521 ms
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default.svc.cluster.local
Address 1: 10.255.0.1 kubernetes.default.svc.cluster.local
/ # nslookup tomcat.default.svc.cluster.local
Server: 10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local
Name: tomcat.default.svc.cluster.local
Address 1: 10.255.239.94 tomcat.default.svc.cluster.local
Note: use busybox at the pinned 1.28 version, not the latest. With the latest image, nslookup cannot resolve the DNS name and IP, and fails as follows:
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.255.0.2
Address: 10.255.0.2:53
*** Can't find kubernetes.default.svc.cluster.local: No answer
*** Can't find kubernetes.default.svc.cluster.local: No answer
5. Installing keepalived + nginx for k8s apiserver high availability
1) Install nginx and keepalived
# Install nginx as an active/standby pair on k8s-master1 and k8s-master2
[root@k8s-master1 ~]# yum install nginx keepalived -y
[root@k8s-master2 ~]# yum install nginx keepalived -y
# Note: the stream module must also be installed; without it nginx fails with: nginx: [emerg] unknown directive "stream" in /etc/nginx/nginx.conf:13
[root@k8s-master1 ~]# yum install nginx-mod-stream -y
[root@k8s-master1 ~]# nginx -v
nginx version: nginx/1.20.1
2) Edit the nginx configuration file (identical on active and standby)
[root@k8s-master1 ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
# Layer-4 load balancing across the three masters' apiserver components
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 192.168.40.180:6443; # k8s-master1 APISERVER IP:PORT
server 192.168.40.181:6443; # k8s-master2 APISERVER IP:PORT
server 192.168.40.182:6443; # k8s-master3 APISERVER IP:PORT
}
server {
        listen 16443; # nginx runs on the master nodes themselves, so it cannot listen on 6443, which would conflict with the apiserver
proxy_pass k8s-apiserver;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server {
listen 80 default_server;
server_name _;
location / {
}
}
}
The nginx configuration on k8s-master2 is identical; copy /etc/nginx/nginx.conf from k8s-master1 (for example with scp) instead of editing it by hand.
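After editing, validate the configuration on both machines before starting nginx:
[root@k8s-master1 ~]# nginx -t
# expected: "syntax is ok" and "test is successful"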
3) keepalived configuration
# Configuration on the master
[root@k8s-master1 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
    interface eth0 # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; unique per instance
    priority 100 # priority; set 90 on the backup server
    advert_int 1 # VRRP heartbeat advertisement interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
    # Virtual IP
virtual_ipaddress {
192.168.40.199/24
}
track_script {
check_nginx
}
}
# Health-check script on the master node
[root@k8s-master1 ~]# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
systemctl stop keepalived
fi
[root@k8s-master1 ~]# chmod +x /etc/keepalived/check_nginx.sh
# Configuration on the backup node
[root@k8s-master2 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
    virtual_router_id 51 # VRRP router ID; unique per instance
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.40.199/24
}
track_script {
check_nginx
}
}
[root@k8s-master2 ~]# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
systemctl stop keepalived
fi
[root@k8s-master2 ~]# chmod +x /etc/keepalived/check_nginx.sh
# Note: keepalived decides whether to fail over based on the script's exit status (0 = healthy, non-zero = unhealthy).
4) Start the services
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start nginx
[root@k8s-master1 ~]# systemctl start keepalived
[root@k8s-master1 ~]# systemctl enable nginx keepalived
[root@k8s-master2 ~]# systemctl daemon-reload
[root@k8s-master2 ~]# systemctl start nginx
[root@k8s-master2 ~]# systemctl start keepalived
[root@k8s-master2 ~]# systemctl enable nginx keepalived
5) Verify that the VIP is bound
[root@k8s-master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:52:bf:68 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.180/24 brd 192.168.40.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.40.199/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe52:bf68/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:fc:92:c8:72 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
[root@k8s-master2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:f1:81:61 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.181/24 brd 192.168.40.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef1:8161/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:c6:90:ba:4c brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
6) Test keepalived failover
# Stop nginx on k8s-master1; the VIP should fail over to k8s-master2
[root@k8s-master1 ~]# systemctl stop nginx
[root@k8s-master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:52:bf:68 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.180/24 brd 192.168.40.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe52:bf68/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:fc:92:c8:72 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
[root@k8s-master2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:f1:81:61 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.181/24 brd 192.168.40.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.40.199/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fef1:8161/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:c6:90:ba:4c brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
# Restart nginx and keepalived on master1; the VIP fails back
[root@k8s-master1 ~]# systemctl start nginx
[root@k8s-master1 ~]# systemctl start keepalived
[root@k8s-master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:52:bf:68 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.180/24 brd 192.168.40.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.40.199/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe52:bf68/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:fc:92:c8:72 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
7) Update the configuration on all Worker Nodes
Right now every Worker Node component still connects directly to k8s-master1; unless they are switched to the VIP behind the load balancer, the master remains a single point of failure.
So the next step is to edit the component configuration files on every Worker Node (the nodes listed by kubectl get nodes), changing 192.168.40.180 to 192.168.40.199 (the VIP).
[root@k8s-node1 ~]# sed -i 's#192.168.40.180:6443#192.168.40.199:16443#' /etc/kubernetes/kubelet-bootstrap.kubeconfig
[root@k8s-node1 ~]# sed -i 's#192.168.40.180:6443#192.168.40.199:16443#' /etc/kubernetes/kubelet.json
[root@k8s-node1 ~]# sed -i 's#192.168.40.180:6443#192.168.40.199:16443#' /etc/kubernetes/kubelet.kubeconfig
[root@k8s-node1 ~]# sed -i 's#192.168.40.180:6443#192.168.40.199:16443#' /etc/kubernetes/kube-proxy.yaml
[root@k8s-node1 ~]# sed -i 's#192.168.40.180:6443#192.168.40.199:16443#' /etc/kubernetes/kube-proxy.kubeconfig
[root@k8s-node1 ~]# systemctl restart kubelet kube-proxy
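Finally, confirm the worker really reaches the apiserver through the VIP: new entries should appear in the load balancer's access log (path from the nginx config above), and the node should stay Ready:
[root@k8s-master1 ~]# tail /var/log/nginx/k8s-access.log   # upstream addresses 192.168.40.180/181/182:6443 should show up
[root@k8s-master1 ~]# kubectl get nodes                    # k8s-node1 should remain Ready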