Environment Setup
This document describes how to set up a Kubernetes cluster at version 1.18.5. When the then-latest version 1.18.8 was tried first, the container images Kubernetes depends on could not be downloaded from within China, and switching to the Alibaba or Tencent package mirrors did not help, so version 1.18.5 is used instead.
Reference article: https://www.cnblogs.com/hellxz/p/use-kubeadm-init-kubernetes-cluster.html
Server Information
This guide uses CentOS 8 as the operating system, with the deployment simulated on virtual machines.
IP | Hostname | CPU cores | Memory | Disk | Role |
---|---|---|---|---|---|
192.168.43.130 | master | 2 | 2G | 20G | Control-plane node |
192.168.43.129 | node01 | 2 | 2G | 20G | Worker node |
Software Versions
Software | Version |
---|---|
CentOS | 8 |
Kubernetes | 1.18.5 |
Docker | 19.03.12 |
Environment Checks
Check | Inspection command | Modification command |
---|---|---|
All cluster nodes can reach each other | ping 192.168.43.129 | N/A |
Unique MAC address per node | ip link or ifconfig -a | See command block 1 below |
Unique hostname within the cluster | hostnamectl status | hostnamectl set-hostname <hostname> |
Unique system product UUID | dmidecode -s system-uuid | See methods available online |
# 1. Change the MAC address. These commands have not been tested here and are pending verification.
ifconfig eth0 down
cd /etc/sysconfig/network-scripts
vim ifcfg-eth0
# Change the line "HWADDR=xx:xx:xx:xx:xx:xx" to "MACADDR=xx:xx:xx:xx:xx:xx"
ifconfig eth0 up
service network start
# Note: the keywords HWADDR and MACADDR are not interchangeable
Required Open Ports
kube-master node ports
Protocol | Direction | Port range | Purpose |
---|---|---|---|
TCP | Inbound | 6443* | kube-api-server |
TCP | Inbound | 2379-2380 | etcd API |
TCP | Inbound | 10250 | Kubelet API |
TCP | Inbound | 10251 | kube-scheduler |
TCP | Inbound | 10252 | kube-controller-manager |
kube-node node ports
Protocol | Direction | Port range | Purpose |
---|---|---|---|
TCP | Inbound | 10250 | Kubelet API |
TCP | Inbound | 30000-32767 | NodePort Services |
# Check the firewall status
firewall-cmd --state
# List all ports currently opened in the firewall
firewall-cmd --zone=public --list-ports
# Open a single port (5672 here is only an example)
firewall-cmd --zone=public --add-port=5672/tcp --permanent
# Open a range of ports
firewall-cmd --permanent --zone=public --add-port=100-500/tcp
# Reload the firewall; port changes only take effect after a reload
firewall-cmd --reload
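Applied to this cluster, the port tables above translate into the commands below. This is only a sketch that mirrors the tables; on a throwaway lab setup the firewall can alternatively be stopped altogether.
# Run on the master node: open the control-plane ports listed above
firewall-cmd --permanent --zone=public --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250-10252/tcp
# Run on the node machines: open the kubelet API port and the NodePort range
firewall-cmd --permanent --zone=public --add-port=10250/tcp --add-port=30000-32767/tcp
# Reload to apply the changes
firewall-cmd --reload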
Configure Host Trust
Configure the hosts Mapping
Configure the hosts mapping on all nodes; the names after the IP addresses are the hostnames set earlier.
If nodes are added later, update this file on every node.
Note: replace the IP addresses with your actual ones.
# Run on all nodes
cat >> /etc/hosts <<EOF
192.168.43.130 master
192.168.43.129 node01
EOF
Configure SSH Keys
Generate an SSH key pair on the master node and distribute the public key to each node.
If new nodes join later, distribute the key to them as well.
# Run on the master node
# Generate an SSH key pair; just press Enter through all the prompts
ssh-keygen -t rsa
# Copy the newly generated public key to each node's trusted list; enter each host's password when prompted
ssh-copy-id root@master
ssh-copy-id root@node01
# When done, use the following command to verify that you can log in to the target server
ssh 'root@master'
# Log out again
exit
Disable Swap
Swap uses disk space as extra memory only when RAM runs short; because disk I/O is far slower than RAM, disabling swap improves performance, and the kubelet expects swap to be off by default.
# Run on all nodes
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
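As an optional quick check, the Swap line reported by free should now show 0:
# Confirm swap is disabled
free -h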
Disable SELinux
With SELinux enabled, the kubelet may report Permission denied when mounting directories. Set SELinux to permissive or disabled; with permissive it still logs warn-level messages.
# Run on all nodes
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
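As an optional quick check, getenforce should now report Permissive (it reports Disabled only after a reboot with the changed config file):
# Confirm the current SELinux mode
getenforce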
Set the System Time Zone and Synchronize the Time
# Run on all nodes
# Set the time zone
timedatectl set-timezone Asia/Shanghai
systemctl enable --now chronyd
# Verify the setting
date
# Check the synchronization status
timedatectl status
# The following lines in the output indicate the clock is synchronized correctly
System clock synchronized: yes
NTP service: active
# Keep the hardware clock in UTC
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog && systemctl restart crond
Deploy Docker
Docker must be installed on every node.
Add the Docker yum Repository
# Run on all nodes
# Install the required dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun docker-ce yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Rebuild the yum cache
yum makecache
Install Docker
# Run on all nodes
# List the available Docker versions
yum list docker-ce.x86_64 --showduplicates | sort -r
# The output looks like this:
[root@localhost ~]# yum list docker-ce.x86_64 --showduplicates | sort -r
Last metadata expiration check: 0:02:19 ago on Wed 26 Aug 2020 01:16:53 PM CST.
docker-ce.x86_64 3:19.03.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.12-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.11-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.10-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
Available Packages
# Run on all nodes
# Install a specific Docker version, 19.03.12 in this guide
yum install -y docker-ce-19.03.12-3.el7
# This command may fail with the following error:
Last metadata expiration check: 0:06:47 ago on Wed 26 Aug 2020 01:16:31 PM CST.
Error:
Problem: package docker-ce-3:19.03.12-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
- conflicting requests
- package containerd.io-1.2.10-3.2.el7.x86_64 is filtered out by modular filtering
- package containerd.io-1.2.13-3.1.el7.x86_64 is filtered out by modular filtering
- package containerd.io-1.2.13-3.2.el7.x86_64 is filtered out by modular filtering
- package containerd.io-1.2.2-3.3.el7.x86_64 is filtered out by modular filtering
- package containerd.io-1.2.2-3.el7.x86_64 is filtered out by modular filtering
- package containerd.io-1.2.4-3.1.el7.x86_64 is filtered out by modular filtering
- package containerd.io-1.2.5-3.1.el7.x86_64 is filtered out by modular filtering
- package containerd.io-1.2.6-3.3.el7.x86_64 is filtered out by modular filtering
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
# Cause of the error: a newer containerd.io package is required
# Fix: install a newer containerd.io manually. Downloading it from the official source on the server can be slow, so download the rpm with a download tool (e.g. Xunlei) on your workstation and upload it to the server.
# The example below uses an Xshell connection to the server to upload the file:
yum install lrzsz
mkdir software
cd software/
rz
yum localinstall -y containerd.io-1.2.6-3.3.el7.x86_64.rpm
# Re-run the Docker installation; this time it succeeds
yum install -y docker-ce-19.03.12-3.el7
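If the server itself can reach the Aliyun docker-ce mirror added above, an alternative to uploading the rpm is downloading it directly on the server. This is only a sketch of the same workaround: the URL assumes the mirror keeps the upstream directory layout, and the exact version/filename should be checked against the Packages directory.
# Download a newer containerd.io rpm directly (version/filename here is an example) and install it locally
curl -O http://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm
yum localinstall -y containerd.io-1.2.13-3.2.el7.x86_64.rpm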
Ensure the Required Kernel Modules Load at Boot
# Run on all nodes
lsmod | grep overlay
lsmod | grep br_netfilter
If the commands above print nothing or report that the file does not exist, run the following:
# Run on all nodes
cat > /etc/modules-load.d/docker.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
Make Bridged Traffic Visible to iptables
# Run on all nodes
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
# Verify the settings took effect; both commands below must return 1
sysctl -n net.bridge.bridge-nf-call-iptables
sysctl -n net.bridge.bridge-nf-call-ip6tables
Configure Docker
# Run on all nodes
mkdir /etc/docker
# Set the cgroup driver to systemd (recommended by Kubernetes), limit container log size, and set the storage driver; the data-root at the end (Docker's data directory) can be changed as needed
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"registry-mirrors": ["https://7uuu3esz.mirror.aliyuncs.com"],
"data-root": "/data/docker"
}
EOF
# Enable Docker at boot and start it immediately
systemctl enable --now docker
Verify Docker Works
# Run on all nodes
# Check the Docker info and confirm it matches the configuration above
docker info
# hello-world test
docker run --rm hello-world
# Remove the test image
docker rmi hello-world
Add a User to the docker Group
This allows non-root users to run docker commands without sudo.
# Run on all nodes
# Add the user to the docker group; zgs is the non-root account used in this guide
usermod -aG docker zgs
# Refresh the docker group membership for the current session
newgrp docker
Deploy the Kubernetes Cluster
Unless noted otherwise, run all of the commands below on every node.
Add the Kubernetes yum Repository
# Run on all nodes
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Rebuild the yum cache; enter y to accept the GPG keys
yum makecache
Install kubeadm, kubelet and kubectl
# Run on all nodes
# Install the packages
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable kubelet at boot and start it now
systemctl enable --now kubelet
Configure Command Auto-completion
# Run on all nodes
# Install the bash completion package
yum install bash-completion -y
# Set up kubectl and kubeadm completion; takes effect at the next login
kubectl completion bash > /etc/bash_completion.d/kubectl
kubeadm completion bash > /etc/bash_completion.d/kubeadm
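To get completion in the current shell without logging out first, the completion scripts can also be sourced directly; this step is optional.
# Enable completion for the current session only
source <(kubectl completion bash)
source <(kubeadm completion bash)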
Pre-pull the Kubernetes Images
Because of network restrictions in China, the Kubernetes images have to be pulled from mirror sites or from copies that other users have pushed to Docker Hub.
# Run on all nodes
# List the images required by this Kubernetes version
kubeadm config images list --kubernetes-version v1.18.5
# The output is:
k8s.gcr.io/kube-apiserver:v1.18.5
k8s.gcr.io/kube-controller-manager:v1.18.5
k8s.gcr.io/kube-scheduler:v1.18.5
k8s.gcr.io/kube-proxy:v1.18.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
Create a script named get-k8s-images.sh under the /root/k8s directory:
# Run on all nodes
cd /root/
mkdir k8s
cd k8s/
# Create the script file; its contents are shown in the next code block
vim get-k8s-images.sh
#!/bin/bash
# Script For Quick Pull K8S Docker Images
# by Hellxz Zhang <hellxz001@foxmail.com>
KUBE_VERSION=v1.18.5
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.6.7
ETCD_VERSION=3.4.3-0
# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION
# remove the original tags; the underlying images are not deleted
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
Make the script executable, then run it to pull the images.
# Run on all nodes
# Add execute permission to the script
chmod +x get-k8s-images.sh
# Run the script
./get-k8s-images.sh
After the script finishes, run docker images to confirm the images are present.
Initialize the Master Node
The commands in this section are run on the master node only.
Set the kubelet Default cgroup Driver
# Run on the master node
mkdir /var/lib/kubelet
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
systemctl restart kubelet
Generate the kubeadm init Configuration File
[Optional] Only needed when you want to customize the init configuration. At this point we should still be in the /root/k8s directory.
# Run on the master node
kubeadm config print init-defaults > init.default.yaml
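For reference, the fields most commonly adjusted in init.default.yaml are the Kubernetes version, the pod subnet, and the image repository. Below is a minimal excerpt with values matching this guide; it is only an illustration, and the generated file contains many more fields.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.5
imageRepository: k8s.gcr.io
networking:
  podSubnet: 10.244.0.0/16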
Test Whether the Environment Is Ready
WARNING messages here are normal.
# Run on the master node
kubeadm init phase preflight
# Full form: kubeadm init phase preflight [--config kubeadm-init.yaml]
# Warnings at the end of this command are normal; they usually concern the firewall and being unable to reach the k8s site.
# An error about failing to pull images from k8s.gcr.io is also acceptable: during initialization the images already in the local Docker cache are used first, and images are pulled only if they are missing locally.
Initialize the Master
10.244.0.0/16 is the pod CIDR that flannel uses by default; the value to set depends on the network add-on you choose.
# Run on the master node
# Full form: kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.5 [--config kubeadm-init.yaml]
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.5
The output of the initialization is as follows:
W0826 15:02:55.595805 40135 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.43.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.43.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.43.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0826 15:03:01.689893 40135 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0826 15:03:01.702934 40135 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.034495 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3wolsi.61tnffn49i0clcth
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.43.130:6443 --token 3wolsi.61tnffn49i0clcth \
--discovery-token-ca-cert-hash sha256:fea2cc335b2f4b525bc71cc3f7fcbf68f19ced1efd43520710ad41f337ab6969
Grant kubectl Access to the Day-to-day Cluster User
The example below uses another user on the server, zgs.
# Run on the master node
# If the user is not in an administrators group, grant administrator rights first
[root@master k8s]# usermod -g root zgs
[root@master k8s]# su zgs
[zgs@master k8s]$ mkdir -p $HOME/.kube
[zgs@master k8s]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/admin.conf
[sudo] password for zgs:
[zgs@master k8s]$ sudo chown $(id -u):$(id -g) $HOME/.kube/admin.conf
[zgs@master k8s]$ echo "export KUBECONFIG=$HOME/.kube/admin.conf" >> ~/.bashrc
[zgs@master k8s]$ exit
exit
If permission errors occur during these steps, it is probably because the zgs user has not been given sudo rights; run the commands below.
Switch to the root user before running them.
[root@master k8s]# su -
Last login: Wed Aug 26 11:06:42 CST 2020 from 192.168.43.130 on pts/1
[root@master ~]# chmod u+w /etc/sudoers
[root@master ~]# vim /etc/sudoers
# Find the line "root ALL=(ALL) ALL" in the file and add "XXX ALL=(ALL) ALL" below it
# (where XXX is the username), then save and exit.
[root@master ~]# chmod u-w /etc/sudoers
[root@master ~]# exit
Configure Master Authentication
# Run on the master node
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
. /etc/profile
Without this configuration, kubectl reports the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
At this point the master node has been initialized successfully, but no network add-on is installed yet, so it cannot communicate with the other nodes.
Install the Network Add-on
flannel is used as the example here.
# Run on the master node
cd ~/k8s/
yum install -y wget
# Download the latest flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
The output is as follows:
[root@master k8s]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
Check the kube-master Node Status
# Run on the master node
kubectl get nodes
The output is as follows:
[root@master k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 44m v1.18.8
Back Up the Images for the Other Nodes
Export the images from the master node so they can be copied to the other node machines later; a private image registry would of course be even better.
# Run on the master node
docker save k8s.gcr.io/kube-proxy:v1.18.5 \
k8s.gcr.io/kube-apiserver:v1.18.5 \
k8s.gcr.io/kube-controller-manager:v1.18.5 \
k8s.gcr.io/kube-scheduler:v1.18.5 \
k8s.gcr.io/pause:3.2 \
k8s.gcr.io/coredns:1.6.7 \
k8s.gcr.io/etcd:3.4.3-0 > k8s-imagesV1.18.5.tar
This creates a file named k8s-imagesV1.18.5.tar under the /root/k8s directory containing the Docker images used by Kubernetes.
[root@master k8s]# ll -h
total 694M
-rwxr-xr-x. 1 root docker 2.1K Aug 26 14:35 get-k8s-images.sh
-rw-r--r--. 1 root docker 826 Aug 26 14:52 init.default.yaml
-rw-r--r--. 1 root docker 694M Aug 26 15:52 k8s-imagesV1.18.5.tar
-rw-r--r--. 1 root docker 15K Aug 26 15:45 kube-flannel.yml
Initialize the node* Machines and Join the Cluster
Copy the Images to the Node Machines
Every node machine needs these Docker images. The steps below show one node; repeat them on the others in the same way.
# Run on the node machines
mkdir ~/k8s
scp root@master:/root/k8s/k8s-imagesV1.18.5.tar ~/k8s
cd ~/k8s
docker load < k8s-imagesV1.18.5.tar
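As an optional quick check, confirm on the node that the images were imported:
# Verify the imported images on the node
docker images | grep k8s.gcr.io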
Get the Cluster Join Command
The last few lines of the kubeadm init output shown earlier contain the command that node machines use to join the cluster:
# From the kubeadm init output on the master node
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.43.130:6443 --token 3wolsi.61tnffn49i0clcth \
--discovery-token-ca-cert-hash sha256:fea2cc335b2f4b525bc71cc3f7fcbf68f19ced1efd43520710ad41f337ab6969
If that output was not saved, a new join command with a fresh token can be generated on the master node:
# Run on the master node
kubeadm token create --print-join-command
The output is as follows:
W0826 16:12:44.007200 60971 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.43.130:6443 --token ad399n.rqut2l5e16azf0dv --discovery-token-ca-cert-hash sha256:fea2cc335b2f4b525bc71cc3f7fcbf68f19ced1efd43520710ad41f337ab6969
Note that this generates a new token, so use the newly printed join command rather than the old one.
The warning in the output can be ignored.
Run the Join Command on the node* Machines
Run the join command generated above as the root user.
# Run on all node machines
kubeadm join 192.168.43.130:6443 --token ad399n.rqut2l5e16azf0dv --discovery-token-ca-cert-hash sha256:fea2cc335b2f4b525bc71cc3f7fcbf68f19ced1efd43520710ad41f337ab6969
The output is as follows:
W0826 16:17:45.106961 44268 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
The line "This node has joined the cluster:" confirms that the node has successfully joined the Kubernetes cluster.
Check the Cluster Node Status
Check the status of every node in the cluster from the master node.
# Run on the master node
kubectl get nodes
The output is as follows:
NAME STATUS ROLES AGE VERSION
master Ready master 77m v1.18.8
node01 NotReady <none> 2m50s v1.18.8
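node01 showing NotReady right after joining is normal; it becomes Ready once its flannel and kube-proxy pods are running on the node. To watch the transition (optional):
# Run on the master node: watch the node status until node01 turns Ready
kubectl get nodes -w
# Inspect the system pods scheduled on the new node
kubectl get pods -n kube-system -o wide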
Conclusion
The Kubernetes cluster is now deployed successfully, and application images and workloads can be run on it.