Pre-installation environment setup for the cluster
1) Check the environment
[dc2-user@10-255-20-74 ~]$ cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
[dc2-user@10-255-20-74 ~]$ uname -r
3.10.0-862.14.4.el7.x86_64
2) Disable SELinux:
Set: setenforce 0
Check: getenforce
cat /etc/selinux/config
SELINUX=disabled
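setenforce 0 only sets permissive mode for the running session; to keep SELinux disabled across reboots, the config file shown above must actually be edited, e.g.:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config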
3) Configure forwarding parameters:
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
Apply immediately:
sysctl -p /etc/sysctl.d/k8s.conf
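Note: the net.bridge.* keys exist only while the br_netfilter module is loaded, so if sysctl -p reports that those files do not exist, load the module first (step 7 below also does this):
modprobe br_netfilter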
Confirm again that the values are 1:
[dc2-user@10-255-20-74 ~]$ cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
[dc2-user@10-255-20-74 ~]$ cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[dc2-user@10-255-20-74 ~]$ cat /proc/sys/net/ipv4/ip_forward
1
4) Disable the firewall on every node
CentOS 7 ships with firewalld as its default firewall.
systemctl stop firewalld.service      # stop the firewall
systemctl disable firewalld.service   # keep it from starting at boot
firewall-cmd --state                  # check its status
5) Make sure the following are installed on every node:
yum install -y ipset     # for iptables
yum install -y ipvsadm   # for ipvs
6) Disable the swap partition
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
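# & in the sed replacement stands for the whole matched line, so every /etc/fstab entry containing "swap" is commented out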
Confirm that swap is off with free -m:
total used free shared buff/cache available
Mem: 64265 5686 47302 3218 11276 54178
Swap: 0 0 0
7) Enabling IPVS for kube-proxy requires the following kernel modules; run this script on all Kubernetes nodes:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# apply the configuration
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
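br_netfilter is not loaded automatically at boot on CentOS 7; a systemd-native way to persist it (an alternative to the /etc/sysconfig/modules script style used above):
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF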
8) Check the available Docker versions
yum list docker-ce.x86_64 --showduplicates | sort -r
Installed Packages
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 @docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
9) Remove any old Docker packages
yum -y remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
10) Install some required system utilities
yum install -y yum-utils device-mapper-persistent-data lvm2
11) List the Docker packages you have installed
yum list installed | grep docker
docker-engine.x86_64 1.7.1-1.el7 @/docker-engine-1.7.1-1.el7.x86_64.rpm
Remove the package:
sudo yum -y remove docker-engine.x86_64
Remove images, containers, and other Docker data:
rm -rf /var/lib/docker
12) Install Docker:
yum install -y docker-ce-18.09.6-3.el7
Start the Docker service:
systemctl enable docker
systemctl daemon-reload
systemctl start docker
systemctl status docker
13) Review Docker's configurable parameters
docker info
Create the image storage directory: mkdir -p /APP/docker/lib/
Then create /etc/docker/daemon.json with the following content:
{
  "insecure-registries": [""],
  "graph": "/APP/docker/lib/",
  "storage-driver": "overlay2"
}
Alternatively, change the image and container storage path in the service unit (below). Note that graph should be set in only one place: dockerd refuses to start if the same option appears both in daemon.json and on the command line.
cd /etc/systemd/system/multi-user.target.wants
Edit docker.service and append --graph=/APP/docker to point at the image storage directory:
ExecStart=/usr/bin/dockerd --graph=/APP/docker
Restart to apply the configuration:
[root@node3 docker]# systemctl stop docker
[root@node3 docker]# systemctl daemon-reload
[root@node3 docker]# systemctl start docker
Check the Docker version:
docker version
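To confirm the storage change took effect, check the root directory the daemon reports:
docker info | grep -i 'root dir'    # should show /APP/docker (or /APP/docker/lib/ if set via daemon.json)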
14) Configure the Kubernetes yum repository
Create /etc/yum.repos.d/kubernetes.repo:
[kubernetes]
name=kubernetes.repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
15) Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:
iptables -nvL
Install some essential tools:
yum install -y epel-release
yum install -y net-tools wget vim ntpdate
16) Make sure the kubelet startup configuration lifts the requirement that swap be disabled:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
systemctl daemon-reload
17) Refresh the package cache
yum clean all
yum makecache
yum repolist
Install on the master node:
yum install -y kubelet-1.13.1   # kubelet goes on every node
yum install -y kubeadm-1.13.1
yum install -y kubectl-1.13.1
Enable start-on-boot on every machine first:
systemctl enable kubelet
18) Install k8s:
If cluster initialization runs into problems, the following commands clean up and reset the state.
Tear down the cluster:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Once the node has been removed, reset it with:
kubeadm reset
rm -rf /var/lib/cni/ $HOME/.kube/config
Remove the network interfaces:
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
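kubeadm reset does not flush iptables or IPVS rules; when a completely clean slate is needed, clear them manually:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear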
19) Pull the required images:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
Tag the pulled images so they appear to have been pulled from k8s.gcr.io:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
Delete the duplicate images to save space:
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
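The pull/tag/rmi sequence above can also be driven by one loop over the image list (a minimal sketch using the same images and mirror registry):
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 \
           kube-scheduler:v1.13.1 kube-proxy:v1.13.1 \
           pause:3.1 etcd:3.2.24 coredns:1.2.6; do
  docker pull $MIRROR/$img                    # pull from the Aliyun mirror
  docker tag $MIRROR/$img k8s.gcr.io/$img     # retag as k8s.gcr.io
  docker rmi $MIRROR/$img                     # drop the mirror tag to save space
done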
20) The other nodes need the following images:
k8s.gcr.io/kube-proxy:v1.13.1 quay.io/coreos/flannel:v0.11.0-amd64 k8s.gcr.io/pause:3.1
Save the images downloaded on the master:
docker save -o mynode.gz k8s.gcr.io/kube-proxy:v1.13.1 quay.io/coreos/flannel:v0.11.0-amd64 k8s.gcr.io/pause:3.1
Copy mynode.gz to the other worker nodes:
[root@10-255-20-174 flannel]# scp mynode.gz root@node3:/root
On the worker node, load the images:
docker load -i mynode.gz
Images the node server should now have:
[root@node3 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.13.3 98db19758ad4 3 months ago 80.3MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 3 months ago 52.6MB
k8s.gcr.io/kube-proxy v1.13.1 fdb321fd30a0 4 months ago 80.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.13.1 fdb321fd30a0 4 months ago 80.2MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 16 months ago 742kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 16 months ago 742kB
21) Initialize the cluster on the k8s master node:
kubeadm init --kubernetes-version=v1.13.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap --apiserver-advertise-address=10.255.20.174
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.255.20.174:6443 --token fk2mbj.89et0n59bpvpjhwr --discovery-token-ca-cert-hash sha256:d3bc8ce928d2a8a92d94ff13e531685b6069789066f96a94ec762eb2017682a7
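The join token above expires after 24 hours by default; if it has lapsed before a node joins, print a fresh join command on the master:
kubeadm token create --print-join-command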
[root@10-255-20-174 flannel]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@10-255-20-174 flannel]# mkdir -p $HOME/.kube
[root@10-255-20-174 flannel]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@10-255-20-174 flannel]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Inspect the cluster from the master node:
[root@10-255-20-174 flannel]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
(kubectl get componentstatus is the long form of kubectl get cs and prints the same output.)
[root@10-255-20-174 flannel]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10-255-20-174 NotReady master 109s v1.13.1
[root@10-255-20-174 flannel]#
[root@10-255-20-174 flannel]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-9ztft 0/1 Pending 0 2m58s
coredns-86c58d9df4-kk86r 0/1 Pending 0 2m58s
etcd-10-255-20-174 1/1 Running 0 2m7s
kube-apiserver-10-255-20-174 1/1 Running 0 2m1s
kube-controller-manager-10-255-20-174 1/1 Running 0 114s
kube-proxy-4hd5m 1/1 Running 0 2m59s
kube-scheduler-10-255-20-174 1/1 Running 0 2m18s
22) Install the flannel network plugin
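This assumes kube-flannel.yml is already in the working directory; one way to fetch it (the URL below is the flannel project's manifest path from this era and may have moved since):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml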
[root@10-255-20-174 flannel]# kubectl create -f kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@10-255-20-174 flannel]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-9ztft 1/1 Running 0 3m57s
coredns-86c58d9df4-kk86r 1/1 Running 0 3m57s
etcd-10-255-20-174 1/1 Running 0 3m6s
kube-apiserver-10-255-20-174 1/1 Running 0 3m
kube-controller-manager-10-255-20-174 1/1 Running 0 2m53s
kube-flannel-ds-amd64-ppttr 1/1 Running 0 18s
kube-proxy-4hd5m 1/1 Running 0 3m58s
kube-scheduler-10-255-20-174 1/1 Running 0 3m17s
Check the status:
[root@10-255-20-174 flannel]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10-255-20-174 Ready master 4m55s v1.13.1
[root@10-255-20-174 flannel]#
The k8s master node is now up and running.
Operations on the other k8s worker nodes
1) Node server environment configuration
yum install docker-ce -y
yum clean all
yum makecache
yum repolist
Check the available versions:
yum list --showduplicates | grep 'kubeadm\|kubectl\|kubelet'
yum install -y kubelet-1.13.1
yum install -y kubeadm-1.13.1
Update the kubelet configuration:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
Enable and start the services:
systemctl enable docker
systemctl enable kubelet
systemctl daemon-reload
systemctl start docker
Turn off swap:
swapoff -a
Copy the images over:
[root@10-255-20-174 flannel]# scp mynode.gz root@node3:/root
On the worker node, load the images:
docker load -i mynode.gz
2) Images the node server should have
[root@node3 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.13.3 98db19758ad4 3 months ago 80.3MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 3 months ago 52.6MB
k8s.gcr.io/kube-proxy v1.13.1 fdb321fd30a0 4 months ago 80.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.13.1 fdb321fd30a0 4 months ago 80.2MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 16 months ago 742kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 16 months ago 742kB
3) Join each node to the cluster:
kubeadm join 10.255.20.174:6443 --token fk2mbj.89et0n59bpvpjhwr --discovery-token-ca-cert-hash sha256:d3bc8ce928d2a8a92d94ff13e531685b6069789066f96a94ec762eb2017682a7
[root@10-255-20-174 flannel]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10-255-20-174 Ready master 14m v1.13.1
node1 Ready <none> 35s v1.13.1
node3 Ready <none> 14s v1.13.1
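The <none> under ROLES for the workers just means no role label is set; if desired, one can be added from the master (the label key follows the node-role convention):
kubectl label node node1 node-role.kubernetes.io/node=
kubectl label node node3 node-role.kubernetes.io/node=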