K8S High-Availability Installation Manual

Installing a highly available K8S cluster with kubeadm (final version)


1 Host List

Hostname    CentOS Version    IP               Docker Version
master01    7.6.1810          192.168.59.11    18.09.9
master02    7.6.1810          192.168.59.12    18.09.9
master03    7.6.1810          192.168.59.13    18.09.9
work01      7.6.1810          192.168.59.21    18.09.9
work02      7.6.1810          192.168.59.22    18.09.9
VIP         -                 192.168.59.10    -


There are 5 machines in total: 3 masters and 2 workers. 192.168.59.10 is the keepalived VIP that floats among the masters, not a separate machine.


2 K8S Versions

Hostname    kubelet Version    kubeadm Version    kubectl Version
master01    v1.16.4            v1.16.4            v1.16.4 (optional)
master02    v1.16.4            v1.16.4            v1.16.4 (optional)
master03    v1.16.4            v1.16.4            v1.16.4 (optional)
work01      v1.16.4            v1.16.4            v1.16.4 (optional)
work02      v1.16.4            v1.16.4            v1.16.4 (optional)


3 High-Availability Architecture


[Figure: high-availability architecture diagram]


The cluster is built with kubeadm. The apiservers sit behind a VIP, and kubelet connects to the VIP; that is what provides the high availability.


4 High-Availability Details

Core Component        HA Mode          HA Mechanism
apiserver             active/standby   keepalived
controller-manager    active/standby   leader election
scheduler             active/standby   leader election
etcd                  cluster          kubeadm


  • apiserver achieves high availability through keepalived; when the active node fails, keepalived moves the VIP to another node.
  • controller-manager elects a leader inside k8s (controlled by the --leader-elect option, default true); only one controller-manager instance is active in the cluster at any moment.
  • scheduler likewise elects a leader inside k8s (--leader-elect, default true); only one scheduler instance is active at any moment.
  • etcd becomes highly available through the cluster that kubeadm creates automatically; deploy an odd number of nodes, since a 3-node cluster tolerates at most one machine failure.


5 Preparation


Run the following on all machines: master01, master02, master03, work01, work02.


5.1 Set Hostnames


#Run the matching command on each corresponding host
hostnamectl set-hostname master01

hostnamectl set-hostname master02

hostnamectl set-hostname master03

hostnamectl set-hostname work01

hostnamectl set-hostname work02

systemctl restart systemd-hostnamed


5.2 Edit the hosts File


cat >> /etc/hosts << EOF

192.168.59.11 master01

192.168.59.12 master02

192.168.59.13 master03

192.168.59.21 work01

192.168.59.22 work02

EOF


5.3 Configure DNS


#Note: NetworkManager may overwrite /etc/resolv.conf; if that happens, configure DNS in the interface config instead
cat >> /etc/resolv.conf << EOF

nameserver 114.114.114.114

nameserver 8.8.8.8

EOF


5.4 Disable SELinux


 sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
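

The sed change only takes effect after a reboot; to disable SELinux for the current session as well, additionally run:


setenforce 0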


5.5 Disable firewalld


#Check the firewall status

systemctl status firewalld.service

#If the status shows it running, it needs to be stopped

Active: active (running)

#Stop the firewall

systemctl stop firewalld

#Check the status again to confirm

#Disable the service so it does not start at boot

systemctl disable firewalld


5.6 Flush the iptables Rules


yum -y install iptables-services

#Install iptables-services

systemctl start iptables

#Start iptables

systemctl enable iptables

#Enable iptables at boot

iptables -F

service iptables save

#Flush the rules and save


5.7 Disable Swap


sed -i.bak '/swap/s/^/#/' /etc/fstab

swapoff -a
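

To verify the change took effect, the Swap line of free -m should now read all zeros:


free -m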


5.8 Load Kernel Modules


The cluster network will use flannel, which requires the kernel br_netfilter module.


#Check whether the module is already loaded

lsmod |grep br_netfilter

#Load the module permanently (note the quoted 'EOF', so $file is written literally instead of being expanded by the current shell)

cat > /etc/rc.sysinit << 'EOF'

#!/bin/bash

for file in /etc/sysconfig/modules/*.modules ; do

[ -x $file ] && $file

done

EOF


cat > /etc/sysconfig/modules/br_netfilter.modules << EOF

modprobe br_netfilter

EOF


chmod 755 /etc/sysconfig/modules/br_netfilter.modules
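

The script above only runs at boot. To load the module immediately and confirm it (standard commands, added here for convenience):


modprobe br_netfilter
lsmod |grep br_netfilter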


5.9 Set Kernel Parameters


cat << EOF > /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

vm.swappiness=0

net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

vm.overcommit_memory=1

vm.panic_on_oom=0

fs.inotify.max_user_instances=8192

fs.inotify.max_user_watches=1048576

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

EOF
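

Apply the parameters right away without waiting for a reboot (files under /etc/sysctl.d/ are also applied automatically at boot):


sysctl --system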


5.10 Install Common Packages


yum -y install epel-release vim net-tools gcc gcc-c++ glibc htop atop iftop iotop nethogs lrzsz telnet ipvsadm ipset conntrack libnl-devel libnfnetlink-devel openssl openssl-devel ntpdate ntp jq iptables curl sysstat libseccomp wget git


5.11 Synchronize Time


yum install ntpdate -y && /usr/sbin/ntpdate -u ntp1.aliyun.com
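

ntpdate performs a one-shot sync. To keep clocks aligned over time, a cron entry can rerun it periodically (a sketch; the 30-minute interval and the Aliyun NTP server are assumptions you may tune):


(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate -u ntp1.aliyun.com") | crontab -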


5.12 Stop Unneeded Services


systemctl stop postfix && systemctl disable postfix


5.13 Configure the K8S YUM Repository


5.13.1 Add the Aliyun Repository


cat << EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF


  • [] the square brackets hold the repository id, which must be unique and identifies the repository
  • name is the repository name, free-form
  • baseurl is the repository URL
  • enabled controls whether the repository is used; 1 (the default) means enabled
  • gpgcheck controls whether packages from this repository are signature-verified; 1 means verify
  • repo_gpgcheck controls whether the repository metadata (the package list) is verified; 1 means verify
  • gpgkey=URL is the location of the public key used for signature checks; required when gpgcheck is 1, unnecessary when it is 0


5.13.2 Refresh the Cache


yum clean all && yum -y makecache


5.14 Passwordless SSH


Configure passwordless authentication on master01 so it can conveniently connect to master02 and master03.


[root@master01 ~]#ssh-keygen -t rsa

[root@master01 ~]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@master02

[root@master01 ~]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@master03
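

A quick check that key-based login works:


#Confirm passwordless login; each command should print the remote hostname without a password prompt
[root@master01 ~]#ssh master02 hostname
[root@master01 ~]#ssh master03 hostname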


6 Docker Installation


Install on master01, master02, master03, work01, and work02.


6.1 Install Dependencies


yum install -y yum-utils   device-mapper-persistent-data  lvm2


6.2 Add the Docker Repository


yum-config-manager --add-repo     https://download.docker.com/linux/centos/docker-ce.repo


6.3 List Available docker-ce Versions


yum list docker-ce --showduplicates | sort -r

Loaded plugins: fastestmirror

Loading mirror speeds from cached hostfile

 * base: mirrors.aliyun.com

 * extras: mirrors.aliyun.com

 * updates: mirrors.aliyun.com

Available Packages

docker-ce.x86_64            3:19.03.9-3.el7                     docker-ce-stable

docker-ce.x86_64            3:19.03.8-3.el7                     docker-ce-stable

docker-ce.x86_64            3:19.03.7-3.el7                     docker-ce-stable

docker-ce.x86_64            3:19.03.6-3.el7                     docker-ce-stable

docker-ce.x86_64            3:19.03.5-3.el7                     docker-ce-stable

docker-ce.x86_64            3:19.03.4-3.el7                     docker-ce-stable

docker-ce.x86_64            3:19.03.3-3.el7                     docker-ce-stable

docker-ce.x86_64            3:19.03.2-3.el7                     docker-ce-stable

docker-ce.x86_64            3:19.03.1-3.el7                     docker-ce-stable

docker-ce.x86_64            3:19.03.13-3.el7                    docker-ce-stable

docker-ce.x86_64            3:19.03.12-3.el7                    docker-ce-stable

docker-ce.x86_64            3:19.03.11-3.el7                    docker-ce-stable

docker-ce.x86_64            3:19.03.10-3.el7                    docker-ce-stable

docker-ce.x86_64            3:19.03.0-3.el7                     docker-ce-stable

docker-ce.x86_64            3:18.09.9-3.el7                     docker-ce-stable

docker-ce.x86_64            3:18.09.8-3.el7                     docker-ce-stable

docker-ce.x86_64            3:18.09.7-3.el7                     docker-ce-stable

docker-ce.x86_64            3:18.09.6-3.el7                     docker-ce-stable

docker-ce.x86_64            3:18.09.5-3.el7                     docker-ce-stable

docker-ce.x86_64            3:18.09.4-3.el7                     docker-ce-stable

docker-ce.x86_64            3:18.09.3-3.el7                     docker-ce-stable

docker-ce.x86_64            3:18.09.2-3.el7                     docker-ce-stable

docker-ce.x86_64            3:18.09.1-3.el7                     docker-ce-stable

docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable

docker-ce.x86_64            18.06.3.ce-3.el7                    docker-ce-stable

docker-ce.x86_64            18.06.2.ce-3.el7                    docker-ce-stable

docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable

docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable

docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable

docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable

docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable

docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable

docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable

docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable

docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable

docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable

docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable

docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable

docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable

docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable

docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable



6.4 Install the Specified Docker Version


yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y


6.5 Start Docker


systemctl start docker

systemctl enable docker


7 Install Command Completion


Install on all machines.


 yum -y install bash-completion

 source /etc/profile.d/bash_completion.sh


8 Registry Mirror Acceleration


Configure on all machines.


Pulling from the default Docker registry is very slow, so use Aliyun's mirror accelerator as the registry mirror.


Log in at https://cr.console.aliyun.com to obtain your accelerator address; register an Aliyun account first if you do not have one.


[Figure: Aliyun console page showing the mirror accelerator address]


8.1 Configure the Accelerator


mkdir -p /etc/docker


cat > /etc/docker/daemon.json <<EOF

{

  "registry-mirrors": ["https://lss3ndia.mirror.aliyuncs.com"],

  "exec-opts": ["native.cgroupdriver=systemd"],

  "log-driver": "json-file",

  "log-opts": {

    "max-size": "100m"

  },

  "storage-driver": "overlay2"

}

EOF


8.2 Restart Docker


systemctl daemon-reload && systemctl restart docker


8.3 Verify


 docker --version

 Docker version 18.09.9, build 039a7df9ba
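

daemon.json above set the cgroup driver to systemd; confirming it took effect on every node avoids kubelet/docker cgroup-driver mismatches later:


docker info | grep -i cgroup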


9 Keepalived Installation


Install on master01, master02, and master03.


yum -y install keepalived


9.1 Configure Keepalived


On master01 (the interface below must match the actual NIC name, ens32 in this environment):


[root@master01 ~]#more /etc/keepalived/keepalived.conf 

! Configuration File for keepalived

global_defs {

   router_id master01

}

vrrp_instance VI_1 {

    state MASTER

    interface ens32

    virtual_router_id 50

    priority 100

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.59.10

    }

}


On master02:


[root@master02 ~]#more /etc/keepalived/keepalived.conf 

! Configuration File for keepalived

global_defs {

   router_id master02

}

vrrp_instance VI_1 {

    state BACKUP 

    interface ens32

    virtual_router_id 50

    priority 90

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.59.10

    }

}


On master03:


[root@master03 ~]#more /etc/keepalived/keepalived.conf    

! Configuration File for keepalived

global_defs {

   router_id master03

}

vrrp_instance VI_1 {

    state BACKUP 

    interface ens32

    virtual_router_id 50

    priority 80

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.59.10

    }

}


9.2 Start the Service


service keepalived start

systemctl enable keepalived
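

Confirm the VIP has landed on master01 (the interface name follows the keepalived config above; adjust ens32 if yours differs):


[root@master01 ~]#ip addr show ens32 | grep 192.168.59.10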


10 Install K8S Components


Install on all nodes.


10.1 Check Available Versions


This manual installs kubelet 1.16.4, which supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09.


yum list kubelet --showduplicates | sort -r


10.2 Install the Specified Version


Install version 1.16.4 here:


yum install -y kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4


10.3 Package Descriptions


kubelet runs on every node in the cluster and is responsible for starting Pods, containers, and similar objects

kubeadm initializes and bootstraps the cluster

kubectl is the command line for talking to the cluster; with kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components


10.4 Start kubelet


#kubelet will restart in a loop until kubeadm initializes the cluster; that is expected
systemctl enable kubelet && systemctl start kubelet


10.5 kubectl Command Completion


echo "source <(kubectl completion bash)" >> ~/.bash_profile

source ~/.bash_profile


11 Download the Images


The default k8s images are hosted on Google's registry, which is unreachable from here, so download from a mirror shared on the internet.


Added Oct 28: all of the images have since been packed into k8s_1.16.4.tar.gz; simply import that archive instead if you prefer.


On all masters and all workers:


docker pull registry.cn-hangzhou.aliyuncs.com/loong576/kube-apiserver:v1.16.4

docker pull registry.cn-hangzhou.aliyuncs.com/loong576/kube-controller-manager:v1.16.4

docker pull registry.cn-hangzhou.aliyuncs.com/loong576/kube-scheduler:v1.16.4

docker pull registry.cn-hangzhou.aliyuncs.com/loong576/kube-proxy:v1.16.4

docker pull registry.cn-hangzhou.aliyuncs.com/loong576/pause:3.1

docker pull registry.cn-hangzhou.aliyuncs.com/loong576/etcd:3.3.15-0

docker pull registry.cn-hangzhou.aliyuncs.com/loong576/coredns:1.6.2


11.1 Check the Downloaded Images


[root@master01 ~]#docker images

REPOSITORY                                                           TAG                 IMAGE ID            CREATED             SIZE

hello-world                                                          latest              bf756fb1ae65        9 months ago        13.3kB

registry.cn-hangzhou.aliyuncs.com/loong576/kube-apiserver            v1.16.4             3722a80984a0        10 months ago       217MB

registry.cn-hangzhou.aliyuncs.com/loong576/kube-controller-manager   v1.16.4             fb4cca6b4e4c        10 months ago       163MB

registry.cn-hangzhou.aliyuncs.com/loong576/kube-scheduler            v1.16.4             2984964036c8        10 months ago       87.3MB

registry.cn-hangzhou.aliyuncs.com/loong576/kube-proxy                v1.16.4             091df896d78f        10 months ago       86.1MB

registry.cn-hangzhou.aliyuncs.com/loong576/etcd                      3.3.15-0            b2756210eeab        13 months ago       247MB

registry.cn-hangzhou.aliyuncs.com/loong576/coredns                   1.6.2               bf261d157914        14 months ago       44.1MB

registry.cn-hangzhou.aliyuncs.com/loong576/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB


11.2 Retag the Images


During initialization kubeadm checks for these images by name; if they are not retagged, it will still try to pull them from the default Google registry.


docker tag registry.cn-hangzhou.aliyuncs.com/loong576/kube-apiserver:v1.16.4 k8s.gcr.io/kube-apiserver:v1.16.4


docker tag registry.cn-hangzhou.aliyuncs.com/loong576/kube-controller-manager:v1.16.4 k8s.gcr.io/kube-controller-manager:v1.16.4


docker tag registry.cn-hangzhou.aliyuncs.com/loong576/kube-scheduler:v1.16.4 k8s.gcr.io/kube-scheduler:v1.16.4


docker tag registry.cn-hangzhou.aliyuncs.com/loong576/kube-proxy:v1.16.4 k8s.gcr.io/kube-proxy:v1.16.4


docker tag registry.cn-hangzhou.aliyuncs.com/loong576/etcd:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0


docker tag registry.cn-hangzhou.aliyuncs.com/loong576/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2


docker tag registry.cn-hangzhou.aliyuncs.com/loong576/pause:3.1 k8s.gcr.io/pause:3.1
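

Typing each pull/tag pair by hand is tedious; the loop below is an equivalent sketch (it assumes the same Aliyun repository and image list as above):


REPO=registry.cn-hangzhou.aliyuncs.com/loong576
for img in kube-apiserver:v1.16.4 kube-controller-manager:v1.16.4 kube-scheduler:v1.16.4 kube-proxy:v1.16.4 pause:3.1 etcd:3.3.15-0 coredns:1.6.2; do
    docker pull ${REPO}/${img}                     #pull from the Aliyun mirror
    docker tag ${REPO}/${img} k8s.gcr.io/${img}    #retag to the name kubeadm expects
done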


#Confirm the following images exist

[root@master01 ~]#docker images | grep k8s

k8s.gcr.io/kube-apiserver                                            v1.16.4             3722a80984a0        10 months ago       217MB

k8s.gcr.io/kube-controller-manager                                   v1.16.4             fb4cca6b4e4c        10 months ago       163MB

k8s.gcr.io/kube-scheduler                                            v1.16.4             2984964036c8        10 months ago       87.3MB

k8s.gcr.io/kube-proxy                                                v1.16.4             091df896d78f        10 months ago       86.1MB

k8s.gcr.io/etcd                                                      3.3.15-0            b2756210eeab        13 months ago       247MB

k8s.gcr.io/coredns                                                   1.6.2               bf261d157914        14 months ago       44.1MB

k8s.gcr.io/pause                                                     3.1                 da86e6ba6ca1        2 years ago         742kB


12 Initialize the Master


Run this on master01 only.


12.1 Write the kubeadm Config File


#certSANs should list every apiserver hostname and IP, plus any addresses that may be added later

#controlPlaneEndpoint: the VIP address

#podSubnet: "10.244.0.0/16" is the pod network CIDR used by flannel later

[root@master01 ~]#more kubeadm-config.yaml 

apiVersion: kubeadm.k8s.io/v1beta2

kind: ClusterConfiguration

kubernetesVersion: v1.16.4

apiServer:

  certSANs:    

  - master01

  - master02

  - master03

  - work01

  - work02

  - work03

  - 192.168.59.10

  - 192.168.59.11

  - 192.168.59.12

  - 192.168.59.13

  - 192.168.59.21

  - 192.168.59.22

controlPlaneEndpoint: "192.168.59.10:6443"

networking:

  podSubnet: "10.244.0.0/16"


12.2 Initialize


#Initialize, teeing the output into k8s_install.log so the join tokens can be looked up later when adding nodes

[root@master01 ~]#kubeadm init --config=kubeadm-config.yaml | tee k8s_install.log


12.3 Post-initialization Steps


[root@master01 ~]#mkdir -p $HOME/.kube

[root@master01 ~]#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@master01 ~]#sudo chown $(id -u):$(id -g) $HOME/.kube/config


12.4 Load Environment Variables


echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

source ~/.bash_profile


13 Install Flannel


Run on master01 only.


[root@master01 ~]#kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged created

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

serviceaccount/flannel created

configmap/kube-flannel-cfg created

daemonset.apps/kube-flannel-ds-amd64 created

daemonset.apps/kube-flannel-ds-arm64 created

daemonset.apps/kube-flannel-ds-arm created

daemonset.apps/kube-flannel-ds-ppc64le created

daemonset.apps/kube-flannel-ds-s390x created


Once flannel is installed, the cluster (still with a single apiserver node) is already functional; verify it:


[root@master01 ~]#kubectl get pods -n kube-system

NAME                               READY   STATUS    RESTARTS   AGE

coredns-5644d7b6d9-lkq9x           1/1     Running   0          4m59s

coredns-5644d7b6d9-rjtl8           1/1     Running   0          4m59s

etcd-master01                      1/1     Running   0          4m3s

kube-apiserver-master01            1/1     Running   0          3m59s

kube-controller-manager-master01   1/1     Running   0          3m49s

kube-flannel-ds-amd64-mcccp        1/1     Running   0          57s

kube-proxy-l48f6                   1/1     Running   0          4m59s

kube-scheduler-master01            1/1     Running   0          3m51s


14 Add the Other Master Nodes


On master02 and master03.


14.1 Copy the Certificates


The certificates must be copied to the other nodes before they can join the cluster.


cd /etc/kubernetes/pki/

#Copy the 6 certificate files

scp /etc/kubernetes/pki/ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key root@master02:/root/


scp /etc/kubernetes/pki/ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key root@master03:/root/


#Copy the two etcd certificate files

#First create a target directory on each destination host

[root@master02 ~]#mkdir etcd

[root@master03 ~]#mkdir etcd


[root@master01 /etc/kubernetes/pki]#scp /etc/kubernetes/pki/etcd/ca.* root@master02:/root/etcd


[root@master01 /etc/kubernetes/pki]#scp /etc/kubernetes/pki/etcd/ca.* root@master03:/root/etcd
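

The same copies can be done in one pass with a short loop (a sketch, run from /etc/kubernetes/pki on master01, relying on the passwordless SSH set up in 5.14):


for host in master02 master03; do
    ssh root@${host} mkdir -p /root/etcd
    scp ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key root@${host}:/root/
    scp etcd/ca.crt etcd/ca.key root@${host}:/root/etcd/
done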


14.2 Move the Certificates into Place


On master02 and master03, move the certificates copied above into their proper directories.


#Create the certificate directories

[root@master02 ~]#mkdir -p /etc/kubernetes/pki/etcd

[root@master03 ~]#mkdir -p /etc/kubernetes/pki/etcd


#Move the certificates

[root@master03 ~]#mv ca.* sa.* front-* /etc/kubernetes/pki/

[root@master03 ~]#mv etcd/ca.* /etc/kubernetes/pki/etcd/


[root@master02 ~]#mv ca.* sa.* front-* /etc/kubernetes/pki/

[root@master02 ~]#mv etcd/ca.* /etc/kubernetes/pki/etcd/


14.3 Join master02 to the Cluster


#Look up the join commands in the log file generated when master01 was initialized

You can now join any number of control-plane nodes by copying certificate authorities 

and service account keys on each node and then running the following as root:


  kubeadm join 192.168.59.10:6443 --token 1l0i97.wf409vm48u2qlb77 \

    --discovery-token-ca-cert-hash sha256:db8729d532f23e25c63df17f7cfaca073778f4ea5d4e0c88a9392b61b4a0eba0 \

    --control-plane       


Then you can join any number of worker nodes by running the following on each as root:


kubeadm join 192.168.59.10:6443 --token 1l0i97.wf409vm48u2qlb77 \

    --discovery-token-ca-cert-hash sha256:db8729d532f23e25c63df17f7cfaca073778f4ea5d4e0c88a9392b61b4a0eba0
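

The token printed at init time expires after 24 hours by default. If it has expired before a node joins, generate a fresh worker join command on master01 (append --control-plane for a master):


[root@master01 ~]#kubeadm token create --print-join-command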


Now run the join on master02:


#Before running the join, retag the docker images on this node as before; confirm they are present:

[root@master02 ~]#docker images | grep k8s

k8s.gcr.io/kube-apiserver                                            v1.16.4             3722a80984a0        10 months ago       217MB

k8s.gcr.io/kube-controller-manager                                   v1.16.4             fb4cca6b4e4c        10 months ago       163MB

k8s.gcr.io/kube-proxy                                                v1.16.4             091df896d78f        10 months ago       86.1MB

k8s.gcr.io/kube-scheduler                                            v1.16.4             2984964036c8        10 months ago       87.3MB

k8s.gcr.io/etcd                                                      3.3.15-0            b2756210eeab        13 months ago       247MB

k8s.gcr.io/coredns                                                   1.6.2               bf261d157914        14 months ago       44.1MB

k8s.gcr.io/pause                                                     3.1                 da86e6ba6ca1        2 years ago         742kB


kubeadm join 192.168.59.10:6443 --token 1l0i97.wf409vm48u2qlb77     --discovery-token-ca-cert-hash sha256:db8729d532f23e25c63df17f7cfaca073778f4ea5d4e0c88a9392b61b4a0eba0     --control-plane


14.4 Join master03 to the Cluster


Repeat the same steps as for master02.


14.5 After Both Masters Have Joined


Run on master02 and master03:


To start administering your cluster from this node, you need to run the following as a regular user:

#Run the following

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config


Then on master01:


#Copy the admin.conf file from master01 to master02 and master03

[root@master01 ~]#scp /etc/kubernetes/admin.conf master02:/etc/kubernetes/

  

[root@master01 ~]#scp /etc/kubernetes/admin.conf master03:/etc/kubernetes/


On master02 and master03:


#This is what lets master02 and master03 run kubectl commands

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

source ~/.bash_profile


15 Verify the Cluster


#On master01

[root@master01 ~]#kubectl get nodes

NAME       STATUS   ROLES    AGE     VERSION

master01   Ready    master   41m     v1.16.4

master02   Ready    master   4m53s   v1.16.4

master03   Ready    master   4m53s   v1.16.4


#On master02

[root@master02 ~]#kubectl get nodes

NAME       STATUS   ROLES    AGE     VERSION

master01   Ready    master   41m     v1.16.4

master02   Ready    master   4m53s   v1.16.4

master03   Ready    master   4m53s   v1.16.4


#On master03

[root@master03 ~]#kubectl get nodes

NAME       STATUS   ROLES    AGE     VERSION

master01   Ready    master   41m     v1.16.4

master02   Ready    master   4m53s   v1.16.4

master03   Ready    master   4m53s   v1.16.4


[root@master01 ~]#kubectl get po -o wide -n kube-system 

NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE       NOMINATED NODE   READINESS GATES

coredns-5644d7b6d9-lkq9x           1/1     Running   0          41m     10.244.0.2      master01   <none>           <none>

coredns-5644d7b6d9-rjtl8           1/1     Running   0          41m     10.244.0.3      master01   <none>           <none>

etcd-master01                      1/1     Running   0          40m     192.168.59.11   master01   <none>           <none>

etcd-master02                      1/1     Running   0          5m36s   192.168.59.12   master02   <none>           <none>

etcd-master03                      1/1     Running   0          5m21s   192.168.59.13   master03   <none>           <none>

kube-apiserver-master01            1/1     Running   0          40m     192.168.59.11   master01   <none>           <none>

kube-apiserver-master02            1/1     Running   0          5m36s   192.168.59.12   master02   <none>           <none>

kube-apiserver-master03            1/1     Running   0          4m19s   192.168.59.13   master03   <none>           <none>

kube-controller-manager-master01   1/1     Running   1          40m     192.168.59.11   master01   <none>           <none>

kube-controller-manager-master02   1/1     Running   0          5m36s   192.168.59.12   master02   <none>           <none>

kube-controller-manager-master03   1/1     Running   0          4m30s   192.168.59.13   master03   <none>           <none>

kube-flannel-ds-amd64-6v67w        1/1     Running   0          5m29s   192.168.59.13   master03   <none>           <none>

kube-flannel-ds-amd64-9c75g        1/1     Running   0          5m37s   192.168.59.12   master02   <none>           <none>

kube-flannel-ds-amd64-mcccp        1/1     Running   0          37m     192.168.59.11   master01   <none>           <none>

kube-proxy-4mxlf                   1/1     Running   0          5m29s   192.168.59.13   master03   <none>           <none>

kube-proxy-hrlsn                   1/1     Running   0          5m37s   192.168.59.12   master02   <none>           <none>

kube-proxy-l48f6                   1/1     Running   0          41m     192.168.59.11   master01   <none>           <none>

kube-scheduler-master01            1/1     Running   1          40m     192.168.59.11   master01   <none>           <none>

kube-scheduler-master02            1/1     Running   0          5m36s   192.168.59.12   master02   <none>           <none>

kube-scheduler-master03            1/1     Running   0          4m33s   192.168.59.13   master03   <none>           <none>




You can see 3 etcd, 3 apiserver, 3 scheduler, 3 controller-manager, 3 flannel, and 3 kube-proxy pods, plus 2 coredns pods.


16 Add the Worker Nodes


16.1 Look Up the Join Command


kubeadm join 192.168.59.10:6443 --token 1l0i97.wf409vm48u2qlb77 \

    --discovery-token-ca-cert-hash sha256:db8729d532f23e25c63df17f7cfaca073778f4ea5d4e0c88a9392b61b4a0eba0


16.2 Join on work01


kubeadm join 192.168.59.10:6443 --token 1l0i97.wf409vm48u2qlb77     --discovery-token-ca-cert-hash sha256:db8729d532f23e25c63df17f7cfaca073778f4ea5d4e0c88a9392b61b4a0eba0


16.3 Join on work02


kubeadm join 192.168.59.10:6443 --token 1l0i97.wf409vm48u2qlb77     --discovery-token-ca-cert-hash sha256:db8729d532f23e25c63df17f7cfaca073778f4ea5d4e0c88a9392b61b4a0eba0


17 Inspect the Cluster


[Figure: initial cluster status after the workers joined, with flannel and proxy pods failing]


The flannel and kube-proxy pods could not be created on the workers. The logs showed failing image pulls, which pointed back to the earlier downloads: the images on work01 and work02 had never been retagged. Retagging them there (as in section 11.2) fixed it.


[Figure: all pods running normally after the retag]


18 Install the Dashboard


Install the dashboard from work01.


18.1 Download the YAML File


wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml


18.2 Edit the YAML File


#The default image repository (kubernetesui on Docker Hub) is unreachable; switch to the Aliyun mirror

sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml


#Expose the service on NodePort 30001

sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
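

sed surgery on YAML is fragile, so check the result by eye; after the two commands above, the kubernetes-dashboard Service section should look roughly like this (a sketch of the expected shape, not the complete file):


kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard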


cat >> recommended.yaml << EOF

---

# ------------------- dashboard-admin ------------------- #

apiVersion: v1

kind: ServiceAccount

metadata:

  name: dashboard-admin

  namespace: kubernetes-dashboard


---

apiVersion: rbac.authorization.k8s.io/v1beta1

kind: ClusterRoleBinding

metadata:

  name: dashboard-admin

subjects:

- kind: ServiceAccount

  name: dashboard-admin

  namespace: kubernetes-dashboard

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

EOF


18.3 Start the Dashboard


[root@work01 ~]#kubectl apply -f recommended.yaml

namespace/kubernetes-dashboard created

serviceaccount/kubernetes-dashboard created

service/kubernetes-dashboard created

secret/kubernetes-dashboard-certs created

secret/kubernetes-dashboard-csrf created

secret/kubernetes-dashboard-key-holder created

configmap/kubernetes-dashboard-settings created

role.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

deployment.apps/kubernetes-dashboard created

service/dashboard-metrics-scraper created

deployment.apps/dashboard-metrics-scraper created


18.4 Check the Status


[root@work01 ~]#kubectl get all -n kubernetes-dashboard


[Figure: kubectl get all -n kubernetes-dashboard output]


18.5 Get the Token


kubectl describe secrets -n kubernetes-dashboard dashboard-admin


token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IldBNTktTWUxVUtTMm1BaWNzeEE2eFZWcGEtMjlZMDlfLUt4WmJpWEMtYlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZDU4NmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzY0NzRjNWItNWUzYy00MDFhLWI2NzktYWVlMmRlYjg2MzQ3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.CkJVRmU_41JAqHa-jzl8m-Jzh7qr4Ct-jXY-LUzAg0ilR48wVQHUl48D1j-eHYU5_POdSgsoEJwGD77gy8AeDgjiF6BHbknRci5z-XA3x1WmMkziTkjUjRC2hsvi81zGSDRCgfP4iNotggg361yXbhokjwq82W6jWPSUskOvttpVAN7px3hc34bjvMJTWXaoAtWem29BGoi-FjQUF2nOJD5JqoKO7k5LNwgylMWqcsMeNDU9aJQSWy3axZP7BEUnhPiCMi94MjNeqYXDrREZO0GWvJVvGF8V8lj_w0TdTDwp8zJoZqe6N2IdWeRNc834949EACAHKrizXQO0QdccZg
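

To extract just the token string non-interactively (a one-liner sketch, assuming the dashboard-admin ServiceAccount created above):


kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d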


19 Failure Simulation


19.1 Shut Down master01


#First check which node currently holds the scheduler lease

[root@master01 ~]#kubectl get endpoints kube-scheduler -n kube-system

#Shut down master01

[root@master01 ~]#init 0

#Check again from another master

[root@master02 ~]#kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity

    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master02_a802b7d7-27df-46fb-8422-3c08115fb76f","leaseDurationSeconds":15,"acquireTime":"2020-10-27T03:59:37Z","renewTime":"2020-10-27T04:01:19Z","leaderTransitions":2}'



#Check the cluster nodes

[root@master02 ~]#kubectl get nodes

NAME       STATUS     ROLES    AGE   VERSION

master01   NotReady   master   88m   v1.16.4

master02   Ready      master   51m   v1.16.4

master03   Ready      master   51m   v1.16.4

work01     Ready      <none>   42m   v1.16.4

work02     Ready      <none>   31m   v1.16.4

#master01 is now NotReady

#The VIP has also floated over to master02


20 Create Pods


#Create the nginx.yaml file

apiVersion: apps/v1             #this manifest uses the apps/v1 Kubernetes API

kind: Deployment                #the resource type to create is a Deployment

metadata:                       #metadata for this resource

  name: nginx-master            #name of the Deployment

spec:                           #Deployment spec

  selector:

    matchLabels:

      app: nginx

  replicas: 3                   #three replicas

  template:                     #the Pod template

    metadata:                   #Pod metadata

      labels:                   #labels

        app: nginx              #label with key app and value nginx

    spec:                       #Pod spec

      containers:

      - name: nginx             #container name

        image: nginx:latest     #image used to create the container


kubectl apply -f nginx.yaml 


kubectl get po -o wide
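

The Deployment only gives the nginx pods cluster-internal addresses. To reach them from outside, a Service is needed as well; below is a minimal NodePort sketch (the Service name and port 30080 are illustrative, not part of the original steps):


apiVersion: v1
kind: Service
metadata:
  name: nginx-master-svc        #hypothetical name
spec:
  type: NodePort
  selector:
    app: nginx                  #matches the Deployment's pod label
  ports:
  - port: 80                    #service port inside the cluster
    targetPort: 80              #container port
    nodePort: 30080             #exposed on every node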


[Figure: the three nginx pods scheduled across the nodes]
