Quickly Deploying a Highly Available Kubernetes Cluster with kubeadm

 

1. Server Planning

Environment requirements:

    3 RHEL 7.8 machines (two masters, one worker)

    Hardware: 2 CPUs / 2 GB RAM / 30 GB disk or more

Server and IP plan:

Role           Hostname   IP               Components
master node    master73   192.168.27.73    keepalived, HAProxy, master components
master node    master74   192.168.27.74    keepalived, HAProxy, master components
VIP            master     192.168.27.70    (virtual IP only, held by keepalived)
worker node    node75     192.168.27.75    node components

The VIP hostname "master" is the name used as the controlPlaneEndpoint in the kubeadm configuration below.

2. Initialize the Operating System (all machines)

# Stop and disable the firewall

systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux

setenforce 0    # takes effect immediately
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config    # persists across reboots

# Disable swap

swapoff -a    # takes effect immediately
sed -i 's/.*swap.*/#&/' /etc/fstab    # persists across reboots

 

# Add hosts entries

cat >> /etc/hosts << EOF

192.168.27.73    master73

192.168.27.74    master74

192.168.27.70    master

192.168.27.75    node75

EOF

 

# Pass bridged IPv4 traffic to iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF
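On RHEL 7 these keys only exist once the br_netfilter kernel module is loaded, so load it before applying (a small addition to the original steps; modules-load.d is the standard systemd mechanism for persisting it):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # reload the module on boot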

sysctl --system

 

# Time synchronization

yum install -y ntpdate

ntpdate time.windows.com    # substitute a time server appropriate for your environment

 

# Reboot

reboot

 

 

3. Deploy keepalived + HAProxy on the Master Nodes (all master nodes)

Deploy keepalived

yum install -y conntrack-tools libseccomp libtool-ltdl

yum install -y keepalived

Write the keepalived configuration file. The file below is for the primary master (state MASTER, priority 250); on the second master, change state to BACKUP and lower the priority (e.g. 200) so the VIP has a deterministic owner.

Back up the original file:

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.old

 

cat > /etc/keepalived/keepalived.conf <<EOF

! Configuration File for keepalived

global_defs {

    router_id k8s

}

 

vrrp_script check_haproxy {

    script "killall -0 haproxy"

    interval 3

    weight -2

    fall 10

    rise 2

}

 

vrrp_instance VI_1 {
    state MASTER
    interface ens33    # NIC device name; change to match your interface
    virtual_router_id 51
    advert_int 1
    priority 250
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }

    virtual_ipaddress {
        192.168.27.70    # the virtual IP address
    }

    track_script {
        check_haproxy
    }
}

EOF
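The check script above relies on killall, which comes from the psmisc package and may be absent on a minimal install:

yum install -y psmisc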

systemctl enable keepalived

systemctl start keepalived
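Verify that the VIP is bound on the active node (ens33 is the interface assumed in the configuration above):

ip addr show ens33 | grep 192.168.27.70    # the VIP should appear on the active master only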

 

Deploy HAProxy

yum install -y haproxy

cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.old

cat > /etc/haproxy/haproxy.cfg <<EOF

 

global

    log         127.0.0.1 local2

 

    chroot      /var/lib/haproxy

    pidfile     /var/run/haproxy.pid

    maxconn     4000

    user        haproxy

    group       haproxy

    daemon

 

    # turn on stats unix socket

    stats socket /var/lib/haproxy/stats

 

defaults

    mode                    http

    log                     global

    option                  httplog

    option                  dontlognull

    option http-server-close

    option forwardfor       except 127.0.0.0/8

    option                  redispatch

    retries                 3

    timeout http-request    10s

    timeout queue           1m

    timeout connect         10s

    timeout client          1m

    timeout server          1m

    timeout http-keep-alive 10s

    timeout check           10s

    maxconn                 3000

frontend kubernetes-apiserver

    mode                    tcp

    bind                    *:16443

    option                  tcplog

    default_backend         kubernetes-apiserver

 

 

backend kubernetes-apiserver

    mode        tcp

    balance     roundrobin

    server      master73  192.168.27.73:6443 check

    server      master74  192.168.27.74:6443 check

 

listen stats

    bind            *:1080

    stats auth      admin:awesomePassword

    stats refresh   5s

    stats realm     HAProxy\ Statistics

    stats uri       /admin?stats

EOF
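Validate the configuration syntax before starting:

haproxy -c -f /etc/haproxy/haproxy.cfg    # prints "Configuration file is valid" on success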

systemctl enable haproxy

systemctl start haproxy
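Verify that HAProxy is listening on the apiserver frontend and stats ports:

ss -lnt | grep -E ':(16443|1080)'    # both ports should be in LISTEN state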

 

 

 

 

4. Install docker-ce / kubelet / kubeadm

 

# Download the yum repo

yum install -y wget

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

yum clean all

# Install dependencies:

yum install -y policycoreutils-python

wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm

rpm -ivh container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm

yum install -y docker-ce-18.06.1.ce-3.el7

mkdir -p /etc/docker    # the directory may not exist until Docker first starts

cat > /etc/docker/daemon.json <<EOF

{

    "exec-opts":["native.cgroupdriver=systemd"],

    "registry-mirrors":["https://b9pmyelo.mirror.aliyuncs.com"]

}

EOF

systemctl daemon-reload

systemctl start docker

systemctl enable docker
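Confirm that Docker is using the systemd cgroup driver, which must match the kubelet's:

docker info | grep -i 'cgroup driver'    # expect: Cgroup Driver: systemd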

 

Configure the Kubernetes yum repo

cat > /etc/yum.repos.d/kubernetes.repo <<EOF

[kubernetes]

name=kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

EOF

yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3

Enable kubelet to start on boot (it will keep restarting until kubeadm init/join runs; that is expected):

systemctl enable kubelet
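A quick sanity check that matching versions were installed:

kubeadm version -o short    # expect v1.16.3
kubelet --version           # expect Kubernetes v1.16.3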

 

 

 

 

5. Deploy the Primary and Backup Masters

Deploy the primary master (run on the host that currently holds the VIP)

Prepare a working directory:

mkdir /usr/local/kubernetes/manifests/ -p

cd /usr/local/kubernetes/manifests/

Generate the kubeadm configuration file:

cat > kubeadm-config.yaml <<EOF

apiServer:

  certSANs:

    - master73

    - master74

    - master

    - 192.168.27.73

    - 192.168.27.74

    - 192.168.27.70

    - 127.0.0.1

  extraArgs:

    authorization-mode: Node,RBAC

  timeoutForControlPlane: 4m0s

apiVersion: kubeadm.k8s.io/v1beta1

certificatesDir: /etc/kubernetes/pki

clusterName: kubernetes

controlPlaneEndpoint: "master:16443"

controllerManager: {}

dns:

  type: CoreDNS

etcd:

  local:

    dataDir: /var/lib/etcd

imageRepository: registry.aliyuncs.com/google_containers

kind: ClusterConfiguration

kubernetesVersion: v1.16.3

networking:

  dnsDomain: cluster.local

  podSubnet: 10.244.0.0/16

  serviceSubnet: 10.1.0.0/16

scheduler: {}

EOF
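Optionally pre-pull the control-plane images first so the init step does not stall on downloads:

kubeadm config images pull --config kubeadm-config.yaml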

kubeadm init --config kubeadm-config.yaml

Save the output below; the join commands in it are needed to add the remaining nodes to the cluster.

Your Kubernetes control-plane has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

You can now join any number of control-plane nodes by copying certificate authorities

and service account keys on each node and then running the following as root:

 

  kubeadm join master:16443 --token 24q1yw.y8a5fspmfgqafee4 \

    --discovery-token-ca-cert-hash sha256:1efc02c1e36672ed8cb2d9b72d7fb4ff01fd052e61fda3fd609e49133b6f412f \

    --control-plane       

 

Then you can join any number of worker nodes by running the following on each as root:

 

kubeadm join master:16443 --token 24q1yw.y8a5fspmfgqafee4 \

    --discovery-token-ca-cert-hash sha256:1efc02c1e36672ed8cb2d9b72d7fb4ff01fd052e61fda3fd609e49133b6f412f
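Note that the bootstrap token in this output expires after 24 hours. If it has expired before you join a node, generate a fresh join command on the primary master:

kubeadm token create --print-join-command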

Run the commands from the output:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy the flannel network

kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
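flannel's default Pod CIDR is 10.244.0.0/16, which matches the podSubnet set in kubeadm-config.yaml. If raw.githubusercontent.com is unreachable from your network, fetch the manifest elsewhere and apply the local copy:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml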

 

Check the status

kubectl get cs

kubectl get nodes

kubectl get pod -n kube-system
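The master only reports Ready after the flannel and coredns pods are Running; to block until every kube-system pod is ready:

kubectl -n kube-system wait --for=condition=Ready pod --all --timeout=300s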

 

Join the second master to the cluster

Copy the certificate files from the primary master to the other master node (master74):

ssh root@master74 mkdir -p /etc/kubernetes/pki/etcd/

scp /etc/kubernetes/admin.conf root@master74:/etc/kubernetes/

scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master74:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/etcd/ca.* root@master74:/etc/kubernetes/pki/etcd/
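Confirm the files arrived before joining:

ssh root@master74 ls /etc/kubernetes/pki/ /etc/kubernetes/pki/etcd/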

 

On the backup master (master74), run the control-plane join command saved earlier:

kubeadm join master:16443 --token 24q1yw.y8a5fspmfgqafee4 \

    --discovery-token-ca-cert-hash sha256:1efc02c1e36672ed8cb2d9b72d7fb4ff01fd052e61fda3fd609e49133b6f412f \

--control-plane

 

Then run the commands from its output:

    mkdir -p $HOME/.kube

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

Deploy the worker node

On the worker node (node75), run the worker join command saved earlier:

kubeadm join master:16443 --token 24q1yw.y8a5fspmfgqafee4 \

--discovery-token-ca-cert-hash sha256:1efc02c1e36672ed8cb2d9b72d7fb4ff01fd052e61fda3fd609e49133b6f412f
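Back on the primary master, the new worker should appear within a minute or so:

kubectl get nodes -o wide    # node75 shows NotReady until its flannel pod starts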

 

Re-apply the flannel network (run on the primary master):

kubectl apply -f kube-flannel.yml

Check the cluster status
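All three nodes should now show Ready, with every kube-system pod Running:

kubectl get nodes

kubectl get pod -n kube-system -o wide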

 

6. Deploy Applications

(omitted)
