Installing a Highly Available Kubernetes v1.19.0 Cluster with kubeadm (Advanced Edition 2)

Software Versions


  1. Operating system: CentOS 7
  2. Container engine: Docker CE 19.03
  3. Kubernetes: v1.19.0

Node Planning

| IP Address      | FQDN                   | Hostname     | Role   |
|-----------------|------------------------|--------------|--------|
| 192.168.143.201 | k8s-master01.ilinux.io | k8s-master01 | master |
| 192.168.143.211 | k8s-node01.ilinux.io   | k8s-node01   | node   |
| 192.168.143.212 | k8s-node02.ilinux.io   | k8s-node02   | node   |
| 192.168.143.213 | k8s-node03.ilinux.io   | k8s-node03   | node   |

Initial Environment Configuration


Note: perform these environment preparation steps on all four hosts: k8s-master01, k8s-node01, k8s-node02, and k8s-node03.

Configure /etc/hosts

cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.143.201 k8s-master01.ilinux.io k8s-master01 k8s-api.ilinux.io
192.168.143.211 k8s-node01.ilinux.io k8s-node01
192.168.143.212 k8s-node02.ilinux.io k8s-node02
192.168.143.213 k8s-node03.ilinux.io k8s-node03

Clock Synchronization

Note: verify afterwards that the clocks are in sync with `date`.

systemctl start chronyd;systemctl enable chronyd
chronyc sources       # check the connection status of the time sources
chronyc tracking      # check sync status and the current offset
chronyc -a makestep   # step the clock immediately

Disable the Firewall

systemctl stop firewalld;systemctl disable firewalld

Disable Swap

swapoff -a       # turn swap off for the current boot
vim /etc/fstab   # comment out the swap entry so it stays off after reboot
#UUID=f66b8109-4699-4628-880f-942e1951ce38 swap                    swap    defaults        0 0

Disable SELinux

vim /etc/selinux/config
SELINUX=disabled

getenforce
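
Editing /etc/selinux/config only takes effect after a reboot. A common companion step (not shown in the original notes) is to stop enforcement immediately for the current boot:

```shell
setenforce 0   # switch to permissive mode now; getenforce then reports Permissive until reboot
```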

Configure NAT Forwarding

echo "modprobe br_netfilter" >> /etc/profile

cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

sysctl -p /etc/sysctl.d/k8s.conf # apply the parameters
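
To verify that the module is loaded and the parameters are active:

```shell
lsmod | grep br_netfilter                   # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables   # should print 1
sysctl net.ipv4.ip_forward                  # should print 1
```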

Passwordless SSH Keys

ssh-keygen

ssh-copy-id 192.168.143.201
ssh-copy-id 192.168.143.211
ssh-copy-id 192.168.143.212
ssh-copy-id 192.168.143.213

YUM Repository Configuration

Aliyun yum mirrors for CentOS 7:

http://mirrors.aliyun.com/repo/epel-7.repo

http://mirrors.aliyun.com/repo/Centos-7.repo
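
One common way to install these repo files (a sketch; the filenames under /etc/yum.repos.d/ are a matter of convention):

```shell
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all && yum makecache   # rebuild the cache against the new mirrors
```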

Deploy Docker


Note: perform the Docker deployment steps on all four hosts: k8s-master01, k8s-node01, k8s-node02, and k8s-node03.

Remove old Docker versions

yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

Install Docker dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

Configure the Docker repository

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

List available Docker versions

yum list docker-ce --showduplicates | sort -r

Install Docker

yum install docker-ce-19.03.15 docker-ce-cli-19.03.15 containerd.io
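
The steps above do not show starting Docker, but it must be running before the cluster is initialized. The standard systemd commands:

```shell
systemctl enable docker && systemctl start docker
docker version   # confirm both client and server report 19.03.15
```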

Configure the Kubernetes Master and Nodes

Note:

  1. On k8s-master01, install kubelet-1.19.0, kubeadm-1.19.0, and kubectl-1.19.0.

  2. On k8s-node01, k8s-node02, and k8s-node03, install kubelet-1.19.0 and kubeadm-1.19.0 (kubeadm is required for the join step below; kubectl is optional on worker nodes).

Install the Initialization Tools

kubelet: runs on every node in the cluster; starts Pods, containers, and other objects
kubeadm: initializes and bootstraps the cluster
kubectl: the command line used to talk to the cluster; with kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components
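
The yum install below assumes a Kubernetes package repository has already been configured; no repo file is shown in the original notes. A commonly used Aliyun mirror definition (an assumption; adjust the baseurl as needed) looks like this:

```shell
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
```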

yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0

systemctl enable kubelet && systemctl start kubelet   # kubelet will restart in a loop until kubeadm init/join writes its config; this is expected

Cluster Initialization

Initialization command

kubeadm init --kubernetes-version=1.19.0 --apiserver-advertise-address=192.168.143.201 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "k8s-api.ilinux.io" --token-ttl 0

Parameter explanations

--kubernetes-version=1.19.0  # Kubernetes version

--apiserver-advertise-address=192.168.143.201  # address the API server advertises on

--image-repository registry.aliyuncs.com/google_containers  # image registry mirror for the Kubernetes components

--service-cidr=10.96.0.0/12  # CIDR for Service addresses

--pod-network-cidr=10.244.0.0/16  # CIDR for Pod addresses; must match the Network value in the flannel manifest (10.244.0.0/16)

--control-plane-endpoint "k8s-api.ilinux.io"  # shared endpoint for HA; additional control-plane nodes can be added and reached through this name

--token-ttl 0  # bootstrap token TTL; the default is 24 hours, 0 means it never expires
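
The preflight output below notes that the required images can be pulled in advance with `kubeadm config images pull`; with the same mirror, that would be:

```shell
kubeadm config images pull --kubernetes-version=1.19.0 --image-repository registry.aliyuncs.com/google_containers
```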

Initialization output

Keep the full output of the initialization below. Run this only on the k8s-master01 node.


[root@k8s-master ~]#  kubeadm init --kubernetes-version=1.19.0 --apiserver-advertise-address=192.168.143.201 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "k8s-api.ilinux.io" --token-ttl 0
W0506 06:25:51.292136    2115 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-api.ilinux.io k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.143.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.143.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.143.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.503757 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5ppzww.4lv5mi6q59gu46mo
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-api.ilinux.io:6443 --token 5ppzww.4lv5mi6q59gu46mo     --discovery-token-ca-cert-hash sha256:b5054b0ddbbc0f048f569fc732ce23ae021cf21bb1cee38c7a2dddd5a0f73a43     --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-api.ilinux.io:6443 --token 5ppzww.4lv5mi6q59gu46mo     --discovery-token-ca-cert-hash sha256:b5054b0ddbbc0f048f569fc732ce23ae021cf21bb1cee38c7a2dddd5a0f73a43 

Post-initialization follow-up steps

  1. Set up the kubeconfig used to access the cluster
  2. Deploy a network plugin to the cluster
  3. API HA: add more control-plane nodes (not performed here)
  4. Join worker nodes to the cluster

Kubeconfig used to access the cluster

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy a network plugin to the cluster

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

API HA: add control-plane nodes (not performed here)

kubeadm join k8s-api.ilinux.io:6443 --token 5ppzww.4lv5mi6q59gu46mo --discovery-token-ca-cert-hash sha256:b5054b0ddbbc0f048f569fc732ce23ae021cf21bb1cee38c7a2dddd5a0f73a43 --control-plane 

Join worker nodes to the cluster

kubeadm join k8s-api.ilinux.io:6443 --token 5ppzww.4lv5mi6q59gu46mo --discovery-token-ca-cert-hash sha256:b5054b0ddbbc0f048f569fc732ce23ae021cf21bb1cee38c7a2dddd5a0f73a43 

Set up the kubeconfig

Run only on k8s-master01:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
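
A quick sanity check that the kubeconfig works:

```shell
kubectl cluster-info   # should print the control-plane and CoreDNS endpoints
```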

Deploy the Network Plugin

Run on k8s-master01.

There is no need to run this on k8s-node01, k8s-node02, or k8s-node03.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

The content of kube-flannel.yml is as follows:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
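
Before checking node status, it helps to confirm that the flannel DaemonSet is up (the namespace, DaemonSet name, and label come from the manifest above):

```shell
kubectl -n kube-system get daemonset kube-flannel-ds
kubectl -n kube-system get pods -l app=flannel -o wide
```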

Get node information

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   22h   v1.19.0

Join the Worker Nodes

Run the node join command on each of k8s-node01, k8s-node02, and k8s-node03.

Node join command

[root@k8s-node01 ~]# kubeadm join k8s-api.ilinux.io:6443 --token 5ppzww.4lv5mi6q59gu46mo --discovery-token-ca-cert-hash sha256:b5054b0ddbbc0f048f569fc732ce23ae021cf21bb1cee38c7a2dddd5a0f73a43
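
If the token or CA certificate hash is ever lost, a fresh join command can be printed on the master:

```shell
kubeadm token create --print-join-command
```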

Check the join status

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   22h   v1.19.0
k8s-node01     Ready    <none>   22h   v1.19.0
k8s-node02     Ready    <none>   22h   v1.19.0
k8s-node03     Ready    <none>   22h   v1.19.0

Troubleshooting

  • scheduler and controller-manager report Unhealthy

[root@k8s-master01 ~]# kubectl  get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"}
  • Check whether the kube-scheduler and kube-controller-manager static Pod manifests disable the insecure port:
vim /etc/kubernetes/manifests/kube-scheduler.yaml
vim /etc/kubernetes/manifests/kube-controller-manager.yaml

# comment out the `- --port=0` line in each file
  • Restart the kubelet service so the static Pods are recreated:
systemctl restart kubelet
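
Once the static Pods have been recreated, re-check the component status:

```shell
kubectl get cs   # scheduler and controller-manager should now report Healthy
```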
