K8S: Building a Highly Available Cluster with Ansible (kubernetes_v1.22.2)

  • Due to limited resources, only three VMs are used; for the preliminary preparation work, see my earlier posts.

1. Basic system configuration

  • 2 CPU / 4 GB RAM / 40 GB disk (this spec is for testing only)
  • Minimal install of Ubuntu 16.04 Server or CentOS 7 Minimal
  • Configure basic networking, package sources, SSH login, etc.

2. Install dependency tools on every node

  • 2.1 On Ubuntu 16.04, run the following:
apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y
# install Python 2
apt-get install python2.7
# Ubuntu 16.04 may need the following symlink
ln -s /usr/bin/python2.7 /usr/bin/python
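A quick check that the symlink works:
python --version   # should report Python 2.7.x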

3. Install and prepare ansible on the deploy node

  • 3.1 Install ansible
# Note: pip 21.0 dropped support for Python 2 and Python 3.5, so install pip as follows
# To install pip for Python 2.7 install it from https://bootstrap.pypa.io/2.7/ :
curl -O https://bootstrap.pypa.io/pip/2.7/get-pip.py
python get-pip.py
python -m pip install --upgrade "pip < 21.0"

# Install ansible with pip (inside China, use the Aliyun PyPI mirror if downloads are slow)
pip install ansible -i https://mirrors.aliyun.com/pypi/simple/
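Confirm the installation:
ansible --version   # should report the version installed via pip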
  • 3.2 Configure passwordless SSH login on the ansible control node
# Ed25519, the more secure algorithm
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
# or the traditional RSA algorithm
ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa

ssh-copy-id $IPs # $IPs is every node address, including this host; answer yes and enter the root password when prompted

# Create a python symlink on each node
ssh $IPs ln -s /usr/bin/python3 /usr/bin/python
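Since ssh-copy-id and ssh take one host per invocation, in practice you loop over the node list; a minimal sketch, assuming the three node IPs used later in this post:

for ip in 192.168.117.130 192.168.117.131 192.168.117.132; do
  ssh-copy-id "root@${ip}"                                    # answer yes and enter the root password once per node
  ssh "root@${ip}" "ln -s /usr/bin/python3 /usr/bin/python"
done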

4. Orchestrate the k8s installation on the deploy node

  • 4.1 Download the project source, binaries, and offline images
# Download the ezdown helper script; this example uses kubeasz version 3.1.1
export release=3.1.1
wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod +x ./ezdown
# Use the script to download everything
./ezdown -D

Once the script finishes successfully, all files (kubeasz code, binaries, offline images) are in place under /etc/kubeasz.
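A quick sanity check that everything landed where expected (the offline images should also be loaded into the local docker):
ls /etc/kubeasz
docker images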



I) Prepare the environment

No.   Role                      IP                Notes
1     keepalived+haproxy+vip    192.168.117.132
2     master-etcd-node-2        192.168.117.131
3     master-etcd-node-1        192.168.117.130
  • Three VMs: .130 serves as master, etcd, and node; .131 is set up the same as .130; .132 runs keepalived + haproxy and holds the VIP
  • Configure the package mirror on all three VMs
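Before continuing, it is worth confirming the three VMs can reach each other; a quick check from any one of them:

for ip in 192.168.117.130 192.168.117.131 192.168.117.132; do
  ping -c 1 -W 1 "$ip" >/dev/null && echo "$ip reachable"
done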

Update the package mirror

  • 1. Search the web for the Alibaba open-source mirror site; since my system is Ubuntu, use the Ubuntu mirror.
  • 2. Find the Ubuntu mirror entry.
  • 3. Copy the latest mirror configuration into the config file at /etc/apt/sources.list.
The configuration for Ubuntu 20.04 (focal) is as follows:
deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
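After saving sources.list, refresh the package index so the new mirror takes effect:
apt-get update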



II) Deploy keepalived + haproxy + VIP on 192.168.117.132

  • 1. Install keepalived and haproxy with apt
root@superops:~# apt install keepalived haproxy   # install the packages
root@superops:~# find / -name "keepalived*"       # locate the sample config file
Copy the file found at /usr/share/doc/keepalived/samples/keepalived.conf.vrrp to /etc/keepalived/ and name it keepalived.conf:
root@superops:~# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf 
  • 2. Edit the config file /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32                         # change to your NIC name
    garp_master_delay 10
    smtp_alert
    virtual_router_id 60
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.117.188 dev ens32 label ens32:0    # create the VIP
    }
}
  • 3. Restart the service, check its status, and enable it at boot
root@superops:~# systemctl restart keepalived.service 
root@superops:~# systemctl status keepalived.service 
root@superops:~# systemctl enable keepalived.service
Synchronizing state of keepalived.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable keepalived
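If keepalived is healthy, the VIP should now be bound to the interface; verify with:
root@superops:~# ip addr show ens32   # expect 192.168.117.188 with label ens32:0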
  • 4. Edit the haproxy config file /etc/haproxy/haproxy.cfg
check: health check
inter: interval between checks
fall:  failures before a server is marked down
rise:  successes before a server is marked up again
# Append the following at the bottom of the file
listen k8s-6443
  bind 192.168.117.188:6443   # listen on port 6443 of the VIP
  mode tcp                    # TCP mode
  server 192.168.117.130 192.168.117.130:6443 check inter 2s fall 3 rise 3   # add both master IPs with health checks: 2s interval, down after 3 failures, up after 3 successes
  server 192.168.117.131 192.168.117.131:6443 check inter 2s fall 3 rise 3
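Before restarting haproxy, the edited file can be validated (it should print "Configuration file is valid"):
root@superops:~# haproxy -c -f /etc/haproxy/haproxy.cfg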
  • 5. Start the service and check its status and listening ports
root@superops:~# systemctl restart haproxy.service 
root@superops:~# systemctl status haproxy.service 
● haproxy.service - HAProxy Load Balancer
     Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-01-10 23:29:11 CST; 5s ago
       Docs: man:haproxy(1)
             file:/usr/share/doc/haproxy/configuration.txt.gz
    Process: 23830 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS)
   Main PID: 23843 (haproxy)
      Tasks: 3 (limit: 2240)
     Memory: 2.7M
     CGroup: /system.slice/haproxy.service
             ├─23843 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock
             └─23844 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock

Jan 10 23:29:11 superops systemd[1]: Starting HAProxy Load Balancer...
Jan 10 23:29:11 superops haproxy[23843]: [WARNING] 009/232911 (23843) : parsing [/etc/haproxy/haproxy.cfg:23] : 'option httplog' not usable with proxy 'k8s-6443' (needs 'mode http'). Fall>
Jan 10 23:29:11 superops haproxy[23843]: Proxy k8s-6443 started.
Jan 10 23:29:11 superops haproxy[23843]: Proxy k8s-6443 started.
Jan 10 23:29:11 superops haproxy[23843]: [NOTICE] 009/232911 (23843) : New worker #1 (23844) forked
Jan 10 23:29:11 superops systemd[1]: Started HAProxy Load Balancer.
Jan 10 23:29:11 superops haproxy[23844]: [WARNING] 009/232911 (23844) : Server k8s-6443/192.168.117.130 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check durat>
Jan 10 23:29:12 superops haproxy[23844]: [WARNING] 009/232912 (23844) : Server k8s-6443/192.168.117.131 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check durat>
Jan 10 23:29:12 superops haproxy[23844]: [ALERT] 009/232912 (23844) : proxy 'k8s-6443' has no server available!
root@superops:~# ss -lnt
State                 Recv-Q                Send-Q                                 Local Address:Port                               Peer Address:Port                Process                
LISTEN                0                     491                                  192.168.117.188:6443                                    0.0.0.0:*                                          
LISTEN                0                     4096                                   127.0.0.53%lo:53                                      0.0.0.0:*                                          
LISTEN                0                     128                                        127.0.0.1:8118                                    0.0.0.0:*                                          
LISTEN                0                     128                                          0.0.0.0:22                                      0.0.0.0:*                                          
LISTEN                0                     128                                        127.0.0.1:6010                                    0.0.0.0:*                                          
LISTEN                0                     16384                                      127.0.0.1:1514                                    0.0.0.0:*                                          
LISTEN                0                     128                                            [::1]:8118                                       [::]:*                                          
LISTEN                0                     128                                             [::]:22                                         [::]:*                                          
LISTEN                0                     128                                            [::1]:6010                                       [::]:*           

If the service fails to start, a kernel parameter is likely missing: haproxy binds to the VIP, and the kernel refuses to bind an address the host does not currently hold unless non-local binds are allowed. Steps (commands shown after this list):

1. Run sysctl -a | grep local
2. Find net.ipv4.ip_nonlocal_bind = 0
3. Edit /etc/sysctl.conf with vim and add that entry, changing the 0 to 1
4. Run sysctl -p to apply the change, then restart the service
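The same fix as commands (a sketch; skip the echo if the entry already exists in /etc/sysctl.conf):

root@superops:~# sysctl -a | grep nonlocal          # shows net.ipv4.ip_nonlocal_bind = 0
root@superops:~# echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
root@superops:~# sysctl -p                          # apply the change
root@superops:~# systemctl restart haproxy.service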

III) Deploy services with ansible on 192.168.117.130

  • Install ansible
root@superops:~# apt install ansible
  • Configure passwordless key authentication
root@superops:~# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:azBtE/ipRTeejzNF3k8LFsSYsHUBdDM4TxXJPot+Gsk root@superops
The key's randomart image is:
+---[RSA 3072]----+
|         .o+*B+oo|
|       .  o=+ooo |
|      . o.o =..  |
|       + = = o.o |
|      o S o ooo +|
|       * o +o.o+.|
|      . o + oE ..|
|       .   o ... |
|             .o  |
+----[SHA256]-----+
root@superops:~# ssh-copy-id 192.168.117.130
root@192.168.117.130's password: 

root@superops:~# ssh-copy-id 192.168.117.131
root@192.168.117.131's password: 

root@superops:~# ssh-copy-id 192.168.117.132
root@192.168.117.132's password: 
# Test the connections with ssh
ssh 192.168.117.130  # connection succeeded
ssh 192.168.117.131  # connection succeeded
ssh 192.168.117.132  # connection succeeded
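The same checks can be run in one non-interactive loop:
root@superops:~# for ip in 192.168.117.130 192.168.117.131 192.168.117.132; do ssh "root@${ip}" hostname; done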

IV) Orchestrate the K8S installation

  • Download the installer script ezdown (this uses kubeasz version 3.1.1); the downloaded packages land in /etc/kubeasz
root@superops:~# export release=3.1.1   # set the environment variable
root@superops:~# wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown  # download the ezdown script
root@superops:~# chmod +x ezdown   # make it executable
root@superops:~# ./ezdown -D    # -D downloads all packages
  • After the download completes, create the cluster
root@superops:~# cd /etc/kubeasz/   # change into /etc/kubeasz/
root@superops:/etc/kubeasz# ./ezctl new k8s-01  
2022-01-11 00:29:25 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-01
2022-01-11 00:29:25 DEBUG set version of common plugins
2022-01-11 00:29:25 DEBUG cluster k8s-01: files successfully created.
2022-01-11 00:29:25 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-01/hosts'
2022-01-11 00:29:25 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-01/config.yml'
root@superops:/etc/kubeasz# cd clusters/k8s-01/  # change directory and edit the hosts and config.yml files
root@superops:/etc/kubeasz/clusters/k8s-01# ll
total 20
drwxr-xr-x 2 root root 4096 Jan 11 00:29 ./
drwxr-xr-x 3 root root 4096 Jan 11 00:29 ../
-rw-r--r-- 1 root root 6692 Jan 11 00:29 config.yml
-rw-r--r-- 1 root root 1685 Jan 11 00:29 hosts

root@superops:/etc/kubeasz/clusters/k8s-01# vim hosts   # edit the hosts file

# 'etcd' cluster should have odd member(s) (1,3,5,...)
# add the etcd host IPs
[etcd]
192.168.117.130
192.168.117.131

# master node(s)
# add the master host IPs
[kube_master]
192.168.117.130
192.168.117.131

# work node(s)
# add the node host IPs
[kube_node]
192.168.117.130
192.168.117.131

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#192.168.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]  # enable this group; it is used when pushing the install to the external LB: just add the load balancer's IP
192.168.117.132 LB_ROLE=master EX_APISERVER_VIP=192.168.117.188 EX_APISERVER_PORT=6443

# [optional] ntp server for the cluster
[chrony]
#192.168.1.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"

# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"   # network plugin set to calico

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"   # Service network range

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"   # Pod network range

# NodePort Range
NODE_PORT_RANGE="30000-40000"  # NodePort range widened to 30000-40000

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/bin"   # path for the executables

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"    # kubeasz installation path

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-01"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

root@superops:/etc/kubeasz/clusters/k8s-01# vim config.yml  # edit config.yml

############################
# prepare
############################
# optional offline install of system packages (offline|online)
INSTALL_SOURCE: "online"

# optional OS security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false

# NTP servers [important: clocks across the cluster must stay in sync]
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"

# network segments allowed to use the internal time sync, e.g. "10.0.0.0/8"; all allowed by default
local_network: "0.0.0.0/0"


############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"


############################
# role:etcd
############################
# a separate wal directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable registry mirrors
ENABLE_MIRROR_REGISTRY: true

# [containerd] base (pause) container image
SANDBOX_IMAGE: "easzlab/pause-amd64:3.5"

# [containerd] persistent container storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker] enable the remote REST API
ENABLE_REMOTE_API: false

# [docker] trusted insecure (HTTP) registries
INSECURE_REG: '["127.0.0.1/8"]'


############################
# role:kube-master
############################
# master node certificate configuration; extra IPs and domains can be added (e.g. a public IP and domain)
MASTER_CERT_HOSTS:
  - "10.1.1.1"
  - "k8s.test.io"
  #- "www.test.com"

# pod subnet mask length per node (determines the maximum pod IPs each node can allocate)
# if flannel runs with --kube-subnet-mgr, it reads this setting to assign each node's pod subnet
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# maximum pods per node
MAX_PODS: 400

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"

# upstream k8s advises against enabling system-reserved casually, unless you understand the system's resource usage from long-term monitoring;
# the reservation also needs to grow as the system's uptime increases; see templates/kubelet-config.yaml.j2 for the values
# the system reservation assumes a 4c/8g VM with a minimal set of system services; increase it on high-spec physical machines
# also, apiserver and friends briefly spike in resource usage during cluster install, so reserve at least 1 GB of memory
SYS_RESERVED_ENABLED: "no"

# haproxy balance mode
BALANCE_ALG: "roundrobin"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.13.0-amd64"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"

# [flannel] offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"

# ------------------------------------------- calico
# [calico] setting CALICO_IPV4POOL_IPIP: "off" improves network performance; see docs/setup/calico.md for the constraints
CALICO_IPV4POOL_IPIP: "Always"

# [calico] host IP used by calico-node; BGP peering is established over this address; set it manually or use auto-detection
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico] network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico] supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.19.2"

# [calico] calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# [calico] offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"

# ------------------------------------------- cilium
# [cilium] number of etcd cluster nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1

# [cilium] image version
cilium_ver: "v1.4.1"

# [cilium] offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"

# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane; defaults to the first master node
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"

# ------------------------------------------- kube-router
# [kube-router] public clouds impose restrictions, so ipinip generally needs to stay on; in a self-hosted environment this can be set to "subnet"
OVERLAY_TYPE: "full"

# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: "true"

# [kube-router] image versions
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"

# [kube-router] offline image tarballs
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"


############################
# role:cluster-addon
############################
# auto-install coredns
dns_install: "yes"
corednsVer: "1.8.4"
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.17.0"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# auto-install metrics server
metricsserver_install: "no"
metricsVer: "v0.5.0"

# auto-install dashboard
dashboard_install: "no"
dashboardVer: "v2.3.1"
dashboardMetricsScraperVer: "v1.0.6"

# auto-install ingress
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "9.12.3"

# auto-install prometheus
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"

# auto-install nfs-provisioner
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.1"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

############################
# role:harbor
############################
# harbor version (full version string)
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true

Then, following the prompts, configure '/etc/kubeasz/clusters/k8s-01/hosts' and '/etc/kubeasz/clusters/k8s-01/config.yml': edit the hosts file and the main cluster-level options according to the node plan above; other cluster component settings can be changed in config.yml.

  • After the edits are done, return to the main directory /etc/kubeasz. The installation is split into the following steps:
00-Cluster planning and configuration introduction
01-Create certificates and installation prep
02-Install the etcd cluster
03-Install the container runtime
04-Install the master nodes
05-Install the node(s)
06-Install the cluster network
07-Install the cluster add-ons
  • 01: environment initialization, certificate issuing, system-level configuration, kernel tuning, etc.
  • 02: deploy etcd
  • 03: configure the container runtime
  • 04: deploy the masters
  • 05: deploy the nodes
  • 06: deploy the network plugin

Because the LB was installed manually, the corresponding groups must be removed from the system-initialization playbook:

root@superops:/etc/kubeasz# vim playbooks/01.prepare.yml 
- hosts:
  - kube_master
  - kube_node
  - etcd
  - ex_lb   # delete this line
  - chrony  # delete this line
  roles:
  - { role: os-harden, when: "OS_HARDEN|bool" }
  - { role: chrony, when: "groups['chrony']|length > 0" }

# to create CA, kubeconfig, kube-proxy.kubeconfig etc.
- hosts: localhost
  roles:
  - deploy

# prepare tasks for all nodes
- hosts:
  - kube_master
  - kube_node
  - etcd
  roles:
  - prepare
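With the playbook trimmed, run the steps with ezctl, either one at a time or all at once (a sketch; see ./ezctl help setup for the step list):

root@superops:/etc/kubeasz# ./ezctl setup k8s-01 01   # step 01: environment prep, certificates
root@superops:/etc/kubeasz# ./ezctl setup k8s-01 02   # step 02: etcd cluster
# ...continue through the remaining steps, or run everything in one go:
root@superops:/etc/kubeasz# ./ezctl setup k8s-01 all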