What is Kubernetes?
Kubernetes is a portable and extensible open-source platform for managing containerized applications and services. It automates the deployment and scaling of applications, and groups the containers that make up an application into logical units so that they are easier to manage and discover.
Node roles in a Kubernetes cluster:
Master node:
1. The control node of the cluster: it schedules and manages the cluster and accepts operation requests sent to the cluster from outside.
2. A master node is made up of the API Server, the Scheduler, the cluster state store (the etcd database) and the Controller Manager.
Worker node:
1. A worker node runs the user's application containers.
2. A worker node runs the kubelet, kube-proxy and a container runtime.
Functions of the Kubernetes cluster components:
Master components
1. API Server:
The single external entry point of the cluster, exposing an HTTP/HTTPS RESTful API. All requests go through this interface; it receives, validates and responds to every REST request, the resulting state is persisted in etcd, and it is the only entry point for creating, reading, updating and deleting any resource.
2. etcd:
Stores the cluster's configuration and the state of every resource. When data changes, etcd quickly notifies the relevant Kubernetes components. etcd is an independent service component rather than a part of the Kubernetes cluster itself; in production it should run as a cluster to guarantee availability.
3. Controller Manager:
Manages the cluster's resources and keeps them in their desired state. The Controller Manager consists of multiple controllers, including the replication controller, endpoints controller, namespace controller and serviceaccounts controller. The controllers mainly provide lifecycle management and the API business logic.
4. Scheduler:
Handles resource scheduling and decides which Node each Pod runs on. When scheduling, the Scheduler takes into account the cluster topology, the current load of each node, and the application's requirements for high availability, performance and so on.
Node components
1. kubelet
The kubelet is the agent on each node. Once the Scheduler has decided to run a Pod on a particular Node, the Pod's concrete configuration (image, volumes and so on) is sent, via the API Server, to that node's kubelet, which creates and runs the containers and reports their status back to the master.
2. Container Runtime
Every Node needs a container runtime environment, which is responsible for pulling images and running the containers.
3. kube-proxy:
A Service logically represents a group of backend Pods, and external traffic reaches the Pods through the Service. kube-proxy is what actually forwards the requests received by a Service to the Pods: every Node runs kube-proxy, which forwards TCP/UDP traffic addressed to a Service to the backend containers.
What is a Pod?
Kubernetes does not run containers directly: they are wrapped in an abstract resource object called a Pod, which is the smallest schedulable unit in Kubernetes. A Pod can wrap one or more containers. Containers in the same Pod share the network namespace and storage resources, so they can talk to each other directly over the loopback interface, while staying isolated from one another in the Mount, User and PID namespaces.
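For illustration only, a minimal Pod manifest with two containers sharing the Pod's network namespace might look like the following (the name and images are hypothetical and not part of the deployment below):
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                # hypothetical name, for illustration only
spec:
  containers:
  - name: web
    image: nginx:alpine         # listens on port 80 inside the shared network namespace
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox:1.28
    # the sidecar reaches the web container over the loopback interface, because both share the Pod's network namespace
    command: ["sh", "-c", "while true; do wget -qO- http://127.0.0.1:80 >/dev/null; sleep 10; done"]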
How a Pod is created and scheduled
1. The user creates a Pod from a YAML manifest; the request goes to the apiserver, which writes the attributes from the YAML into etcd.
2. The apiserver's watch mechanism kicks off Pod creation: the information is handed to the scheduler, which uses its scheduling algorithm to pick a node and reports the chosen node back to the apiserver, which writes the binding into etcd.
3. Through the watch mechanism again, the kubelet on the chosen node picks up the Pod specification and triggers the container runtime (a docker run, in this setup) to create the containers. Once they are created, the kubelet reports the Pod's status to the apiserver, which writes it into etcd.
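One way to observe this flow on a working cluster (illustrative only; the pod name is made up) is to create a throwaway Pod and watch its events, where the scheduler binding and the kubelet's image pull and container start all show up:
kubectl run net-demo --image=alpine:latest -- sleep 3600                      # create a throwaway pod
kubectl get pod net-demo -o wide                                              # shows the node picked by the scheduler
kubectl get events --sort-by=.metadata.creationTimestamp | grep net-demo      # Scheduled / Pulling / Created / Started events
kubectl delete pod net-demo                                                   # clean up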
Cluster deployment:
Environment:
OS: CentOS 7.6 1810 (Core)
Kubernetes version: 1.21.x
Docker version: 19.03.15
The virtual servers are planned as follows:
Notes:
1. Configure hosts entries on every node according to the plan. The deployment tool is installed on master01, so run ssh-keygen on master01 and ssh-copy-id the key to all the other nodes (a sketch follows this list).
2. Set up chrony time synchronization on all nodes.
3. Install docker on every master and node.
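A minimal sketch of the key distribution from note 1, run on master01 (the IP list follows the host plan used later in this post; adjust it to your own environment):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in 172.16.1.190 172.16.1.191 172.16.1.192 172.16.1.193 172.16.1.194 172.16.1.195 172.16.1.196; do
  ssh-copy-id root@${ip}        # push the public key to every master, node and etcd host
done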
1. Deploy Harbor (HTTPS is used in this walkthrough)
# Extract the installer
[root@k8s-harbor tools]# tar xzvf harbor-offline-installer-v2.3.2.tgz
[root@k8s-harbor ~]# mkdir -p /key/harbor/certs/
[root@k8s-harbor ~]# cd /key/harbor/certs/
# Generate the key and issue a self-signed certificate
[root@k8s-harbor certs]# openssl genrsa -out harbor-ca.key
[root@k8s-harbor certs]# openssl req -x509 -new -nodes -key harbor-ca.key -subj "/CN=magedu.gfeng.net" -days 7120 -out harbor-ca.crt
[root@k8s-harbor certs]# ls
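Before wiring the certificate into Harbor, it can be sanity-checked with openssl; the subject CN should match the registry domain used in this walkthrough (magedu.gfeng.net):
openssl x509 -in harbor-ca.crt -noout -subject -dates    # check the CN and the validity period
openssl rsa -in harbor-ca.key -check -noout              # verify the private key is intact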
Edit the configuration file:
[root@k8s-harbor tools]# vim harbor/harbor.yml
The relevant settings:
# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /key/harbor/certs/harbor-ca.crt
  private_key: /key/harbor/certs/harbor-ca.key

# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files on dir
#   dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: 123456

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 100
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 900

# The default data volume
data_volume: /data
Install Harbor:
[root@k8s-harbor harbor]# ./install.sh --with-trivy
After the installation finishes, browse to https://172.16.1.174 to test it.
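Besides the browser test, the Harbor containers can also be checked with docker-compose from the directory where Harbor was extracted (install.sh starts the stack with docker-compose under the hood):
cd harbor/ && docker-compose ps    # all components should show an Up (healthy) state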
2. Distribute and verify the certificate on the clients
[root@k8s-master01 ~]# mkdir -p /etc/docker/certs.d/magedu.gfeng.net/
[root@k8s-harbor certs]# scp harbor-ca.crt root@172.16.1.190:/etc/docker/certs.d/magedu.gfeng.net/
[root@k8s-master01 magedu.gfeng.net]# ls
Restart docker and verify
[root@k8s-master01 magedu.gfeng.net]# docker login magedu.gfeng.net
Do the same for master02 and the node hosts. This can also be scripted instead of done by hand; a rough sketch follows.
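A rough sketch of such a distribution script, run from the Harbor host where the certificate was generated (the host list and paths are assumptions based on this lab's plan, and SSH access to the targets is required):
#!/bin/bash
# distribute the self-signed Harbor CA certificate to the remaining docker hosts
CERT=/key/harbor/certs/harbor-ca.crt
for ip in 172.16.1.191 172.16.1.192 172.16.1.193; do
  ssh root@${ip} "mkdir -p /etc/docker/certs.d/magedu.gfeng.net/"
  scp ${CERT} root@${ip}:/etc/docker/certs.d/magedu.gfeng.net/
  ssh root@${ip} "systemctl restart docker"      # reload docker so it trusts the new CA
done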
3. Deploy the haproxy + keepalived high-availability load balancer (already covered in an earlier post, so not repeated here)
# Configure haproxy
[root@lb ~]# vim /etc/haproxy/haproxy.cfg
# Add the following:
frontend main 172.16.1.96:6443
default_backend k8s
backend k8s
balance roundrobin
server server1 172.16.1.190:6443 check
server server2 172.16.1.191:6443 check
# When the configuration is done, restart haproxy
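For example, on the LB node (a minimal sketch; service names assume the standard haproxy package):
systemctl restart haproxy && systemctl enable haproxy
ss -tnlp | grep 6443     # the VIP:6443 listener should be present on whichever node currently holds the keepalived VIP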
Kubernetes deployment:
1. Work on the master01 node
# Install ansible
[root@k8s-master01 ~]# yum install ansible -y
# Download the deployment tool and its components
[root@k8s-master01 ~]# export release=3.1.0
[root@k8s-master01 ~]# curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
[root@k8s-master01 ~]# chmod a+x ezdown
# Adjust the version settings in the download script
[root@k8s-master01 ~]# vim ezdown
# default settings, can be overridden by cmd line options, see usage
DOCKER_VER=19.03.15
KUBEASZ_VER=3.1.0
K8S_BIN_VER=v1.21.0
# Download everything with the tool script
./ezdown -D
After the script finishes successfully, all files (the kubeasz code, binaries and offline images) are placed under /etc/kubeasz.
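A quick check that the download finished (directory layout as used by kubeasz 3.1.0; exact contents may vary):
ls /etc/kubeasz/          # ezctl, playbooks, roles, clusters, down, ...
ls /etc/kubeasz/down/     # offline images and binaries fetched by ./ezdown -D
docker images             # the base images (kubeasz, pause, calico, ...) should be loaded locally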
2. Generate the ansible hosts file
[root@k8s-master01 ~]# cd /etc/kubeasz/
[root@k8s-master01 kubeasz]# ./ezctl new k8s-001
# Edit the generated hosts file
[root@k8s-master01 kubeasz]# vim clusters/k8s-001/hosts
Contents:
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
172.16.1.194
172.16.1.195
172.16.1.196
# master node(s)
[kube_master]
172.16.1.190
172.16.1.191
# work node(s)
[kube_node]
172.16.1.192
172.16.1.193
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#172.16.1.8 NEW_INSTALL=false
# [optional] loadbalance for accessing k8s from outside
[ex_lb]
172.16.1.97 LB_ROLE=backup EX_APISERVER_VIP=172.16.1.96 EX_APISERVER_PORT=8443
172.16.1.98 LB_ROLE=master EX_APISERVER_VIP=172.16.1.96 EX_APISERVER_PORT=8443
# [optional] ntp server for the cluster
[chrony]
#172.16.1.1
[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"
# NodePort Range
NODE_PORT_RANGE="30000-32767"
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="magedu.local"
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
# Edit the generated config.yml
[root@k8s-master01 kubeasz]# vim /etc/kubeasz/clusters/k8s-001/config.yml
Contents:
############################
# prepare
############################
# install system packages from an offline or online source (offline|online)
INSTALL_SOURCE: "online"
# optional OS security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false
# NTP servers (important: time must be kept in sync across the cluster)
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"
# networks allowed to sync time from the internal NTP service, e.g. "10.0.0.0/8"; the default allows all
local_network: "0.0.0.0/0"
############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"
############################
# role:etcd
############################
# a separate wal directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""
############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable container registry mirrors
ENABLE_MIRROR_REGISTRY: true
# [containerd] base (pause) container image
SANDBOX_IMAGE: "easzlab/pause-amd64:3.4.1"
# [containerd] container persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"
# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"
# [docker] enable the docker remote (RESTful) API
ENABLE_REMOTE_API: false
# [docker] trusted insecure (HTTP) registries
INSECURE_REG: '["127.0.0.1/8","172.16.1.174"]'
############################
# role:kube-master
############################
# extra hosts for the k8s master certificates; multiple IPs and domains can be added (e.g. a public IP and domain)
MASTER_CERT_HOSTS:
- "10.1.1.1"
- "k8s.test.io"
#- "www.test.com"
# pod subnet mask length on each node (determines how many pod IPs a node can allocate)
# if flannel runs with --kube-subnet-mgr, it reads this setting to assign a pod subnet to each node
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24
############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"
# maximum number of pods per node
MAX_PODS: 210
# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the actual values
KUBE_RESERVED_ENABLED: "yes"
# upstream k8s does not recommend enabling system-reserved lightly, unless long-term monitoring tells you
# how much the system actually uses; the reservation also needs to grow over time, see templates/kubelet-config.yaml.j2
# the system reservation assumes a 4c/8g VM with a minimal set of system services; increase it on powerful physical machines
# also note that apiserver and friends briefly consume a lot of resources during installation, so reserve at least 1 GB of memory
SYS_RESERVED_ENABLED: "no"
# haproxy balance mode
BALANCE_ALG: "roundrobin"
############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] flannel backend, e.g. "host-gw", "vxlan"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.13.0-amd64"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"
# [flannel] offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"
# ------------------------------------------- calico
# [calico] setting CALICO_IPV4POOL_IPIP="off" can improve network performance; see docs/setup/calico.md for the restrictions
CALICO_IPV4POOL_IPIP: "Always"
# [calico] host IP used by calico-node; BGP peering is established on this address, set manually or auto-detected
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
# [calico] calico network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"
# [calico] supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.15.3"
# [calico] calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
# [calico] offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"
# ------------------------------------------- cilium
# [cilium] number of etcd nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1
# [cilium] image version
cilium_ver: "v1.4.1"
# [cilium] offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"
# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane, defaults to the first master node
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"
# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"
# ------------------------------------------- kube-router
# [kube-router] public clouds have restrictions and usually require ipinip to stay enabled; in your own environment this can be set to "subnet"
OVERLAY_TYPE: "full"
# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: "true"
# [kube-router] kube-router image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"
# [kube-router] kube-router offline image tarball
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"
############################
# role:cluster-addon
############################
# automatic coredns install
dns_install: "no"
corednsVer: "1.8.0"
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.17.0"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"
# automatic metrics server install
metricsserver_install: "no"
metricsVer: "v0.3.6"
# automatic dashboard install
dashboard_install: "no"
dashboardVer: "v2.2.0"
dashboardMetricsScraperVer: "v1.0.6"
# automatic ingress install
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "9.12.3"
# automatic prometheus install
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"
# automatic nfs-provisioner install
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.1"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"
############################
# role:harbor
############################
# harbor version, full version string
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443
# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true
# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true
Note: the automatic add-on installs above are all set to "no" and ENABLE_LOCAL_DNS_CACHE is set to false; coredns and the dashboard are deployed manually in the later steps.
3. Deploy the cluster
First:
[root@k8s-master01 kubeasz]# vim playbooks/01.prepare.yml    # disable the external load-balancer / chrony initialization
# [optional] to synchronize system time of nodes with 'chrony'
- hosts:
  - kube_master
  - kube_node
  - etcd
  - ex_lb
  - chrony
Remove the "- ex_lb" and "- chrony" entries, leaving only kube_master, kube_node and etcd.
Start the cluster installation:
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 01    # step 01: cluster initialization
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 02    # step 02: deploy the etcd cluster
Verify the etcd nodes:
Write a small script with the following content:
[root@k8s-etcd01 server]# vim etcd.sh
#!/bin/sh
export NODE_IPS="172.16.1.194 172.16.1.195 172.16.1.196"
for ip in ${NODE_IPS}; do
ETCDCTL_API=3 /opt/kube/bin/etcdctl \
--endpoints=https://${ip}:2379 \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/kubernetes/ssl/etcd.pem \
--key=/etc/kubernetes/ssl/etcd-key.pem \
endpoint health; done
[root@k8s-etcd01 server]# chmod +x etcd.sh
[root@k8s-etcd01 server]# bash etcd.sh
If every endpoint reports "successfully committed proposal", etcd is healthy; any other output indicates a problem.
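When the cluster is healthy, each endpoint prints a line similar to the following (timings are illustrative):
https://172.16.1.194:2379 is healthy: successfully committed proposal: took = 12.1ms
https://172.16.1.195:2379 is healthy: successfully committed proposal: took = 10.8ms
https://172.16.1.196:2379 is healthy: successfully committed proposal: took = 11.5ms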
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 03    # step 03: deploy the container runtime
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 04    # step 04: deploy the master nodes
# After the masters are deployed, verify:
[root@k8s-master01 kubeasz]# kubectl get node
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 05    # step 05: deploy the worker nodes
# After the nodes are deployed, verify:
[root@k8s-master01 kubeasz]# kubectl get node
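All four nodes should now be registered. kubeasz registers nodes by IP and cordons the masters, so the output looks roughly like this (illustrative; ages and exact role labels will differ):
NAME           STATUS                     ROLES    AGE   VERSION
172.16.1.190   Ready,SchedulingDisabled   master   10m   v1.21.0
172.16.1.191   Ready,SchedulingDisabled   master   10m   v1.21.0
172.16.1.192   Ready                      node     2m    v1.21.0
172.16.1.193   Ready                      node     2m    v1.21.0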
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 06    # step 06: deploy the network plugin
PLAY [kube_master,kube_node] ***************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************
ok: [172.16.1.190]
ok: [172.16.1.191]
ok: [172.16.1.193]
ok: [172.16.1.192]
TASK [calico : 在节点创建相关目录] ******************************************************************************************************************************
ok: [172.16.1.191] => (item=/etc/cni/net.d)
ok: [172.16.1.193] => (item=/etc/cni/net.d)
ok: [172.16.1.192] => (item=/etc/cni/net.d)
ok: [172.16.1.190] => (item=/etc/cni/net.d)
changed: [172.16.1.191] => (item=/etc/calico/ssl)
changed: [172.16.1.192] => (item=/etc/calico/ssl)
changed: [172.16.1.193] => (item=/etc/calico/ssl)
changed: [172.16.1.190] => (item=/etc/calico/ssl)
ok: [172.16.1.191] => (item=/opt/kube/images)
ok: [172.16.1.193] => (item=/opt/kube/images)
ok: [172.16.1.192] => (item=/opt/kube/images)
ok: [172.16.1.190] => (item=/opt/kube/images)
TASK [创建calico 证书请求] ***********************************************************************************************************************************
changed: [172.16.1.190]
ok: [172.16.1.191]
ok: [172.16.1.192]
ok: [172.16.1.193]
TASK [创建 calico证书和私钥] **********************************************************************************************************************************
changed: [172.16.1.191]
changed: [172.16.1.190]
changed: [172.16.1.193]
changed: [172.16.1.192]
TASK [分发calico证书相关] ************************************************************************************************************************************
changed: [172.16.1.191] => (item=ca.pem)
changed: [172.16.1.193] => (item=ca.pem)
changed: [172.16.1.192] => (item=ca.pem)
changed: [172.16.1.190] => (item=ca.pem)
changed: [172.16.1.191] => (item=calico.pem)
changed: [172.16.1.193] => (item=calico.pem)
changed: [172.16.1.192] => (item=calico.pem)
changed: [172.16.1.190] => (item=calico.pem)
changed: [172.16.1.191] => (item=calico-key.pem)
changed: [172.16.1.193] => (item=calico-key.pem)
changed: [172.16.1.192] => (item=calico-key.pem)
changed: [172.16.1.190] => (item=calico-key.pem)
TASK [get calico-etcd-secrets info] ********************************************************************************************************************
changed: [172.16.1.190]
TASK [创建 calico-etcd-secrets] **************************************************************************************************************************
changed: [172.16.1.190]
TASK [检查是否已下载离线calico镜像] *******************************************************************************************************************************
changed: [172.16.1.190]
TASK [calico : 尝试推送离线docker 镜像(若执行失败,可忽略)] *************************************************************************************************************
changed: [172.16.1.191] => (item=pause.tar)
changed: [172.16.1.193] => (item=pause.tar)
changed: [172.16.1.190] => (item=pause.tar)
changed: [172.16.1.192] => (item=pause.tar)
changed: [172.16.1.193] => (item=calico_v3.15.3.tar)
changed: [172.16.1.190] => (item=calico_v3.15.3.tar)
changed: [172.16.1.191] => (item=calico_v3.15.3.tar)
changed: [172.16.1.192] => (item=calico_v3.15.3.tar)
TASK [获取calico离线镜像推送情况] ********************************************************************************************************************************
changed: [172.16.1.191]
changed: [172.16.1.190]
changed: [172.16.1.192]
changed: [172.16.1.193]
TASK [导入 calico的离线镜像(若执行失败,可忽略)] ***********************************************************************************************************************
changed: [172.16.1.190] => (item=pause.tar)
changed: [172.16.1.193] => (item=pause.tar)
changed: [172.16.1.192] => (item=pause.tar)
changed: [172.16.1.191] => (item=pause.tar)
changed: [172.16.1.190] => (item=calico_v3.15.3.tar)
changed: [172.16.1.193] => (item=calico_v3.15.3.tar)
changed: [172.16.1.191] => (item=calico_v3.15.3.tar)
changed: [172.16.1.192] => (item=calico_v3.15.3.tar)
TASK [配置 calico DaemonSet yaml文件] **********************************************************************************************************************
changed: [172.16.1.190]
TASK [运行 calico网络] *************************************************************************************************************************************
changed: [172.16.1.190]
TASK [calico : 删除默认cni配置] ******************************************************************************************************************************
changed: [172.16.1.190]
changed: [172.16.1.191]
changed: [172.16.1.192]
changed: [172.16.1.193]
TASK [下载calicoctl 客户端] *********************************************************************************************************************************
changed: [172.16.1.193] => (item=calicoctl)
changed: [172.16.1.192] => (item=calicoctl)
changed: [172.16.1.191] => (item=calicoctl)
changed: [172.16.1.190] => (item=calicoctl)
TASK [准备 calicoctl配置文件] ********************************************************************************************************************************
changed: [172.16.1.192]
changed: [172.16.1.193]
changed: [172.16.1.191]
changed: [172.16.1.190]
TASK [轮询等待calico-node 运行,视下载镜像速度而定] ********************************************************************************************************************
changed: [172.16.1.190]
changed: [172.16.1.193]
changed: [172.16.1.192]
changed: [172.16.1.191]
PLAY RECAP *********************************************************************************************************************************************
172.16.1.190 : ok=17 changed=16 unreachable=0 failed=0 skipped=51 rescued=0 ignored=0
172.16.1.191 : ok=12 changed=10 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0
172.16.1.192 : ok=12 changed=10 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0
172.16.1.193 : ok=12 changed=10 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0
# Verify calico
[root@k8s-master01 kubeasz]# calicoctl node status
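On a healthy node the output looks like this (illustrative; the peer list depends on which node you run it on):
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 172.16.1.191 | node-to-node mesh | up    | 08:00:00 | Established |
| 172.16.1.192 | node-to-node mesh | up    | 08:00:01 | Established |
| 172.16.1.193 | node-to-node mesh | up    | 08:00:01 | Established |
+--------------+-------------------+-------+----------+-------------+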
4. Create containers to test network connectivity
[root@k8s-master01 kubeasz]# docker pull alpine
[root@k8s-master01 kubeasz]# docker tag alpine magedu.gfeng.net/magedu/alpine
[root@k8s-master01 kubeasz]# docker push magedu.gfeng.net/magedu/alpine
# Create test pods to check pod-to-pod and outbound connectivity
[root@k8s-master01 kubeasz]# kubectl run net-test1 --image=magedu.gfeng.net/magedu/alpine:latest sleep 30000
[root@k8s-master01 kubeasz]# kubectl run net-test2 --image=magedu.gfeng.net/magedu/alpine:latest sleep 30000
[root@k8s-master01 kubeasz]# kubectl get pod -A -o wide
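To confirm pod-to-pod traffic across nodes and outbound connectivity, exec into one test pod and ping the other pod's IP (taken from the kubectl get pod -A -o wide output above) as well as an external address; a hedged example (replace the placeholder IP):
kubectl exec -it net-test1 -- ping -c 3 <net-test2-pod-ip>   # cross-node pod-to-pod traffic via calico
kubectl exec -it net-test1 -- ping -c 3 223.6.6.6            # outbound traffic; use an IP, since DNS is not deployed yet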
5. Deploy coredns
Upload kubernetes.tar.gz to the master01 node and extract it
[root@k8s-master01 kubeasz]# cd /server/kubernetes/cluster/addons/dns/coredns
[root@k8s-master01 kubeasz]# ls
[root@k8s-master01 kubeasz]# cp coredns.yaml.base /root/coredns-n56.yaml
[root@k8s-master01 kubeasz]# cd ~
# First pull coredns (version 1.8.0)
[root@k8s-master01 ~]# docker pull coredns/coredns:1.8.0
[root@k8s-master01 ~]# docker tag coredns/coredns:1.8.0 magedu.gfeng.net/magedu/coredns:1.8.0
[root@k8s-master01 ~]# docker push magedu.gfeng.net/magedu/coredns:1.8.0
# Edit the configuration file
[root@k8s-master01 ~]# vim coredns-n56.yaml
Find and change the following items:
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes magedu.local in-addr.arpa ip6.arpa {    # changed to the cluster domain set in the hosts file
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . 223.6.6.6 {                              # changed to an upstream (public) DNS server
            max_concurrent 1000
...
      - name: coredns
        image: magedu.gfeng.net/magedu/coredns:1.8.0       # changed to the image tagged and pushed to the local registry
        resources:
          limits:
            memory: 256Mi                                  # sized for this test; adjust to your own environment
...
spec:
  type: NodePort                                           # added
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.2                                    # must lie inside the SERVICE_CIDR set in the hosts file; see the check below
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
    nodePort: 30009                                        # exposed NodePort, used later to reach the metrics page over the web
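The clusterIP above must fall inside the SERVICE_CIDR defined in the cluster hosts file; one way to double-check before applying the manifest (paths from this deployment):
grep SERVICE_CIDR /etc/kubeasz/clusters/k8s-001/hosts    # SERVICE_CIDR="10.100.0.0/16"
kubectl get svc kubernetes -n default                    # the apiserver service sits at the first address of that range (10.100.0.1)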
When the changes are done, save the file and apply it:
[root@k8s-master01 ~]# kubectl apply -f coredns-n56.yaml
# Verify that coredns is running
# Verify that pods can resolve domain names:
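A quick check of both points, assuming the net-test pods created earlier are still running and the cluster domain is magedu.local as set in the hosts file:
kubectl get pods -n kube-system -o wide | grep coredns                        # the coredns pod should be Running
kubectl exec -it net-test1 -- nslookup kubernetes.default.svc.magedu.local    # should resolve to the apiserver service IP (10.100.0.1)
kubectl exec -it net-test1 -- ping -c 3 www.baidu.com                         # external names resolved via the forward plugin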
# Check the coredns metrics
http://172.16.1.193:30009/metrics
6. Deploy the dashboard
Pull the images, tag them and push them to the registry:
[root@k8s-master01 ~]# docker pull kubernetesui/dashboard:v2.3.1
[root@k8s-master01 ~]# docker tag kubernetesui/dashboard:v2.3.1 magedu.gfeng.net/magedu/dashboard:v2.3.1
[root@k8s-master01 ~]# docker push magedu.gfeng.net/magedu/dashboard:v2.3.1
[root@k8s-master01 ~]# docker pull kubernetesui/metrics-scraper:v1.0.6
[root@k8s-master01 ~]# docker tag kubernetesui/metrics-scraper:v1.0.6 magedu.gfeng.net/magedu/metrics-scraper:v1.0.6
[root@k8s-master01 ~]# docker push magedu.gfeng.net/magedu/metrics-scraper:v1.0.6
[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
# Edit the downloaded manifest
[root@k8s-master01 ~]# mv recommended.yaml dashboard-v2.3.1.yaml
[root@k8s-master01 ~]# vim dashboard-v2.3.1.yaml
Change and add the following:
spec:
  type: NodePort                                               # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30002                                          # exposed NodePort for web access
  selector:
...
    spec:
      containers:
        - name: kubernetes-dashboard
          image: magedu.gfeng.net/magedu/dashboard:v2.3.1      # changed to the image pushed to the local registry
...
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: magedu.gfeng.net/magedu/metrics-scraper:v1.0.6   # changed to the image pushed to the local registry
After the changes, save the file and apply it:
[root@k8s-master01 ~]# kubectl apply -f dashboard-v2.3.1.yaml
Open in a browser:
https://172.16.1.192:30002
The login page asks for a token, so an additional YAML file is needed to create a service account and generate the token.
Upload admin-user.yml to the master node and apply it to obtain the token; a typical version is sketched below.
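The admin-user.yml itself is not reproduced in the original post; a typical version, following the upstream dashboard documentation (adjust names and namespace to your environment), and the command that prints the token, would look like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply it and read the token (on k8s 1.21 the service-account secret is still created automatically):
kubectl apply -f admin-user.yml
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d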
# Open the web page again, log in with the token, and the dashboard is available.
The deployment is now complete.