Kubernetes Fundamentals
Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation. It has a large and rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.
1. Introduction
Kubernetes (K8S for short) is a distributed architecture solution built on container technology. It is Google's open-source container cluster management system, whose design was inspired by Borg, Google's internal container management system, and it inherits more than a decade of Google's experience running container clusters. It gives containerized applications a complete set of capabilities, including deployment, resource scheduling, service discovery, and dynamic scaling, which greatly simplifies managing large container clusters.
Kubernetes is a complete platform for supporting distributed systems. It offers comprehensive cluster management: multi-level security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in intelligent load balancer, strong fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource-scheduling mechanism, and fine-grained resource quota management.
For cluster management, Kubernetes divides the machines in a cluster into Master nodes and worker nodes (Nodes). The Master runs a group of cluster-management processes (kube-apiserver, kube-controller-manager, and kube-scheduler) that together provide resource management, Pod scheduling, elastic scaling, security control, monitoring, and error correction for the whole cluster, all fully automatically. Nodes are the workers that run the actual applications; the smallest unit of execution Kubernetes manages on a Node is the Pod. Each Node runs the kubelet and kube-proxy service processes, which are responsible for creating, starting, monitoring, restarting, and destroying Pods, as well as implementing a software-mode load balancer.
Kubernetes solves two classic problems of traditional IT systems: service scaling and service upgrades. When software is not particularly complex and peak traffic is modest, deploying a backend is simple: install a few dependencies on a virtual machine, compile the project, and run it. But as software grows more complex, a backend is no longer a single monolith; it is composed of many services with different responsibilities. The complex topology between those services, together with performance requirements that a single machine can no longer meet, makes deployment and operations very difficult, and deploying and operating large clusters has become a pressing need.
Kubernetes has not only come to dominate the container-orchestration market, it has also changed how operations work. It blurs the boundary between development and operations while making the DevOps role clearer: every software engineer can use Kubernetes to define the topology between services, the number of online nodes, and resource usage, and can quickly perform horizontal scaling, blue-green deployments, and other operations that used to be complex.
2. Architecture
Kubernetes follows a very traditional client-server architecture. Clients talk to a cluster either through the RESTful API directly or via kubectl; in practice there is little difference between the two, since kubectl is simply a wrapper around the RESTful API that Kubernetes exposes. Every Kubernetes cluster consists of a set of Master nodes and a series of Worker nodes; the Masters store the cluster state and allocate and schedule resources for Kubernetes objects.
Master
The Master receives client requests, arranges container execution, and runs the control loops that migrate the cluster's state toward the desired state. It consists of three components:
API Server
Handles user requests. Its main job is to expose the RESTful API, serving both read requests that inspect cluster state and write requests that change it. It is also the only component that communicates with the etcd cluster.
Controller Manager
The controller manager runs a series of controller processes that continuously reconcile the objects in the cluster toward the user's desired state in the background. When a service's state changes, the relevant controller notices the change and starts driving it toward the target state.
Scheduler
The scheduler selects the Worker node on which each Pod in the cluster should run. It picks the node that best satisfies the Pod's requirements, and it runs every time a Pod needs to be scheduled.
Node
The Node side is comparatively simple and consists mainly of two components, kubelet and kube-proxy:
kubelet is the primary service on a node. It periodically receives new or modified Pod specifications from the API Server, ensures that the Pods on the node and their containers run properly, drives the node toward the desired state, and reports the host's health status back to the Master.
kube-proxy manages the host's subnet and exposes services to the outside world; it works by forwarding requests across multiple isolated networks to the correct Pod or container.
Kubernetes architecture diagram
In this architecture diagram, the services are divided into those running on worker nodes and those forming the cluster-level control plane.
Kubernetes is made up of the following core components:
etcd stores the state of the entire cluster;
the apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration, and discovery;
the controller manager maintains cluster state, handling fault detection, automatic scaling, rolling updates, and so on;
the scheduler handles resource scheduling, placing Pods on the appropriate machines according to the configured scheduling policies;
the kubelet maintains container lifecycles and also manages volumes (CVI) and networking (CNI);
the container runtime manages images and performs the actual running of Pods and containers (CRI);
kube-proxy provides in-cluster service discovery and load balancing for Services;
Besides the core components, a number of add-ons are recommended:
kube-dns provides DNS for the whole cluster
Ingress Controller provides external access to services
Heapster provides resource monitoring
Dashboard provides a GUI
Federation provides clusters across availability zones
Fluentd-elasticsearch provides cluster log collection, storage, and querying
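On a running cluster you can see most of these components and add-ons as pods in the kube-system namespace, which is a quick way to map this list onto a real installation (output varies by setup):

kubectl get pods -n kube-system -o wide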
3. Installation
There are two ways to deploy Kubernetes. The first is the binary method, which is customizable but complex and error-prone. The second uses the kubeadm tool, which is simple to deploy but offers little customization.
Binary Installation
Environment preparation
Hostname                IP                Role
kubernetes-master-01    172.26.203.203    Master-01
kubernetes-master-02    172.26.203.199    Master-02
kubernetes-master-03    172.26.203.204    Master-03
kubernetes-node-01      172.26.203.202    Node-01
kubernetes-node-02      172.26.203.200    Node-02
kubernetes-master-vip   172.26.203.201    Master-vip
Upgrade the kernel
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-ml{,-devel}-5.8.3-1.el7.elrepo.x86_64.rpm
yum install -y kernel-ml{,-devel}-5.8.3-1.el7.elrepo.x86_64.rpm
Set the new kernel as the default
cat /boot/grub2/grub.cfg |grep menuentry
grub2-set-default "CentOS Linux (5.8.3-1.el7.elrepo.x86_64) 7 (Core)"
Verify the change
grub2-editenv list
reboot
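After the reboot, confirm the new kernel is active:

uname -r
# expected: 5.8.3-1.el7.elrepo.x86_64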
Enable IPVS support
After confirming the kernel version, enable IPVS:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
/sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe ${kernel_module}
fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
Configure kernel parameters (sysctl)
echo """
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
""" > /etc/sysctl.conf
sysctl -p
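The two bridge-related keys only exist while the br_netfilter module is loaded; if sysctl -p reports missing keys, load the module and persist it across reboots first:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf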
Synchronize time
# Add a cron job:
crontab -e
*/5 * * * * ntpdate ntp.aliyun.com > /dev/null 2>&1
Distribute SSH keys
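The loop below assumes an RSA key pair already exists on the host you are distributing from; if not, generate one first:

ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa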
for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03 kubernetes-node-01 kubernetes-node-02 kubernetes-master-vip; do
ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i ;
done
Issue certificates
Install the signing tools (cfssl and cfssljson)
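The cfssl and cfssljson binaries are assumed to have been downloaded beforehand, for example from the cfssl R1.2 release (adjust the version as needed):

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O cfssljson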
chmod +x cfssl cfssljson
mv cfssl cfssljson /usr/local/bin/
Components that need certificates:
admin user
kubelet
kube-controller-manager
kube-proxy
kube-scheduler
kube-api
Create the CA configuration file
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF
Create the CA certificate signing request
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shanghai",
"O": "Organization",
"OU": "Organizational Unit",
"ST": "Organization alias"
}
]
}
EOF
Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
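This leaves ca.pem, ca-key.pem, and ca.csr in the current directory; a quick sanity check:

ls ca.pem ca-key.pem ca.csr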
Create the admin user CSR
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shanghai",
"O": "Organization",
"OU": "Organizational Unit",
"ST": "Organization alias"
}
]
}
EOF
Generate the admin user certificate
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
kubelet node certificates
kubernetes-node-01
cat > kubernetes-node-01-csr.json <<EOF
{
"CN": "system:node:kubernetes-node-01",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shanghai",
"O": "Organization",
"OU": "Organizational Unit",
"ST": "Organization alias"
}
]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=kubernetes-node-01,172.26.203.202 \
  -profile=kubernetes \
  kubernetes-node-01-csr.json | cfssljson -bare kubernetes-node-01
kubernetes-node-02
cat > kubernetes-node-02-csr.json <<EOF
{
"CN": "system:node:kubernetes-node-02",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shanghai",
"O": "Organization",
"OU": "Organizational Unit",
"ST": "Organization alias"
}
]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=kubernetes-node-02,172.26.203.200 \
  -profile=kubernetes \
  kubernetes-node-02-csr.json | cfssljson -bare kubernetes-node-02
Controller Manager client certificate
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shanghai",
"O": "Organization",
"OU": "Organizational Unit",
"ST": "Organization alias"
}
]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
kube-proxy client certificate
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shanghai",
"O": "Organization",
"OU": "Organizational Unit",
"ST": "Organization alias"
}
]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy
Scheduler client certificate
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shanghai",
"O": "Organization",
"OU": "Organizational Unit",
"ST": "Organization alias"
}
]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Kubernetes API server certificate
CERT_HOSTNAME=10.32.0.1,172.26.203.203,kubernetes-master-01,172.26.203.199,kubernetes-master-02,172.26.203.204,kubernetes-master-03,172.26.203.202,kubernetes-node-01,172.26.203.200,kubernetes-node-02,172.26.203.201,kubernetes-master-vip,127.0.0.1,localhost,kubernetes.default
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shanghai",
"O": "Organization",
"OU": "Organizational Unit",
"ST": "Organization alias"
}
]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${CERT_HOSTNAME} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
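Optionally confirm that every hostname and IP in CERT_HOSTNAME made it into the certificate's SAN list (assumes openssl is installed):

openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'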
Service account certificate
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shanghai",
"O": "Organization",
"OU": "Organizational Unit",
"ST": "Organization alias
}
]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account
Copy the certificates to each node.
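A possible distribution loop, matching the host table above (a sketch; which files each node needs follows from the later steps, with node certificates going to the workers and the shared API server and service-account certificates to the masters):

for i in kubernetes-node-01 kubernetes-node-02; do
  scp ca.pem ${i}.pem ${i}-key.pem root@${i}:/root/
done
for i in kubernetes-master-01 kubernetes-master-02 kubernetes-master-03; do
  scp ca.pem ca-key.pem kubernetes.pem kubernetes-key.pem \
      service-account.pem service-account-key.pem root@${i}:/root/
done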
Create the kubeconfigs
A kubeconfig is used for authentication between Kubernetes components, and between users and the cluster.
Entity     Description
Cluster    The API server's address and its base64-encoded certificate
User       User information: an authenticated username with its certificate and key, or a service account token
Context    A reference tying a cluster to a user; very convenient when you work with multiple clusters and users
Generate the kubelet kubeconfigs
chmod +x kubectl
cp kubectl /usr/local/bin/
for instance in kubernetes-node-01 kubernetes-node-02; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://172.26.203.201:6443 \
    --kubeconfig=${instance}.kubeconfig
  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig
  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig
  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
Generate the kube-proxy kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://172.26.203.201:6443 \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Generate the kube-controller-manager kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
Generate the kube-scheduler kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
Generate the admin kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=admin \
  --kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
Copy the kubeconfigs to each node
for i in kubernetes-node-02 kubernetes-node-01; do
  scp $i.kubeconfig kube-proxy.kubeconfig root@$i:/root/
done
for i in kubernetes-master-03 kubernetes-master-02 kubernetes-master-01; do
  scp etcd-v3.4.10-linux-amd64.tar.gz root@$i:/opt/
done
Encryption config
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
for i in kubernetes-master-03 kubernetes-master-02 kubernetes-master-01 ; do
scp encryption-config.yaml root@$i:/opt/
done
Deploy the etcd cluster
wget https://mirrors.huaweicloud.com/etcd/v3.4.10/etcd-v3.4.10-linux-amd64.tar.gz
tar xzf etcd-v3.4.10-linux-amd64.tar.gz
mv etcd-v3.4.10-linux-amd64/etcd* /usr/local/bin/
mkdir -p /etc/etcd /var/lib/etcd
cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
ETCD_NAME=$(hostname)
INTERNAL_IP=$(hostname -i)
INITIAL_CLUSTER=kubernetes-master-01=https://172.26.203.203:2380,kubernetes-master-02=https://172.26.203.199:2380,kubernetes-master-03=https://172.26.203.204:2380
cat << EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \
--name ${ETCD_NAME} \
--cert-file=/etc/etcd/kubernetes.pem \
--key-file=/etc/etcd/kubernetes-key.pem \
--peer-cert-file=/etc/etcd/kubernetes.pem \
--peer-key-file=/etc/etcd/kubernetes-key.pem \
--trusted-ca-file=/etc/etcd/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \
--listen-peer-urls https://${INTERNAL_IP}:2380 \
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \
--advertise-client-urls https://${INTERNAL_IP}:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster ${INITIAL_CLUSTER} \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Start etcd
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
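Once etcd is running on all three masters, the member list should show every node (etcdctl v3 syntax, reusing the same certificates):

ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem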
Deploy the Master nodes
Install the required binaries (this assumes the kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl release binaries have already been downloaded to the current directory)
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
Kubernetes API server configuration
mkdir -p /var/lib/kubernetes/
mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
   service-account-key.pem service-account.pem \
   encryption-config.yaml /var/lib/kubernetes/
Configure the kube-apiserver service
CONTROLLER0_IP=172.26.203.203
CONTROLLER1_IP=172.26.203.199
CONTROLLER2_IP=172.26.203.204
INTERNAL_IP=$(hostname -i)
cat << EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--advertise-address=${INTERNAL_IP} \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file=/var/lib/kubernetes/ca.pem \
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--etcd-cafile=/var/lib/kubernetes/ca.pem \
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
--etcd-servers=https://$CONTROLLER0_IP:2379,https://$CONTROLLER1_IP:2379,https://$CONTROLLER2_IP:2379 \
--event-ttl=1h \
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
--kubelet-https=true \
--runtime-config=api/all=true \
--service-account-key-file=/var/lib/kubernetes/service-account.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
--v=2 \
--kubelet-preferred-address-types=InternalIP,InternalDNS,Hostname,ExternalIP,ExternalDNS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Create the kube-controller-manager service file
mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--address=0.0.0.0 \
--cluster-cidr=10.100.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
--leader-elect=true \
--root-ca-file=/var/lib/kubernetes/ca.pem \
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--use-service-account-credentials=true \
--allocate-node-cidrs=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
kube-scheduler configuration
mv kube-scheduler.kubeconfig /var/lib/kubernetes/
mkdir -p /etc/kubernetes/config
cat <<EOF | tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
cat <<EOF | tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--config=/etc/kubernetes/config/kube-scheduler.yaml \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Start the services
systemctl daemon-reload
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
systemctl start kube-apiserver kube-controller-manager kube-scheduler
HTTP health check
kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
kubelet authorization (RBAC)
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
- ""
Bind the kube-apiserver user to the role
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
Configure the load balancer
yum install haproxy -y
cat <<EOF | tee /etc/haproxy/haproxy.cfg
frontend k8s-api
    bind 172.26.203.201:6443
    bind 172.26.203.201:443
    mode tcp
    option tcplog
    default_backend k8s-api

backend k8s-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-api-1 172.26.203.203:6443 check
    server k8s-api-2 172.26.203.199:6443 check
    server k8s-api-3 172.26.203.204:6443 check
EOF
Start the service
systemctl start haproxy
systemctl enable haproxy
If everything is configured correctly, you should see output like the following:
curl --cacert ca.pem https://172.26.203.201:6443/version
{
"major": "1",
"minor": "17",
"gitVersion": "v1.17.0",
"gitCommit": "70132b0f130acc0bed193d9ba59dd186f0e634cf",
"gitTreeState": "clean",
"buildDate": "2019-12-07T21:12:17Z",
"goVersion": "go1.13.4",
"compiler": "gc",
"platform": "linux/amd64"
}
Deploy the Node (worker) nodes
Install Docker
Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Step 2: add the package repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Step 3: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
Step 4: start the Docker service
sudo service docker start
kubelet configuration
mkdir -p /var/lib/kubelet
mkdir -p /var/lib/kubernetes
mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
mv ca.pem /var/lib/kubernetes/
Create the kubelet config file
cat <<EOF | tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "10.100.0.0/16"
resolvConf: "/etc/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
Create the kubelet service file
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet \
  --config=/var/lib/kubelet/kubelet-config.yaml \
  --docker-endpoint=unix:///var/run/docker.sock \
  --image-pull-progress-deadline=2m \
  --network-plugin=cni \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --register-node=true \
  --cgroup-driver=systemd \
  --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
kube-proxy configuration
mkdir /var/lib/kube-proxy -p
mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: ""
clusterCIDR: "10.100.0.0/16"
EOF
kube-proxy service file
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Start and verify the kubelet and kube-proxy services
systemctl daemon-reload
systemctl enable kubelet kube-proxy
systemctl start kubelet kube-proxy
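With both services up, the workers should register themselves; verify from a machine holding admin.kubeconfig (nodes may stay NotReady until the network plugin below is installed):

kubectl get nodes --kubeconfig admin.kubeconfig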
Deploy the network plugin
Install the CNI plugins
mkdir /opt/cni/bin /etc/cni/net.d -p
cd /opt/cni/bin
wget https://github.com/containernetworking/plugins/releases/download/v0.8.3/cni-plugins-linux-amd64-v0.8.3.tgz
tar zxvf cni-plugins-linux-amd64-v0.8.3.tgz -C /opt/cni/bin
Install the network add-on
Run on a master node:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Deploy the DNS add-on
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
Verify the DNS pods:
kubectl get pods -l k8s-app=kube-dns -n kube-system
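For an end-to-end check, resolve a Service name from inside a Pod (busybox 1.28 is used here because nslookup in newer busybox images is known to be unreliable):

kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec busybox -- nslookup kubernetes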
kubeadm Installation
Master node installation
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
Node installation
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
Enable the services at boot
systemctl enable docker.service kubelet.service
Initialize the cluster
kubeadm init \
  --image-repository=registry.aliyuncs.com/google_containers \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
If the command finishes with a success message and prints a kubeadm join command, the installation succeeded.
Run the follow-up commands from that output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Note: to enable kubectl command completion:
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
Install the network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note: if the flannel image cannot be pulled, use this mirror workaround:
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64; docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
Join the cluster
Create a token
kubeadm token create --print-join-command
Join the cluster:
kubeadm join 10.0.0.50:6443 --token 038qwm.hpoxkc1f2fkgti3r --discovery-token-ca-cert-hash sha256:edcd2c212be408f741e439abe304711ffb0adbb3bedbb1b93354bfdc3dd13b04
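Back on the master, confirm the node registered; it becomes Ready once the flannel pod is running on it:

kubectl get nodes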