Kubernetes Cluster Setup: Master Configuration

All deployment packages used in this series are the latest or latest stable versions. To get the installation packages, reply 【K8s实战】 on the official WeChat account.

Today we finally get to the main topic~~

Generate the Kubernetes Certificates and Private Keys

1. Create the Kubernetes CA certificate

[root@master-01 ~]# cd /etc/kubernetes/ssl/
[root@master-01 ssl]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
[root@master-01 ssl]# cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Hangzhou",
      "ST": "Hangzhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
[root@master-01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/03/11 13:31:25 [INFO] generating a new CA key and certificate from CSR
2019/03/11 13:31:25 [INFO] generate received request
2019/03/11 13:31:25 [INFO] received CSR
2019/03/11 13:31:25 [INFO] generating key: rsa-2048
2019/03/11 13:31:25 [INFO] encoded CSR
2019/03/11 13:31:25 [INFO] signed certificate with serial number 389496824246932488061213260650987853035508114869

2. Create the API server certificate

cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.254.0.1",
    "127.0.0.1",
    "192.168.209.130",
    "192.168.209.131",
    "192.168.209.132",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Hangzhou",
      "ST": "Hangzhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
[root@master-01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2019/03/11 13:37:15 [INFO] generate received request
2019/03/11 13:37:15 [INFO] received CSR
2019/03/11 13:37:15 [INFO] generating key: rsa-2048
2019/03/11 13:37:15 [INFO] encoded CSR
2019/03/11 13:37:15 [INFO] signed certificate with serial number 6854468277367511705654374211483566788899708262
2019/03/11 13:37:15 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

3. Create the kube-proxy certificate

[root@master-01 ssl]# cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Hangzhou",
      "ST": "Hangzhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
[root@master-01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/03/11 13:38:28 [INFO] generate received request
2019/03/11 13:38:28 [INFO] received CSR
2019/03/11 13:38:28 [INFO] generating key: rsa-2048
2019/03/11 13:38:29 [INFO] encoded CSR
2019/03/11 13:38:29 [INFO] signed certificate with serial number 73260375139622114632621711114381030315978992174
2019/03/11 13:38:29 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master-01 ssl]# ls
ca-config.json  ca-csr.json  ca.pem          kube-proxy-csr.json  kube-proxy.pem  server-csr.json  server.pem
ca.csr          ca-key.pem   kube-proxy.csr  kube-proxy-key.pem   server.csr      server-key.pem

Deploy the Master Components


The Kubernetes master node runs the following components: kube-apiserver, kube-scheduler, and kube-controller-manager. kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the other instances stand by, which is what makes the three-master high-availability setup work.
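
Once all three masters are up, a quick way to check which instance currently holds a lease (a sketch; in v1.13 the leader election result is recorded as an annotation on an Endpoints object in kube-system):

# the annotation value names the current leader (holderIdentity)
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep control-plane.alpha.kubernetes.io/leader
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane.alpha.kubernetes.io/leader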

Deploy kube-apiserver

1. Unpack the binaries

[root@master-01 ~]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@master-01 ~]# cd kubernetes/server/bin/
[root@master-01 bin]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl kube-proxy kubelet /usr/bin/

2. Generate the TLS bootstrap token required by kube-apiserver

[root@master-01 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
88b877ad90a3be4ac34711893262a014
[root@master-01 ~]# cat /etc/kubernetes/token.csv
88b877ad90a3be4ac34711893262a014,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
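
The listing above only shows the finished token.csv; a minimal sketch for producing it from a freshly generated token (the format is token,user,uid,"group") could be:

# generate a random token and write the bootstrap entry in one go
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /etc/kubernetes/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF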

3. Create the kube-apiserver configuration file

[root@master-01 ~]# cat /etc/kubernetes/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.209.130:2379,https://192.168.209.131:2379,https://192.168.209.132:2379 \
--bind-address=192.168.209.130 \
--secure-port=6443 \
--advertise-address=192.168.209.130 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/server.pem \
--tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/server.pem \
--etcd-keyfile=/etc/etcd/ssl/server-key.pem"

4. Create the kube-apiserver systemd unit file

[root@master-01 ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

5. Start the service

[root@master-01 ~]# systemctl daemon-reload
[root@master-01 ~]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master-01 ~]# systemctl start kube-apiserver
[root@master-01 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2019-03-11 14:04:02 CST; 8s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 86650 (kube-apiserver)
    Tasks: 10
   Memory: 343.7M
   CGroup: /system.slice/kube-apiserver.service
           └─86650 /usr/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.209.130:2379,https://192.168.209.1...
[root@master-01 ~]# ps -ef | grep kube-apiserver
root      86650      1 13 14:04 ?        00:00:08 /usr/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.209.130:2379,https://192.168.209.131:2379,https://192.168.209.132:2379 --bind-address=192.168.209.130 --secure-port=6443 --advertise-address=192.168.209.130 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/etc/kubernetes/ssl/server.pem --tls-private-key-file=/etc/kubernetes/ssl/server-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/etcd/ssl/ca.pem --etcd-certfile=/etc/etcd/ssl/server.pem --etcd-keyfile=/etc/etcd/ssl/server-key.pem
root      86742  26572  0 14:05 pts/0    00:00:00 grep --color=auto kube-apiserver
[root@master-01 ~]# netstat -tulpn | grep kube-apiserve
tcp        0      0 192.168.209.130:6443    0.0.0.0:*               LISTEN      86650/kube-apiserve
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      86650/kube-apiserve

Deploy kube-scheduler

1. Create the kube-scheduler configuration file

[root@master-01 ~]# cat /etc/kubernetes/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

2. Create the kube-scheduler systemd unit file

[root@master-01 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3. Start the service

[root@master-01 ~]# systemctl daemon-reload
[root@master-01 ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master-01 ~]# systemctl start kube-scheduler.service
[root@master-01 ~]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2019-03-11 14:13:06 CST; 4s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 87372 (kube-scheduler)
    Tasks: 9
   Memory: 46.4M
   CGroup: /system.slice/kube-scheduler.service
           └─87372 /usr/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Deploy kube-controller-manager

1. Create the kube-controller-manager configuration file

[root@master-01 ~]# cat /etc/kubernetes/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem"

2. Create the kube-controller-manager systemd unit file

[root@master-01 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3. Start the service

[root@master-01 ~]# systemctl daemon-reload
[root@master-01 ~]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master-01 ~]# systemctl start kube-controller-manager
[root@master-01 ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2019-03-11 14:35:36 CST; 6s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 89052 (kube-controller)
    Tasks: 8
   Memory: 51.1M
   CGroup: /system.slice/kube-controller-manager.service
           └─89052 /usr/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=12...

Sync Files to master-02 and master-03

1. Sync the binaries

[root@master-01 bin]# scp kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler 192.168.209.131:/usr/bin/
[root@master-01 bin]# scp kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler 192.168.209.132:/usr/bin/

2. Sync the configuration files

[root@master-01 kubernetes]# scp -r kube-apiserver kube-controller-manager kube-scheduler ssl/ token.csv 192.168.209.131:/etc/kubernetes/
[root@master-01 kubernetes]# scp -r kube-apiserver kube-controller-manager kube-scheduler ssl/ token.csv 192.168.209.132:/etc/kubernetes/

3. Sync the systemd unit files

[root@master-01 system]# scp kube-apiserver.service kube-controller-manager.service kube-scheduler.service 192.168.209.131:/usr/lib/systemd/system/
[root@master-01 system]# scp kube-apiserver.service kube-controller-manager.service kube-scheduler.service 192.168.209.132:/usr/lib/systemd/system/

Note: after copying, the IP addresses in the configuration files must be changed to match each host.
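
For example, on master-02 (a sketch assuming its IP is 192.168.209.131), only the listen addresses should be touched, since a blanket replace of 192.168.209.130 would also corrupt the --etcd-servers list:

# rewrite only the bind and advertise addresses in the apiserver config
sed -i 's/--bind-address=192.168.209.130/--bind-address=192.168.209.131/' /etc/kubernetes/kube-apiserver
sed -i 's/--advertise-address=192.168.209.130/--advertise-address=192.168.209.131/' /etc/kubernetes/kube-apiserver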

4. Start the services on master-02 and master-03

# master-02
[root@master-02 ~]# systemctl daemon-reload
[root@master-02 ~]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master-02 ~]# systemctl start kube-apiserver
[root@master-02 ~]# systemctl daemon-reload
[root@master-02 ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master-02 ~]# systemctl start kube-scheduler.service
[root@master-02 ~]# systemctl daemon-reload
[root@master-02 ~]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master-02 ~]# systemctl start kube-controller-manager

# master-03
[root@master-03 yum.repos.d]# systemctl daemon-reload
[root@master-03 yum.repos.d]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service
[root@master-03 ~]# systemctl start kube-apiserver
[root@master-03 ~]# systemctl daemon-reload
[root@master-03 ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master-03 ~]# systemctl start kube-scheduler.service
[root@master-03 ~]# systemctl daemon-reload
[root@master-03 ~]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master-03 ~]# systemctl start kube-controller-manager

Check the Cluster Status

# master-01
[root@master-01 kubernetes]# kubectl get cs
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}

# master-02
[root@master-02 ~]# kubectl get cs
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}

# master-03
[root@master-03 yum.repos.d]# kubectl get cs
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}

At this point the masters are fully deployed. Next we deploy the node components; here they will also be installed on the three master hosts, so each machine acts as both a control-plane node and a worker node.

Deploy the Node Components


Once TLS authentication is enabled on the master's apiserver, a node's kubelet can only communicate with the apiserver using a valid certificate signed by the CA. Signing certificates by hand becomes tedious when there are many nodes, which is why the TLS Bootstrapping mechanism exists: the kubelet automatically requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically.

The authentication workflow looks roughly like this:

(Figure: TLS bootstrapping authentication workflow)

Bind the kubelet-bootstrap user to the system cluster role

Run on a master (this needs to be executed only once):

[root@master-01 kubernetes]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

Create the kubelet bootstrap kubeconfig files via a script

[root@master-01 kubernetes]# cat environment.sh
#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=88b877ad90a3be4ac34711893262a014
KUBE_APISERVER="https://192.168.209.130:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Run the script:

[root@master-01 kubernetes]# sh environment.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".

Note: copy the generated bootstrap.kubeconfig and kube-proxy.kubeconfig files to /etc/kubernetes/ on the Node machines (every host that runs the node components needs these files).

Create the kubelet.config file

[root@master-01 kubernetes]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.209.130
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
  webhook:
    enabled: false

1. Deploy kubelet

Create the kubelet configuration file

[root@master-01 kubernetes]# cat kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.209.130 \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--config=/etc/kubernetes/kubelet.config \
--cert-dir=/etc/kubernetes/ssl \
--pod-infra-container-image=hub.test.tech/library/pod-infrastructure:latest"

The pod infrastructure image here comes from an internal Harbor registry (it must be pushed to the private registry first); the image archive is available by replying “k8s实战” on the official account.
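
A rough sketch of pushing the image into the private Harbor (the archive name pod-infrastructure.tar.gz and the source tag are assumptions, and it presumes you are already logged in to hub.test.tech):

# load the downloaded image archive into the local docker daemon
docker load -i pod-infrastructure.tar.gz
# retag it for the private registry and push
docker tag pod-infrastructure:latest hub.test.tech/library/pod-infrastructure:latest
docker push hub.test.tech/library/pod-infrastructure:latest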

Parameter notes:

  • --hostname-override: the hostname this node registers with in the cluster

  • --kubeconfig: location of the kubeconfig file, which is generated automatically after bootstrap

  • --bootstrap-kubeconfig: the bootstrap.kubeconfig file generated earlier

  • --cert-dir: directory where the issued certificates are stored

  • --pod-infra-container-image: the image that manages the Pod network (the infrastructure container)

Create the kubelet systemd unit file

[root@master-01 kubernetes]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start the service

[root@master-01 kubernetes]# systemctl daemon-reload
[root@master-01 kubernetes]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@master-01 kubernetes]# systemctl start kubelet

After starting, the node does not join the cluster immediately; the apiserver must first approve its certificate request.

View the certificate signing requests

[root@master-01 kubernetes]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-n7I-bungHuexT06s8OcuXQmkeGZFU5N4I18m0yFzDpM   17s   kubelet-bootstrap   Pending

Approve the node request

[root@master-01 kubernetes]# kubectl certificate approve node-csr-n7I-bungHuexT06s8OcuXQmkeGZFU5N4I18m0yFzDpM
certificatesigningrequest.certificates.k8s.io/node-csr-n7I-bungHuexT06s8OcuXQmkeGZFU5N4I18m0yFzDpM approved

Check the CSR again

[root@master-01 kubernetes]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-n7I-bungHuexT06s8OcuXQmkeGZFU5N4I18m0yFzDpM   2m7s   kubelet-bootstrap   Approved,Issued

View node information

[root@master-01 kubernetes]# kubectl get node -o wide
NAME              STATUS   ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
192.168.209.130   Ready    <none>   7m23s   v1.13.0   192.168.209.130   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64       docker://18.9.3

2. Deploy kube-proxy

Create the kube-proxy configuration file

[root@master-01 kubernetes]# cat /etc/kubernetes/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.209.130 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig"

Create the kube-proxy systemd unit file

[root@master-01 kubernetes]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/kube-proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

[root@master-01 kubernetes]# systemctl daemon-reload
[root@master-01 kubernetes]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@master-01 kubernetes]# systemctl start kube-proxy
[root@master-01 kubernetes]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2019-03-11 15:26:13 CST; 4s ago
 Main PID: 93067 (kube-proxy)
    Tasks: 0
   Memory: 6.6M
   CGroup: /system.slice/kube-proxy.service
           ‣ 93067 /usr/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.209.130 --cluster-cidr=10.254.0.0/16 --ku...

Sync the Configuration

Sync the configuration and related files to master-02, master-03, and node-01.

Sync the configuration files

# master-02
[root@master-01 kubernetes]# scp kubelet kubelet.config kubelet.kubeconfig kube-proxy bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.209.131:/etc/kubernetes/
# master-03
[root@master-01 kubernetes]# scp kubelet kubelet.config kubelet.kubeconfig kube-proxy bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.209.132:/etc/kubernetes/
# node-01
[root@master-01 kubernetes]# scp kubelet kubelet.config kubelet.kubeconfig kube-proxy bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.209.133:/etc/kubernetes/

Sync the systemd unit files

# master-02
[root@master-01 kubernetes]# cd /usr/lib/systemd/system/
[root@master-01 system]# scp kubelet.service kube-proxy.service 192.168.209.131:/usr/lib/systemd/system/
# master-03
[root@master-01 system]# scp kubelet.service kube-proxy.service 192.168.209.132:/usr/lib/systemd/system/
# node-01
[root@master-01 system]# scp kubelet.service kube-proxy.service 192.168.209.133:/usr/lib/systemd/system/

Note: adjust the IP addresses in the configuration files to match each host.
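
On node-01, for instance (a sketch assuming its IP is 192.168.209.133), the three per-node files can be adjusted in one pass; unlike the apiserver config, these files reference no other hosts, so a global replace is safe here:

# point hostname-override and the kubelet listen address at this node
for f in /etc/kubernetes/kubelet /etc/kubernetes/kubelet.config /etc/kubernetes/kube-proxy; do
  sed -i 's/192.168.209.130/192.168.209.133/g' "$f"
done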

Start the services on master-02, master-03, and node-01.
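
On each of the three hosts the sequence mirrors what we did on master-01, roughly:

systemctl daemon-reload
systemctl enable kubelet kube-proxy
systemctl start kubelet kube-proxy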

Check the certificate requests again

[root@master-01 kubernetes]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-S0rYvGBacHcoFPmJrYsqHKTz4d_Yz6KJfpD9yKqsMVY   2m20s   kubelet-bootstrap   Pending
node-csr-giUaUWQtl39ZfcrfWmQmKtm5tUjMUddEUX-qJX198Mk   4m9s    kubelet-bootstrap   Pending
node-csr-rXqLehzmkMQlXuql_NLVm_MlTxnow9c5QoZLpwS283g   27s     kubelet-bootstrap   Pending

Three new pending request records have appeared.

Approve the requests

[root@master-01 kubernetes]# kubectl certificate approve node-csr-S0rYvGBacHcoFPmJrYsqHKTz4d_Yz6KJfpD9yKqsMVY
certificatesigningrequest.certificates.k8s.io/node-csr-S0rYvGBacHcoFPmJrYsqHKTz4d_Yz6KJfpD9yKqsMVY approved
[root@master-01 kubernetes]# kubectl certificate approve node-csr-giUaUWQtl39ZfcrfWmQmKtm5tUjMUddEUX-qJX198Mk
certificatesigningrequest.certificates.k8s.io/node-csr-giUaUWQtl39ZfcrfWmQmKtm5tUjMUddEUX-qJX198Mk approved
[root@master-01 kubernetes]# kubectl certificate approve node-csr-rXqLehzmkMQlXuql_NLVm_MlTxnow9c5QoZLpwS283g
certificatesigningrequest.certificates.k8s.io/node-csr-rXqLehzmkMQlXuql_NLVm_MlTxnow9c5QoZLpwS283g approved

View CSR details

[root@master-01 ~]# kubectl describe csr node-csr-S0rYvGBacHcoFPmJrYsqHKTz4d_Yz6KJfpD9yKqsMVY
Name:               node-csr-S0rYvGBacHcoFPmJrYsqHKTz4d_Yz6KJfpD9yKqsMVY
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Mon, 11 Mar 2019 17:20:08 +0800
Requesting User:    kubelet-bootstrap
Status:             Approved,Issued
Subject:
  Common Name:      system:node:192.168.209.132
  Serial Number:
  Organization:     system:nodes
Events:             <none>

Requesting User: the user that submitted the CSR; kube-apiserver authenticates and authorizes it.

Subject: the information for the certificate being requested.

The certificate's CN is system:node:192.168.209.132 and its Organization is system:nodes; kube-apiserver's Node authorization mode grants this certificate the corresponding node permissions.

View node information

[root@master-01 kubernetes]# kubectl get node
NAME              STATUS   ROLES    AGE    VERSION
192.168.209.130   Ready    <none>   136m   v1.13.0
192.168.209.131   Ready    <none>   104s   v1.13.0
192.168.209.132   Ready    <none>   69s    v1.13.0
192.168.209.133   Ready    <none>   15s    v1.13.0

Enable Automatic Approval of Certificate Requests

Create three ClusterRoleBindings, used respectively to auto-approve client certificates, renew client certificates, and renew server certificates:

[root@master-01 ~]# cat csr-crb.yaml
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io

Create the role bindings

[root@master-01 ~]# kubectl apply -f csr-crb.yaml
clusterrolebinding.rbac.authorization.k8s.io/auto-approve-csrs-for-group created
clusterrolebinding.rbac.authorization.k8s.io/node-client-cert-renewal created
clusterrole.rbac.authorization.k8s.io/approve-node-server-renewal-csr created
clusterrolebinding.rbac.authorization.k8s.io/node-server-cert-renewal created

Typically, within 1 to 10 minutes any node CSRs that have not yet been signed will be automatically approved.
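
You can watch the automatic approvals come through with:

kubectl get csr --watch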

At this point, the master and node components are all deployed and the whole cluster is in a healthy state.

Taking It for a Spin

Let's run an nginx deployment to see the cluster in action.

# run a pod
kubectl run nginx --image=nginx --port=80 --replicas=1
# expose a NodePort service for the pod
kubectl expose deploy nginx --port=80 --target-port=80 --type=NodePort

View the pod and svc

[root@master-01 kubernetes]# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-7899755b7-7s8fl   1/1     Running   0          13m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.254.0.1      <none>        443/TCP        4h45m
service/nginx        NodePort    10.254.58.237   <none>        80:34542/TCP   6m23s

Now you can access port 34542 on any node's IP.
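
For example, from any host:

curl http://192.168.209.130:34542

This should return the nginx welcome page.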

That wraps up the basic Kubernetes cluster installation. Next up: K8s peripheral components and more. Stay tuned, and thank you!

END

If you found this useful, please forward, share, and like it so more people can learn. Your small gesture is the best support for the author. Thank you very much!
