I recently changed jobs, and the new environment manages everything with Kubernetes containers, so I decided to consolidate my earlier notes into a blog post to make reviewing easier.
Kubernetes overview
Since the original diagrams contain information from my previous employer, they are not included here.
Kubernetes, abbreviated K8s (the 8 stands in for the eight letters "ubernete"), is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, planning, updates, and maintenance.
Kubernetes is a container orchestration engine open-sourced by Google that supports automated deployment, large-scale scalability, and containerized application management. When an application is deployed in production, multiple instances are usually run so that requests can be load-balanced across them.
In Kubernetes we can create multiple containers, each running one application instance, and use the built-in load-balancing policies to manage, discover, and access that group of instances, without operators having to do any complex manual configuration.
Key characteristics of Kubernetes:
Portable: supports public, private, hybrid, and multi-cloud environments
Extensible: modular, pluggable, hookable, composable
Automated: automatic deployment, automatic restarts, automatic replication, automatic scaling/expansion
Kubernetes architecture
A Kubernetes cluster consists of the node agent kubelet and the Master components (APIs, scheduler, etc.), all built on top of a distributed storage system.
Characteristics of the Kubernetes components
The services are split into those that run on worker nodes and those that form the cluster-level control plane.
A Kubernetes node runs the services required to host application containers, all under the control of the Master.
Every node, of course, also runs Docker, which handles image downloads and actually runs the containers.
Kubernetes is made up of the following core components:
etcd stores the state of the entire cluster;
the apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration, and discovery;
the controller manager maintains cluster state, handling failure detection, auto-scaling, rolling updates, and so on;
the scheduler handles resource scheduling, placing Pods onto machines according to the configured scheduling policies;
the kubelet maintains the container lifecycle on each node and also manages volumes (CVI) and networking (CNI);
the container runtime manages images and actually runs Pods and containers (CRI);
kube-proxy provides in-cluster service discovery and load balancing for Services;
Besides the core components, there are several recommended add-ons:
kube-dns provides DNS for the whole cluster
Ingress Controller provides an external entry point for services
Heapster provides resource monitoring
Dashboard provides a GUI
Federation provides clusters spanning availability zones
Fluentd-elasticsearch provides cluster log collection, storage, and querying
All of this can be looked up on the official site or its Chinese mirror; the rest of this post walks through my own manual Kubernetes deployment.
Manual deployment and installation
etcd:3.3.11
kubectl:
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
kubernetes-dashboard:v1.6.3
nginx-ingress-controller:0.9.0
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
calico:v3.2.6
Initialize the environment
Configure hosts entries
I am only testing in the test environment here; production uses the same configuration with different addresses.
Run on every server
# Edit /etc/hosts on every server so the hosts can communicate by hostname
vi /etc/hosts
172.16.16.86 incubator-dc-016
172.16.16.246 incubator-dc-002
172.16.16.250 incubator-dc-003
Disable the firewall
Run on every server
systemctl stop firewalld.service
# stop firewalld
systemctl disable firewalld.service
# prevent firewalld from starting at boot
firewall-cmd --state
# check the firewall state (shows "not running" when disabled, "running" when enabled)
Disable SELinux
Run on every server
$ setenforce 0
$ vim /etc/selinux/config
SELINUX=disabled
Disable swap
Run on every server
Kubernetes should use RAM only, not swap.
$ swapoff -a
$ vim /etc/fstab
Comment out the swap entry in /etc/fstab (one-liner sketch below).
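A sketch for the same thing on every server: it crudely comments out any /etc/fstab line that mentions swap, so back the file up first and check the result.
swapoff -a
cp /etc/fstab /etc/fstab.bak
sed -i '/\bswap\b/ s/^/#/' /etc/fstab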
Install the Go toolchain (optional)
https://golang.org/dl/
Download the Linux build of Go, extract it, and configure the environment variables:
vi /etc/profile
export GOROOT=/usr/local/go
export PATH=$GOROOT/bin:$PATH
$ source /etc/profile
Create the cluster certificates
Install cfssl
We use CloudFlare's PKI toolkit cfssl to generate the Certificate Authority (CA) certificate and key files.
mkdir /opt/k8s
cd /opt/k8s
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
mv cfssl_linux-amd64 cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssljson_linux-amd64 cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 cfssl-certinfo
chmod +x *
Create the CA certificate configuration
cd /opt/k8s
The config.json file:
vi config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
The csr.json file:
vi csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Nanjing",
"L": "Nanjing",
"O": "k8s",
"OU": "System"
}
]
}
Generate the CA certificate and private key
On the master:
cd /opt/k8s
./cfssl gencert -initca csr.json | ./cfssljson -bare ca
This produces three files: ca.csr, ca-key.pem, and ca.pem.
Distribute the certificates
Create the certificate directory on the master:
mkdir -p /etc/kubernetes/ssl
Copy all the files into that directory:
cp *.pem /etc/kubernetes/ssl
Copy the files to every Kubernetes machine:
scp -P53742 *.pem 172.16.16.246:/etc/kubernetes/ssl/
scp -P53742 *.pem 172.16.16.250:/etc/kubernetes/ssl/
chmod 777 /etc/kubernetes/ssl/*.pem
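With more nodes, a small loop saves typing; a sketch assuming the same node list and the same non-default SSH port as above:
for ip in 172.16.16.246 172.16.16.250; do
  scp -P53742 /opt/k8s/*.pem ${ip}:/etc/kubernetes/ssl/
done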
Install Docker
Install yum-config-manager:
yum -y install yum-utils
Add the Docker CE repo:
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
Refresh the repo cache:
yum makecache
Remove any previously installed Docker packages:
[root@incubator-dc-016 k8s]# yum remove -y doceker
Loaded plugins: fastestmirror
No Match for argument: doceker
No Packages marked for removal
[root@incubator-dc-016 k8s]# rpm -qa | grep docker
docker-ce-19.03.4-3.el7.x86_64
docker-ce-cli-19.03.4-3.el7.x86_64
[root@incubator-dc-016 k8s]# rpm -e --nodeps docker-ce-19.03.4-3.el7.x86_64
[root@incubator-dc-016 k8s]# rpm -e --nodeps docker-ce-cli-19.03.4-3.el7.x86_64
[root@incubator-dc-016 k8s]# rpm -qa | grep docker
Install Docker CE
yum install docker-ce -y
Adjust the Docker configuration
Back up the original unit file first, then edit it.
mv /usr/lib/systemd/system/docker.service /usr/lib/systemd/system/docker.service.bak
# edit the unit file
vi /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS $DOCKER_OPTS $DOCKER_DNS_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
Additional configuration
mkdir -p /usr/lib/systemd/system/docker.service.d/
# Notes:
# Add the following (the Environment value must stay on one line; if it wraps it will not be loaded)
# --iptables=false stops containers started with docker run from reaching the network; it is used because Calico has advanced features that restrict direct container-to-container traffic.
# In general, do not add --iptables=false; add it only when using Calico.
vi /usr/lib/systemd/system/docker.service.d/docker-options.conf
# (not added for now)
[Service]
Environment="DOCKER_OPTS=--insecure-registry=10.254.0.0/16 --graph=/opt/docker --registry-mirror=http://b438f72b.m.daocloud.io --disable-legacy-registry --iptables=false"
Reload the configuration and start Docker
systemctl daemon-reload
systemctl start docker
systemctl enable docker
Install the etcd cluster
Run on every server:
yum -y install etcd
cd /opt/k8s
vi etcd-csr.json
{
"CN": "etcd",
"hosts": [
"172.16.16.86",
"172.16.16.246",
"172.16.16.250"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Nanjing",
"L": "Nanjing",
"O": "k8s",
"OU": "System"
}
]
}
Generate the etcd certificate and key
/opt/k8s/cfssl gencert -ca=/opt/k8s/ca.pem \
-ca-key=/opt/k8s/ca-key.pem \
-config=/opt/k8s/config.json \
-profile=kubernetes etcd-csr.json | /opt/k8s/cfssljson -bare etcd
Check the generated files:
[root@k8s-master ssl]# ls etcd*
etcd.csr etcd-csr.json etcd-key.pem etcd.pem
Copy them to the etcd servers
etcd-1
cp etcd*.pem /etc/kubernetes/ssl/
etcd-2
scp -P53742 etcd*.pem 172.16.16.246:/etc/kubernetes/ssl/
etcd-3
scp -P53742 etcd*.pem 172.16.16.250:/etc/kubernetes/ssl/
If etcd runs as a non-root user, reading the certificates will fail with a permission error; fix it with:
chmod 644 /etc/kubernetes/ssl/etcd-key.pem
Modify the etcd configuration
#etcd-1
mv /usr/lib/systemd/system/etcd.service /usr/lib/systemd/system/etcd.service.bak
vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
User=etcd
#set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
--name=etcd1 \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
--peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls=https://172.16.16.86:2380 \
--listen-peer-urls=https://172.16.16.86:2380 \
--listen-client-urls=https://172.16.16.86:2379 \
--advertise-client-urls=https://172.16.16.86:2379 \
--initial-cluster-token=k8s-etcd-cluster \
--initial-cluster=etcd1=https://172.16.16.86:2380,etcd2=https://172.16.16.246:2380,etcd3=https://172.16.16.250:2380 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#etcd-2
vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
User=etcd
#set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
--name=etcd2 \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
--peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls=https://172.16.16.246:2380 \
--listen-peer-urls=https://172.16.16.246:2380 \
--listen-client-urls=https://172.16.16.246:2379 \
--advertise-client-urls=https://172.16.16.246:2379 \
--initial-cluster-token=k8s-etcd-cluster \
--initial-cluster=etcd1=https://172.16.16.86:2380,etcd2=https://172.16.16.246:2380,etcd3=https://172.16.16.250:2380 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#etcd-3
vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
User=etcd
#set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
--name=etcd3 \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
--peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls=https://172.16.16.250:2380 \
--listen-peer-urls=https://172.16.16.250:2380 \
--listen-client-urls=https://172.16.16.250:2379 \
--advertise-client-urls=https://172.16.16.250:2379 \
--initial-cluster-token=k8s-etcd-cluster \
--initial-cluster=etcd1=https://172.16.16.86:2380,etcd2=https://172.16.16.246:2380,etcd3=https://172.16.16.250:2380 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start etcd
Start the etcd service on every node:
systemctl enable etcd
systemctl start etcd
systemctl status etcd
# If it fails, use journalctl -f -t etcd and journalctl -u etcd to troubleshoot
Verify the etcd cluster
Check the cluster health:
etcdctl --endpoints=https://172.16.16.86:2379 \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
cluster-health
List the etcd cluster members:
etcdctl --endpoints=https://172.16.16.86:2379 \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
member list
Install the kubectl tool
Master node: 172.16.16.86
# install kubectl first
wget https://dl.k8s.io/v1.8.0/kubernetes-client-linux-amd64.tar.gz
(If the download fails, fetch the binaries directly from GitHub.)
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cp kubernetes/client/bin/* /usr/local/bin/
cp kubernetes/client/bin/* /usr/bin/
chmod a+x /usr/local/bin/kube*
Verify the installation
$ kubectl version
[root@incubator-dc-016 k8s]# kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Create the admin certificate
kubectl talks to kube-apiserver over the secure port, which requires TLS certificates and keys for the connection.
cd /opt/k8s
vi admin-csr.json
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Nanjing",
"L": "Nanjing",
"O": "system:masters",
"OU": "System"
}
]
}
Generate the admin certificate and private key
cd /opt/k8s
./cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/opt/k8s/config.json \
-profile=kubernetes admin-csr.json | ./cfssljson -bare admin
Check the generated files:
#ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem
cp admin*.pem /etc/kubernetes/ssl/
scp -P53742 admin*.pem 172.16.16.246:/etc/kubernetes/ssl/
scp -P53742 admin*.pem 172.16.16.250:/etc/kubernetes/ssl/
Configure the kubectl kubeconfig file
Set server to the local IP; each host talks to its own API server.
# configure the kubernetes cluster
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://172.16.16.86:6443
Configure client authentication
kubectl config set-credentials admin \
--client-certificate=/etc/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/etc/kubernetes/ssl/admin-key.pem
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin
kubectl config use-context kubernetes
The kubectl config file
# the kubeconfig file is located at:
/root/.kube
Deploy the Kubernetes Master node
The Master needs three components: kube-apiserver, kube-scheduler, and kube-controller-manager. kube-scheduler decides which node each Pod is placed on; in short, it handles resource scheduling. kube-controller-manager runs the control loops (deployment controller, replication controller, endpoints controller, namespace controller, serviceaccounts controller, and so on) and talks to kube-apiserver.
Install the Master components
# download the release from GitHub
cd /opt/k8s
wget https://dl.k8s.io/v1.8.3/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz && cd kubernetes
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/bin/
Create the kubernetes certificate
cd /opt/k8s
vi kubernetes-csr.json
{
"CN": "kubernetes",
"hosts": [
"172.16.16.86",
"10.254.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Nanjing",
"L": "Nanjing",
"O": "k8s",
"OU": "System"
}
]
}
In the hosts field, the IPs are 127.0.0.1 (the local host), 172.16.16.86 (the Master IP; with multiple Masters, list each one), and 10.254.0.1, the kubernetes Service IP, which is normally the first IP of the service network (10.254.0.0/16 here, hence 10.254.0.1). Once the cluster is running you can see it with kubectl get svc.
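As a quick sanity check once the Master components below are running, the ClusterIP actually assigned to the built-in kubernetes Service can be confirmed with:
kubectl get svc kubernetes
# the CLUSTER-IP column should show 10.254.0.1, matching the certificate above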
Generate the kubernetes certificate and private key
./cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/opt/k8s/config.json \
-profile=kubernetes kubernetes-csr.json | ./cfssljson -bare kubernetes
ls -l kubernetes*
Copy them into place:
cp -r kubernetes*.pem /etc/kubernetes/ssl/
scp -P53742 -r kubernetes*.pem 172.16.16.246:/etc/kubernetes/ssl/
scp -P53742 -r kubernetes*.pem 172.16.16.250:/etc/kubernetes/ssl/
Configure kube-apiserver
When the kubelet starts for the first time it sends a TLS bootstrapping request to kube-apiserver, which checks whether the token in the request matches its configured token; if it does, it automatically issues a certificate and key for the kubelet.
Generate a token
[root@incubator-dc-016 k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ''
49d1b983 9aafea9c 90300962 60d51a3d
Record the value with the spaces removed: 49d1b9839aafea9c9030096260d51a3d; it is needed later.
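A small sketch that generates the token and strips the spaces in one step (the BOOTSTRAP_TOKEN variable name is only for illustration):
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo ${BOOTSTRAP_TOKEN}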
Create the token.csv file
cd /opt/k8s
vi token.csv
49d1b9839aafea9c9030096260d51a3d,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# copy it to the other nodes
cp token.csv /etc/kubernetes/
scp -P53742 token.csv 172.16.16.246:/etc/kubernetes/
scp -P53742 token.csv 172.16.16.250:/etc/kubernetes/
Create the kube-apiserver.service file
# 1.8 adds the Node authorizer: --authorization-mode=Node,RBAC
# custom systemd service files normally live under /etc/systemd/system/
# use each host's own local IP
vi /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
User=root
ExecStart=/usr/local/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--advertise-address=172.16.16.86 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/lib/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=172.16.16.86 \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--enable-swagger-ui=true \
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/etcd.pem \
--etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \
--etcd-servers=https://172.16.16.86:2379,https://172.16.16.246:2379,https://172.16.16.250:2379 \
--event-ttl=1h \
--kubelet-https=true \
--insecure-bind-address=172.16.16.86 \
--runtime-config=rbac.authorization.k8s.io/v1alpha1 \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-cluster-ip-range=10.254.0.0/16 \
--service-node-port-range=30000-32000 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/token.csv \
--v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
# Note --service-node-port-range=30000-32000
# This is the port range used when exposing services externally (NodePort). Randomly assigned ports fall inside this range, and any explicitly requested port must be inside it too (see the example below).
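For example, a Service of type NodePort (a purely hypothetical manifest, shown only to illustrate the range restriction) must request a nodePort inside 30000-32000 or the apiserver rejects it:
cat <<EOF > nginx-nodeport-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort
  selector:
    app: nginx-demo
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
EOF
kubectl apply -f nginx-nodeport-demo.yaml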
Start kube-apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
Configure kube-controller-manager
On each master, use that host's own local IP.
Create the kube-controller-manager.service file
vi /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--address=127.0.0.1 \
--master=http://172.16.16.86:8080 \
--allocate-node-cidrs=true \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-cidr=10.233.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
Configure kube-scheduler
On each master, use that host's own local IP.
Create the kube-scheduler.service file
vi /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--address=127.0.0.1 \
--master=http://172.16.16.86:8080 \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
Verify the Master node
[root@incubator-dc-016 k8s]# kubectl get componentstatuses
NAME STATUS MESSAGE
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
Deploy the Node components on the Master
The Node side needs docker, calico, kubectl, kubelet, and kube-proxy.
Configure the kubelet
When the kubelet starts it sends a TLS bootstrapping request to kube-apiserver; the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role, otherwise the kubelet has no permission to create certificate signing requests (certificatesigningrequests).
Create the role binding first
# the user is the one configured in token.csv on the master
# this only needs to be done once
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Create the kubelet kubeconfig file
Set server to the master's IP.
# configure the cluster
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://172.16.16.86:6443 \
--kubeconfig=bootstrap.kubeconfig
# configure client authentication
kubectl config set-credentials kubelet-bootstrap \
--token=49d1b9839aafea9c9030096260d51a3d \
--kubeconfig=bootstrap.kubeconfig
# bind the context
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# move the generated bootstrap.kubeconfig into place
mv bootstrap.kubeconfig /etc/kubernetes
Create the kubelet.service file
Create the kubelet working directory.
> use the node's own local IP
mkdir /var/lib/kubelet
vi /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--address=172.16.16.86 \
--hostname-override=172.16.16.86 \
--pod-infra-container-image=jicki/pause-amd64:3.0 \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--cluster_dns=10.254.0.2 \
--cluster_domain=doone.com. \
--hairpin-mode promiscuous-bridge \
--allow-privileged=true \
--fail-swap-on=false \
--serialize-image-pulls=false \
--logtostderr=true \
--max-pods=512 \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
# In the configuration above:
172.16.16.86 is this node's IP
10.254.0.2 is the pre-allocated cluster DNS address
the --cluster_domain value is the kubernetes cluster domain (cluster.local. by default; doone.com. here)
jicki/pause-amd64:3.0 is the Pod infrastructure image, i.e. gcr.io/google_containers/pause-amd64:3.0; pulling it and pushing it to your own registry makes it faster to fetch (a sketch follows).
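A sketch of that mirroring step, assuming a private registry at registry.example.com (a placeholder address):
docker pull jicki/pause-amd64:3.0
docker tag jicki/pause-amd64:3.0 registry.example.com/google_containers/pause-amd64:3.0
docker push registry.example.com/google_containers/pause-amd64:3.0
# then point the kubelet at it: --pod-infra-container-image=registry.example.com/google_containers/pause-amd64:3.0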
Start the kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
# If it fails, use
journalctl -f -t kubelet and journalctl -u kubelet to troubleshoot
Approve the TLS certificate request
# list the CSR names
[root@incubator-dc-016 kubelet]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-EBjoD_bmtunjaDMTUmlph04kLO9Kz8-jdUhh6GDhb7w 12s kubelet-bootstrap Pending
Approve it
kubectl certificate approve node-csr-EBjoD_bmtunjaDMTUmlph04kLO9Kz8-jdUhh6GDhb7w
Verify the node
[root@incubator-dc-016 kubelet]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
172.16.16.86 Ready <none> 30s v1.8.3
# Once approved, the kubeconfig and keys are generated automatically
Config file:
ls /etc/kubernetes/kubelet.kubeconfig
/etc/kubernetes/kubelet.kubeconfig
Key files:
ls /etc/kubernetes/ssl/kubelet*
/etc/kubernetes/ssl/kubelet-client.crt /etc/kubernetes/ssl/kubelet.crt
/etc/kubernetes/ssl/kubelet-client.key /etc/kubernetes/ssl/kubelet.key
Configure kube-proxy
Create the kube-proxy certificate
# cfssl is not installed on the nodes,
# so generate the certificates on the master and copy them over
cd /opt/k8s
vi kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Nanjing",
"L": "Nanjing",
"O": "k8s",
"OU": "System"
}
]
}
Generate the kube-proxy certificate and private key
./cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/opt/k8s/config.json \
-profile=kubernetes kube-proxy-csr.json | ./cfssljson -bare kube-proxy
Check the generated files:
ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
Copy them into place:
cp kube-proxy*.pem /etc/kubernetes/ssl/
scp -P53742 kube-proxy*.pem 172.16.16.246:/etc/kubernetes/ssl/
scp -P53742 kube-proxy*.pem 172.16.16.250:/etc/kubernetes/ssl/
Create the kube-proxy kubeconfig file
Set server to each host's own IP.
# configure the cluster
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://172.16.16.86:6443 \
--kubeconfig=kube-proxy.kubeconfig
Configure client authentication
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
Bind the context
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Move it into place:
mv kube-proxy.kubeconfig /etc/kubernetes/
Create the kube-proxy.service file
Use each host's own IP.
# create the kube-proxy working directory
mkdir -p /var/lib/kube-proxy
vi /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--bind-address=172.16.16.86 \
--hostname-override=172.16.16.86 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
--logtostderr=true \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
# If it fails, use
journalctl -f -t kube-proxy and journalctl -u kube-proxy to troubleshoot
Deploy the Kubernetes Node nodes
The Node nodes (172.16.16.246 and 172.16.16.250) run an Nginx proxy in front of the API servers for Master HA.
# Apart from the api server, the master components elect a leader through etcd, so nothing extra is needed for them. Each node runs an nginx instance that reverse-proxies all api servers; the node's kubelet and kube-proxy connect to the local nginx port, and when nginx detects a dead backend it kicks the failed api server out of rotation, giving the api server HA.
Install the Node components
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
cp -r server/bin/{kube-proxy,kubelet,kubectl} /usr/local/bin/
cp -r server/bin/{kube-proxy,kubelet,kubectl} /usr/bin/
#ALL node
mkdir -p /etc/kubernetes/ssl/
scp -P53742 ca.pem kube-proxy.pem kube-proxy-key.pem 172.16.16.246:/etc/kubernetes/ssl/
scp -P53742 ca.pem kube-proxy.pem kube-proxy-key.pem 172.16.16.250:/etc/kubernetes/ssl/
Create the kubelet kubeconfig file
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.16.86:6443 --kubeconfig=bootstrap.kubeconfig
# the server must point at the master node
Configure client authentication
kubectl config set-credentials kubelet-bootstrap \
--token=49d1b9839aafea9c9030096260d51a3d \
--kubeconfig=bootstrap.kubeconfig
Bind the context
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Move the generated bootstrap.kubeconfig into place
mv bootstrap.kubeconfig /etc/kubernetes/
Create the kubelet.service file
mkdir -p /var/lib/kubelet
vi /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--address=172.16.16.246 \
--hostname-override=172.16.16.246 \
--pod-infra-container-image=jicki/pause-amd64:3.0 \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--cluster_dns=10.254.0.2 \
--cluster_domain=doone.com. \
--hairpin-mode promiscuous-bridge \
--allow-privileged=true \
--fail-swap-on=false \
--serialize-image-pulls=false \
--logtostderr=true \
--max-pods=512 \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start the kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://172.16.16.86:6443 \
--kubeconfig=kube-proxy.kubeconfig
Configure client authentication
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
Bind the context
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Move it into place:
mv kube-proxy.kubeconfig /etc/kubernetes/
Create the kube-proxy.service file
mkdir -p /var/lib/kube-proxy
vi /etc/systemd/system/kube-proxy.service
# change the IP addresses for each node
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--bind-address=172.16.16.246 \
--hostname-override=172.16.16.246 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
--logtostderr=true \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
Create the Nginx proxy
Every node must run an Nginx proxy. Note: when a Master also acts as a Node, that host does not need the nginx-proxy.
Create it on the nodes.
Create the config directory:
mkdir -p /etc/nginx
#### write the proxy config
cat << EOF > /etc/nginx/nginx.conf
error_log stderr notice;
worker_processes auto;
events {
multi_accept on;
use epoll;
worker_connections 1024;
}
stream {
upstream kube_apiserver {
least_conn;
server 172.16.16.86:6443;
}
server {
listen 0.0.0.0:6443;
proxy_pass kube_apiserver;
proxy_timeout 10m;
proxy_connect_timeout 1s;
}
}
EOF
Nginx runs as a Docker container and is started through a systemd unit:
cat << EOF > /etc/systemd/system/nginx-proxy.service
[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service
[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 \\
-v /etc/nginx:/etc/nginx \\
--name nginx-proxy \\
--net=host \\
--restart=on-failure:5 \\
--memory=512M \\
nginx:1.13.5-alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s
[Install]
WantedBy=multi-user.target
EOF
Start Nginx
systemctl daemon-reload
systemctl start nginx-proxy
systemctl enable nginx-proxy
systemctl status nginx-proxy
Restart kubelet and kube-proxy on the Node
systemctl restart kubelet
systemctl status kubelet
systemctl restart kube-proxy
systemctl status kube-proxy
Approve the TLS requests on the Master
# list the CSR names
kubectl get csr
Approve them
kubectl certificate approve NAME
[root@incubator-dc-016 cx]# kubectl certificate approve node-csr-EBjoD_bmtunjaDMTUmlph04kLO9Kz8-jdUhh6GDhb7w
certificatesigningrequest "node-csr-EBjoD_bmtunjaDMTUmlph04kLO9Kz8-jdUhh6GDhb7w" approved
[root@incubator-dc-016 cx]# kubectl certificate approve node-csr-v-UvG2zhPQRMf3hDTMUqSq_wvsurSlNFc7CHjl1v3ss
certificatesigningrequest "node-csr-v-UvG2zhPQRMf3hDTMUqSq_wvsurSlNFc7CHjl1v3ss" approved
[root@incubator-dc-016 cx]# kubectl certificate approve node-csr-Sg6CRaxXhdIEJP0hxMHtE2Xoh9fpeFl6OVtocqGeV34
certificatesigningrequest "node-csr-Sg6CRaxXhdIEJP0hxMHtE2Xoh9fpeFl6OVtocqGeV34" approved
[root@incubator-dc-016 cx]#
Deploy the Calico network
Modify kubelet.service
On every node,
add the following flag:
vi /etc/systemd/system/kubelet.service
--network-plugin=cni \
Reload the configuration
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet.service
Get the Calico configuration
Calico is deployed in a "hybrid" fashion: systemd controls the calico node container, while the CNI pieces are installed by a kubernetes daemonset.
# get calico.yaml; run on the master
cat <<EOF > calico-controller.yml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: calico-kube-controllers
namespace: kube-system
rules:
- apiGroups:
- ""
- extensions
resources:
- pods
- namespaces
- networkpolicies
verbs:
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: calico-policy-controller
namespace: kube-system
labels:
k8s-app: calico-policy
spec:
strategy:
type: Recreate
template:
metadata:
name: calico-policy-controller
namespace: kube-system
labels:
k8s-app: calico-policy
spec:
hostNetwork: true
serviceAccountName: calico-kube-controllers
containers:
- name: calico-policy-controller
image: quay.io/calico/kube-controllers:v1.0.0
env:
- name: ETCD_ENDPOINTS
value: "https://172.16.16.86:2379,https://172.16.16.246:2379,https://172.16.16.250:2379"
- name: ETCD_CA_CERT_FILE
value: "/etc/kubernetes/ssl/ca.pem"
- name: ETCD_CERT_FILE
value: "/etc/kubernetes/ssl/etcd.pem"
- name: ETCD_KEY_FILE
value: "/etc/kubernetes/ssl/etcd-key.pem"
volumeMounts:
- mountPath: /etc/kubernetes/ssl/
name: etcd-ca-certs
readOnly: true
volumes:
- hostPath:
path: /etc/kubernetes/ssl/
type: DirectoryOrCreate
name: etcd-ca-certs
EOF
kubectl apply -f calico-controller.yml
kubectl -n kube-system get po -l k8s-app=calico-policy
Change the etcd cluster IP addresses inside the yaml file to match your environment.
Download Calico on all nodes
cd /usr/local/bin/
curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.2.6/calicoctl
The following must be present on all three nodes
chmod +x calicoctl
scp -P53742 calicoctl root@172.16.16.246:/usr/local/bin/
scp -P53742 calicoctl root@172.16.16.250:/usr/local/bin/
Download the calico and calico-ipam CNI plugins
wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v3.1.6/calico
wget -N -P /opt/cni/bin/ https://github.com/projectcalico/calico-cni/releases/download/v3.1.6/calico-ipam
mkdir -p /opt/cni/bin/
cp -rf /opt/k8s/calico /opt/cni/bin/
cp -rf /opt/k8s/calico-ipam /opt/cni/bin/
scp -P53742 calico root@172.16.16.246:/opt/cni/bin/
scp -P53742 calico root@172.16.16.250:/opt/cni/bin/
scp -P53742 calico-ipam root@172.16.16.246:/opt/cni/bin/
scp -P53742 calico-ipam root@172.16.16.250:/opt/cni/bin/
chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
Create the CNI plugin config file on all nodes
vi /etc/cni/net.d/10-calico.conf
{
"name": "calico-k8s-network",
"cniVersion": "0.1.0",
"type": "calico",
"etcd_endpoints": "https://172.16.16.86:2379,https://172.16.16.246:2379,https://172.16.16.250:2379",
"etcd_ca_cert_file": "/etc/kubernetes/ssl/ca.pem",
"etcd_cert_file": "/etc/kubernetes/ssl/etcd.pem",
"etcd_key_file": "/etc/kubernetes/ssl/etcd-key.pem",
"log_level": "info",
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "/etc/kubernetes/kubelet.kubeconfig"
}
}
Deploy KubeDNS
Download the KubeDNS images
# official images
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
# mirror images hosted in China
jicki/k8s-dns-sidecar-amd64:1.14.7
jicki/k8s-dns-kube-dns-amd64:1.14.7
jicki/k8s-dns-dnsmasq-nanny-amd64:1.14.7
Download the yaml file
curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kube-dns.yaml.base
# rename it
mv kube-dns.yaml.base kube-dns.yaml
The predefined system RoleBinding
The predefined RoleBinding system:kube-dns binds the kube-dns ServiceAccount in the kube-system namespace to the system:kube-dns Role, which has permission to access the DNS-related kube-apiserver APIs;
[root@k8s-master kubedns]# kubectl get clusterrolebindings system:kube-dns -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
creationTimestamp: 2017-09-29T04:12:29Z
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-dns
resourceVersion: "78"
selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system%3Akube-dns
uid: 688627eb-a4cc-11e7-9f6b-44a8420b9988
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-dns
subjects:
- kind: ServiceAccount
name: kube-dns
namespace: kube-system
The kube-dns yaml file
cat <<EOF > kube-dns.yml
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-dns
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.254.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
dnsPolicy: Default
serviceAccountName: kube-dns
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: kube-dns-config
configMap:
name: kube-dns
optional: true
containers:
- name: kubedns
image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.14.7
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- "--domain=cluster.local"
- --dns-port=10053
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
volumeMounts:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- "-v=2"
- "-logtostderr"
- "-configDir=/etc/k8s/dns/dnsmasq-nanny"
- "-restartDnsmasq=true"
- "--"
- "-k"
- "--cache-size=1000"
- "--log-facility=-"
- "--server=/cluster.local/127.0.0.1#10053"
- "--server=/in-addr.arpa/127.0.0.1#10053"
- "--server=/ip6.arpa/127.0.0.1#10053"
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
resources:
requests:
cpu: 150m
memory: 20Mi
volumeMounts:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.14.7
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- "--v=2"
- "--logtostderr"
- "--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A"
- "--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A"
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 20Mi
cpu: 10m
EOF
Apply the yaml file
[root@incubator-dc-016 k8s]# kubectl create -f kube-dns.yml
serviceaccount "kube-dns" created
service "kube-dns" created
deployment "kube-dns" created
[root@incubator-dc-016 k8s]#
Check the kube-dns service
[root@incubator-dc-016 k8s]# kubectl get all --namespace=kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/calico-policy-controller 1 1 1 1 4h
deploy/kube-dns 1 1 1 0 19s
NAME DESIRED CURRENT READY AGE
rs/calico-policy-controller-5586b678b5 0 0 0 1h
rs/calico-policy-controller-57dd959cc9 0 0 0 4h
rs/calico-policy-controller-6d94579b6b 1 1 1 56m
rs/kube-dns-794845bc6f 1 1 0 19s
NAME READY STATUS RESTARTS AGE
po/calico-policy-controller-6d94579b6b-vksgv 1/1 Running 0 56m
po/kube-dns-794845bc6f-464d8 0/3 ContainerCreating 0 19s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kube-dns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP 19s
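Once the kube-dns Pod is Running, in-cluster DNS can be checked with a throwaway busybox Pod (busybox:1.28 is simply a tag whose nslookup is known to behave; any working image will do):
kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default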
Create the calico-node.service file
The Calico Node parts of calico.yaml were left out in the previous step; to avoid problems with automatic IP detection, Calico Node is run through systemd instead. The systemd service config is below; install it on every node and adjust the IP and node name for each one.
cat <<EOF > /lib/systemd/system/calico-node.service
[Unit]
Description=calico node
After=docker.service
Requires=docker.service
[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run --net=host --privileged --name=calico-node \
-e ETCD_ENDPOINTS=https://172.16.16.86:2379,https://172.16.16.246:2379,https://172.16.16.250:2379 \
-e ETCD_CA_CERT_FILE=/etc/kubernetes/ssl/ca.pem \
-e ETCD_CERT_FILE=/etc/kubernetes/ssl/etcd.pem \
-e ETCD_KEY_FILE=/etc/kubernetes/ssl/etcd-key.pem \
-e NODENAME=${HOSTNAME} \
-e IP= \
-e NO_DEFAULT_POOLS= \
-e AS= \
-e CALICO_LIBNETWORK_ENABLED=true \
-e IP6= \
-e CALICO_NETWORKING_BACKEND=bird \
-e FELIX_DEFAULTENDPOINTTOHOSTACTION=ACCEPT \
-e FELIX_HEALTHENABLED=true \
-e CALICO_IPV4POOL_CIDR=10.233.0.0/16 \
-e CALICO_IPV4POOL_IPIP=always \
-e IP_AUTODETECTION_METHOD=interface=eth0 \
-e IP6_AUTODETECTION_METHOD=interface=eth0 \
-v /etc/kubernetes/ssl:/etc/kubernetes/ssl \
-v /var/run/calico:/var/run/calico \
-v /lib/modules:/lib/modules \
-v /run/docker/plugins:/run/docker/plugins \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/log/calico:/var/log/calico \
quay.io/calico/node:v2.6.2
ExecStop=/usr/bin/docker rm -f calico-node
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
Start Calico Node
Calico Node is started through systemd. After the service is configured on every node (with each node's own IP and node name in calico-node.service), start it:
systemctl daemon-reload
systemctl restart calico-node
systemctl restart kubelet
Enable and start calico-node on all nodes
systemctl enable calico-node.service && systemctl start calico-node.service
Check the Calico nodes from the master
cat <<EOF > ~/calico-rc
export ETCD_ENDPOINTS="https://172.16.16.86:2379,https://172.16.16.246:2379,https://172.16.16.250:2379"
export ETCD_CA_CERT_FILE="/etc/kubernetes/ssl/ca.pem"
export ETCD_CERT_FILE="/etc/kubernetes/ssl/etcd.pem"
export ETCD_KEY_FILE="/etc/kubernetes/ssl/etcd-key.pem"
EOF
. ~/calico-rc
calicoctl get node -o wide
Check whether the previously pending pods are now running
kubectl -n kube-system get po
Deploy Ingress
Kubernetes currently exposes services in only three ways: LoadBalancer Service, NodePort Service, and Ingress. What is an Ingress? An Ingress uses a load balancer such as Nginx or HAProxy to expose Kubernetes services.
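For reference, once the controller below is running, a service is exposed by creating an Ingress resource; a minimal sketch with a hypothetical hostname and backend service:
cat <<EOF > demo-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: default
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-service
          servicePort: 80
EOF
kubectl apply -f demo-ingress.yaml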
Configure the node used for scheduling
# The ingress controller can be deployed in several ways: 1. a deployment, scheduled by replicas 2. a daemonset, scheduled globally onto every node
# With the deployment approach we need to constrain the controller to specific nodes, so those nodes must be labelled
# defaults:
[root@incubator-dc-016 k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
172.16.16.246 Ready <none> 4h v1.8.3
172.16.16.250 Ready <none> 4h v1.8.3
172.16.16.86 Ready <none> 5h v1.8.3
[root@incubator-dc-016 k8s]#
# label the .86 node
kubectl label nodes 172.16.16.86 ingress=proxy
# after labelling:
NAME STATUS ROLES AGE VERSION LABELS
172.16.16.246 Ready <none> 4h v1.8.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=172.16.16.246
172.16.16.250 Ready <none> 4h v1.8.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=172.16.16.250
172.16.16.86 Ready <none> 5h v1.8.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress=proxy,kubernetes.io/hostname=172.16.16.86
Download the Ingress images
# official images
gcr.io/google_containers/defaultbackend:1.0
gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.17
# mirror images hosted in China
jicki/defaultbackend:1.0
jicki/nginx-ingress-controller:0.9.0-beta.17
Create the yaml files
# Ingress yaml templates
# default-backend.yaml
cat <<EOF >default-backend.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: default-http-backend
namespace: kube-system
spec:
replicas: 1
selector:
k8s-app: default-http-backend
template:
metadata:
labels:
k8s-app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
image: registry.cn-qingdao.aliyuncs.com/kube8s/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
labels:
k8s-app: default-http-backend
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 8080
selector:
k8s-app: default-http-backend
EOF
#rbac.yaml
cat <<EOF >rbac.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: ingress-nginx
template:
metadata:
labels:
app: ingress-nginx
annotations:
prometheus.io/port: '10254'
prometheus.io/scrape: 'true'
spec:
hostNetwork: true
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
ingress: proxy
containers:
- name: nginx-ingress-controller
image: jicki/nginx-ingress-controller:0.9.0-beta.17
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --apiserver-host=http://172.16.16.86:8080
#- --configmap=$(POD_NAMESPACE)/nginx-configuration
#- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
#- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBERNETES_MASTER
value: http://172.16.16.86:8080
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
EOF
#with-rbac.yaml
cat <<EOF >with-rbac.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: ingress-nginx
template:
metadata:
labels:
app: ingress-nginx
annotations:
prometheus.io/port: '10254'
prometheus.io/scrape: 'true'
spec:
hostNetwork: true
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
ingress: proxy
containers:
- name: nginx-ingress-controller
image: jicki/nginx-ingress-controller:0.9.0-beta.17
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --apiserver-host=http://172.16.16.86:8080
#- --configmap=$(POD_NAMESPACE)/nginx-configuration
#- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
# - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBERNETES_MASTER
value: http://172.16.16.86:8080
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
EOF
Apply the yaml files
kubectl apply -f default-backend.yaml
kubectl apply -f rbac.yaml
kubectl apply -f with-rbac.yaml
[root@incubator-dc-016 Ingress]# curl http://172.16.16.86:8080/healthz
Check the ingress service
kubectl get svc -n kube-system
Deploy the Dashboard
### Download the dashboard image
# official image
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
# mirror image hosted in China
jicki/kubernetes-dashboard-amd64:v1.6.3
Download the yaml files
curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dashboard/dashboard-controller.yaml
curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dashboard/dashboard-service.yaml
# RBAC is enabled, so an RBAC binding is needed
vi dashboard-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: dashboard
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
name: dashboard
subjects:
- kind: ServiceAccount
name: dashboard
namespace: kube-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
# Dashboard yaml templates
# dashboard-controller.yaml
cat <<EOF >dashboard-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
serviceAccountName: dashboard
containers:
- name: kubernetes-dashboard
image: jicki/kubernetes-dashboard-amd64:v1.6.3
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 300Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 9090
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
EOF
#dashboard-service.yaml
cat <<EOF >dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 80
targetPort: 9090
EOF
Apply the yaml files
kubectl apply -f .
deployment "kubernetes-dashboard" created
serviceaccount "dashboard" created
clusterrolebinding "dashboard" created
service "kubernetes-dashboard" created
Check the Dashboard service
kubectl get svc -n kube-system
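The service is ClusterIP-only, so from a machine with a working kubeconfig one simple way to reach the UI is the apiserver's generic service proxy via kubectl proxy (treat the URL as a sketch and adjust for your dashboard version):
kubectl proxy --port=8001
# then open:
# http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/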
This is the end of the deployment. It looks long and complicated, but every step was done carefully by hand. Nowadays most people deploy Kubernetes with kubeadm, yet it is worth doing it manually at least once so you understand what each piece means and how it fits together, which also makes troubleshooting much easier. The errors I hit and their fixes are listed below.
Troubleshooting
Docker will not restart
Registered Authentication Agent for unix-process:26237:1270527351
Fix: echo 1 > /proc/sys/vm/drop_caches
etcd will not start
open /etc/kubernetes/ssl/etcd.pem: permission denied
Fix:
chmod +x /etc/kubernetes/ssl/etcd.pem
chmod 755 /etc/kubernetes/ssl/
etcd 127.0.0.1 error
Apr 07 14:44:59 incubator-dc-016 etcd[490]: The scheme of client url http://127.0.0.1:2379 is HTTP while peer key/cert files are presented. Ignored key/cert files.
Apr 07 14:44:59 incubator-dc-016 etcd[490]: listening for client requests on 127.0.0.1:2379
Apr 07 14:44:59 incubator-dc-016 etcd[490]: listening for client requests on 172.16.16.86:2379
Apr 07 14:44:59 incubator-dc-016 etcd[490]: create snapshot directory error: mkdir /var/lib/etcd/member/snap: permission denied
Fix:
Delete all the etcd data and re-initialize:
rm -rf /var/lib/etcd/*
systemctl daemon-reload && systemctl restart etcd
systemctl status etcd.service
The API server will not start
Port already in use:
failed to listen on 172.16.16.86:6443: listen tcp 172.16.16.86:6443: bind: address already in use
Fix:
The port turned out to be held by Docker (the nginx proxy container); stop it,
then restart the API server.
kube-proxy will not start
Failed at step CHDIR spawning /usr/local/bin/kube-proxy: No such file or directory
Fix: create the working directory with mkdir -p /var/lib/kube-proxy
Calico will not start
This setup is based on Calico 3.2.6; as a rule do not install anything below 3.1, or all kinds of problems appear (I have hit them).
If you see Kubernetes Calico node 'XXXXXXXXXXX' already using IPv4 Address XXXXXXXXX, CrashLoopBackOff, the Calico version may be too old.
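If upgrading alone does not clear it, a stale node entry in the Calico datastore can also cause this; a sketch of cleaning it up with calicoctl (the node name is a placeholder, and deleting it removes that node's Calico configuration, so use with care):
. ~/calico-rc                 # load the ETCD_* variables defined earlier
calicoctl get nodes -o wide   # find the stale entry
calicoctl delete node <stale-node-name>
systemctl restart calico-node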
ERROR: Error accessing the Calico datastore: open /etc/kubernetes/ssl/etcd.pem: no such file or directory / Calico node failed to start
Fix: mount the certificates into the container:
-v /etc/kubernetes/ssl:/etc/kubernetes/ssl \
Apr 21 16:06:34 incubator-dc-002 docker[22733]: ERROR: Couldn't autodetect a management IPv4 address:
Apr 21 16:06:34 incubator-dc-002 docker[22733]: - provide an IPv4 address by configuring one in the node resource, or
Apr 21 16:06:34 incubator-dc-002 docker[22733]: - provide an IPv4 address using the IP environment, or
Apr 21 16:06:34 incubator-dc-002 docker[22733]: - if auto-detecting, use a different autodetection method.
Fix: pin IP autodetection to the correct interface in calico-node.service:
-e IP_AUTODETECTION_METHOD=interface=eth0 \
-e IP6_AUTODETECTION_METHOD=interface=eth0 \
I hope we can all keep learning, improving, and moving toward something better together!