Kubernetes (K8s) Cluster Deployment: Installing the ETCD Database and the Flannel Network Component
1. Overview of the Single-Master Cluster Deployment
Installation packages used to build the k8s cluster (the versions I used):
Node servers (three nodes):
Master: 192.168.66.130/24
Software to install: kube-apiserver, kube-controller-manager, kube-scheduler, etcd
Node01: 192.168.66.132/24
Software to install: kubelet, kube-proxy, docker, flannel, etcd
Node02: 192.168.66.133/24
Software to install: kubelet, kube-proxy, docker, flannel, etcd
2. Environment Preparation
1. Configure a static IP address on each virtual machine
vi /etc/sysconfig/network-scripts/ifcfg-ens33
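For reference, the relevant lines of ifcfg-ens33 on the master might look like this (the GATEWAY and DNS1 values are assumptions for a typical VMware NAT network; substitute your own):
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.66.130
NETMASK=255.255.255.0
GATEWAY=192.168.66.2 #assumed gateway, adjust to your network
DNS1=114.114.114.114 #assumed DNS server, adjust to your network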
2. Prevent the IP address from changing when the virtual machine reboots
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart #restart the network
ping www.baidu.com #confirm the machine can reach the outside network
3. Do not shut the firewall off; keep it running and just clear its rules.
systemctl start firewalld #start the firewall
iptables -F #flush the firewall rules
setenforce 0 #put SELinux into permissive mode
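Note that setenforce 0 only lasts until the next reboot. To keep SELinux permissive across reboots as well, you can additionally edit its config file:
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config #persist permissive mode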
3. Deploying the ETCD Cluster
Communication between ETCD members is encrypted, so we first create a CA and use TLS certificates to secure the traffic.
3.1 Install cfssl, the Certificate-Making Tool
On the master node:
[root@localhost ~]# mkdir k8s
[root@localhost ~]# cd k8s/
//Write the cfssl.sh script: it downloads the cfssl certificate tools from the official site straight into /usr/local/bin so the system can find them, then makes them executable
[root@localhost k8s]# vi cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
#Run the script and wait for the downloads to finish
[root@localhost k8s]# bash cfssl.sh
[root@localhost k8s]# ls /usr/local/bin/
#You should now see the three certificate tools
cfssl cfssl-certinfo cfssljson
#cfssl: generates certificates
#cfssl-certinfo: displays information about a certificate
#cfssljson: takes cfssl's JSON output and writes out the certificate files
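As a quick sanity check that the tools were downloaded intact and are on the PATH, you can ask cfssl for its version:
[root@localhost k8s]# cfssl version #prints the cfssl version information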
3.2 Create the CA Certificate
[root@localhost k8s]# mkdir etcd-cert // the directory where all certificates are stored
[root@localhost k8s]# mv etcd-cert.sh etcd-cert // the certificate-generation material
[root@localhost k8s]# cd etcd-cert/
1. Create the configuration file for generating the CA certificate
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
2. Create the CA certificate signing request (CSR)
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
3. Generate the CA certificate from the CSR, which produces ca-key.pem and ca.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
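If the command succeeds, three new files appear alongside the JSON files:
[root@localhost etcd-cert]# ls ca*
ca-config.json ca-csr.json ca-key.pem ca.csr ca.pem
#ca.pem is the CA certificate and ca-key.pem its private key; ca.csr is a by-product that is not needed later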
4. Specify the addresses of the three etcd nodes for peer communication verification; this requires a server signing request, server-csr.json
// change the IP addresses to your own nodes
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.66.130",
    "192.168.66.132",
    "192.168.66.133"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
5. Use ca-key.pem, ca.pem, and the server CSR to generate the ETCD server certificate: server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
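This produces the server key pair next to the CA files:
[root@localhost etcd-cert]# ls server*
server-csr.json server-key.pem server.csr server.pem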
3.3 Build the ETCD Cluster with the Certificates and the etcd Script
Upload etcd.sh, a script that generates the ETCD configuration file and systemd unit, to the /root/k8s directory
[root@localhost k8s]# vim etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd
# Generate the node's configuration file from this template
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# Generate the node's systemd service unit from this template
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd --name=\${ETCD_NAME} --data-dir=\${ETCD_DATA_DIR} --listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} --listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 --advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} --initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} --initial-cluster=\${ETCD_INITIAL_CLUSTER} --initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} --initial-cluster-state=new --cert-file=${WORK_DIR}/ssl/server.pem --key-file=${WORK_DIR}/ssl/server-key.pem --peer-cert-file=${WORK_DIR}/ssl/server.pem --peer-key-file=${WORK_DIR}/ssl/server-key.pem --trusted-ca-file=${WORK_DIR}/ssl/ca.pem --peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
# Reload systemd, enable etcd at boot, and (re)start the service
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
Upload the three downloaded packages into the k8s directory.
First extract the etcd package into the current directory, then create the etcd cluster's working directories
[root@localhost k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz // extract
[root@localhost k8s]# ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
#we will use the etcd and etcdctl binaries from this package shortly
[root@localhost k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p //config files, binaries, certificates
[root@localhost k8s]# ls /opt/etcd/
bin cfg ssl
1. Put the etcd and etcdctl executables into /opt/etcd/bin/
[root@localhost k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin
2. Copy the certificates into /opt/etcd/ssl/
[root@localhost k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
Run the etcd.sh script to generate the etcd cluster's configuration file and service unit. etcd starts and appears to hang: it is waiting for the other members to join
// remember to change the IP addresses
[root@localhost k8s]# bash etcd.sh etcd01 192.168.66.130 etcd02=https://192.168.66.132:2380,etcd03=https://192.168.66.133:2380
//In another terminal session you can see that the etcd process is already running
[root@localhost ~]# ps aux | grep etcd
3.4 Join the Nodes to the ETCD Cluster (Internal Communication)
1. On the master node, copy the certificates to the other nodes
[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.66.132:/opt
[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.66.133:/opt
2. Copy the master's service unit file to the other nodes
[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.66.132:/usr/lib/systemd/system
[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.66.133:/usr/lib/systemd/system
3. On node01, edit the configuration file
[root@localhost system]# cd /opt/etcd/cfg/
[root@localhost cfg]# ls
etcd
[root@localhost cfg]# vim etcd
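The file was generated for the master, so every occurrence of the master's name and IP must be changed to this node's. Using the names and addresses from the cluster command above, node01's /opt/etcd/cfg/etcd should end up like this:
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.66.132:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.66.132:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.66.132:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.66.132:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.66.130:2380,etcd02=https://192.168.66.132:2380,etcd03=https://192.168.66.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"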
4. On node02, edit the configuration file in the same way, this time substituting etcd03 and 192.168.66.133
[root@localhost system]# cd /opt/etcd/cfg/
[root@localhost cfg]# ls
etcd
[root@localhost cfg]# vim etcd
5. On the master node, run the etcd.sh script again and wait for the nodes to join the cluster
[root@localhost k8s]# bash etcd.sh etcd01 192.168.66.130 etcd02=https://192.168.66.132:2380,etcd03=https://192.168.66.133:2380
6. Quickly start etcd on node01 and node02 at the same time, so the waiting master does not time out
[root@localhost ~]# systemctl start etcd
[root@localhost ~]# systemctl status etcd
3.5 Check the Cluster Health
Run the check on the master node. Note: run it from the etcd-cert/ directory, since the certificate paths below are relative. If everything is working, each member is reported healthy and the output ends with "cluster is healthy".
[root@localhost k8s]# cd etcd-cert/
[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.66.130:2379,https://192.168.66.132:2379,https://192.168.66.133:2379" cluster-health
4. Deploying the Docker Engine
Every node must have the Docker engine deployed. For installation steps, see my earlier post: Docker deployment, image acceleration, and network optimization
5. Deploying the Flannel Network Component
5.1 Set Up Communication Between the ETCD Cluster and the Outside
1. On the master node, write the subnet range to be allocated into ETCD for flannel to use
[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.66.130:2379,https://192.168.66.132:2379,https://192.168.66.133:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
View the information that was written
[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.66.130:2379,https://192.168.66.132:2379,https://192.168.66.133:2379" get /coreos.com/network/config
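If the write succeeded, get prints back exactly the JSON that was stored:
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}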
2. On both nodes: upload the flannel package and extract it into root's home directory.
//copy it to all nodes (flannel only needs to be deployed on the nodes)
[root@localhost k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.66.132:/root
[root@localhost k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.66.133:/root
//extract on every node
[root@localhost ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
3. Create the k8s working directories on both nodes
[root@localhost ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
Upload flannel.sh, a script that generates the configuration file and the service unit.
[root@localhost ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
4. Enable the flannel network on both nodes
[root@localhost ~]# bash flannel.sh https://192.168.66.130:2379,https://192.168.66.132:2379,https://192.168.66.133:2379
Check that the flannel service is running
[root@localhost ~]# systemctl status flanneld
5.2 Configure Docker to Use the Flannel Network
On both nodes: edit Docker's service unit file
[root@localhost ~]# vim /usr/lib/systemd/system/docker.service
//make two changes: add the EnvironmentFile line, and insert the variable into the ExecStart line:
EnvironmentFile=/run/flannel/subnet.env
$DOCKER_NETWORK_OPTIONS
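After the edit, the [Service] section should contain roughly the following (the exact ExecStart line varies with the Docker version; the EnvironmentFile line and the $DOCKER_NETWORK_OPTIONS variable are the two additions):
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock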
Check the subnet that the flannel network allocated
[root@localhost ~]# cat /run/flannel/subnet.env
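The file is written by mk-docker-opts.sh and should contain a line along these lines (the /24 subnet is chosen by flannel and will differ on every node):
DOCKER_NETWORK_OPTIONS=" --bip=172.17.84.1/24 --ip-masq=false --mtu=1450"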
Restart the Docker service
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
5.3 Verify Flannel Network Connectivity
1. On each node, create and enter a centos:7 container.
[root@localhost ~]# docker run -it centos:7 /bin/bash
[root@a57795cdc6ef /]# yum install net-tools -y
#after installing net-tools the ifconfig command is available
2. Run ifconfig in each container to find its IP address, then ping from one container to the other to check connectivity, as in the example below
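For example, if the container on node02 reports 172.17.84.2 (a hypothetical address; use whatever ifconfig actually shows), test the overlay from node01's container:
[root@a57795cdc6ef /]# ping 172.17.84.2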
Verified: the containers can reach each other, so the flannel network is up and working!