Deploying k8s on CentOS 8.0

I. Preparation on the master and node machines

1. Disable firewalld and SELinux

systemctl stop firewalld

systemctl disable firewalld

setenforce 0

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
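setenforce 0 only switches SELinux to permissive mode for the running system; the sed edit makes the change permanent across reboots. You can verify both:

getenforce

grep ^SELINUX= /etc/selinux/config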

2. Set the system time zone and synchronize the system clock

timedatectl set-timezone Asia/Shanghai

systemctl enable --now chronyd

chronyc makestep
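To confirm the time zone and that the clock is synchronizing (chrony is the default NTP client on CentOS 8):

timedatectl

chronyc tracking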

3. Set up passwordless SSH trust between hosts

ssh-keygen

ssh-copy-id
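The bare ssh-copy-id above is incomplete. A typical invocation, run from the master against hypothetical node hostnames k8snode1 and k8snode2, looks like:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

ssh-copy-id root@k8snode1

ssh-copy-id root@k8snode2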

II. Disable swap

swapoff -a

sed -i '/swap/s/^/#/g' /etc/fstab
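Verify that no swap is active (swapon prints nothing, and free shows 0B for swap):

swapon --show

free -h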

III. Deploy Docker

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum list docker-ce --showduplicates | sort -r | tail -1

yum install docker-ce-19.03.13-3.el8 docker-ce-cli-19.03.13-3.el8 containerd.io

systemctl enable --now docker

docker info

curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose

ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

docker-compose --version

Optional: if the host needs an HTTP proxy to reach external registries, configure it for the Docker daemon (fill in your proxy address in the file below; skip this step otherwise):

mkdir -p /etc/systemd/system/docker.service.d

cat >/etc/systemd/system/docker.service.d/http-proxy.conf <<EOF

[Service]

Environment="HTTP_PROXY=http://" "HTTPS_PROXY=http://" "NO_PROXY=localhost,127.0.0.1"

EOF

systemctl daemon-reload;systemctl restart docker
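Check that the daemon picked up the proxy environment:

systemctl show --property=Environment docker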

IV. Configure the container runtime

Runtime: to run containers in Pods, Kubernetes uses a container runtime. By default, Kubernetes uses the Container Runtime Interface (CRI) to interact with the container runtime you choose. A container runtime must be installed on every node in the cluster so that Pods can run on it; if both Docker and containerd are detected, Docker takes precedence.

Prerequisites:

cat <<EOF | tee /etc/modules-load.d/docker.conf

overlay

br_netfilter

EOF

modprobe overlay

modprobe br_netfilter

# Set the required sysctl parameters; these persist across reboots.

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-ip6tables = 1

EOF

# Apply the sysctl parameters without rebooting

sudo sysctl --system
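Verify the kernel modules are loaded and the parameters took effect:

lsmod | grep -E 'overlay|br_netfilter'

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables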

V. Configure the Docker daemon, in particular to use systemd to manage the containers' cgroups

(Cgroup driver: by default CRI-O uses the systemd cgroup driver; control groups are used to constrain the resources allocated to processes.)

mkdir -p /etc/docker

cat <<EOF | tee /etc/docker/daemon.json

{

? "exec-opts": ["native.cgroupdriver=systemd"],

? "log-driver": "json-file",

? "log-opts": {

??? "max-size": "100m"

? },

? "storage-driver": "overlay2",

? "storage-opts": [

??? "overlay2.override_kernel_check=true"

? ],

? "registry-mirrors": ["https://7uuu3esz.mirror.aliyuncs.com"],

? "insecure-registries" : ["myregistrydomain.com:5000"]

}

EOF

systemctl daemon-reload

systemctl restart docker
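Confirm that Docker now uses the systemd cgroup driver (kubeadm's preflight check warns when the driver is cgroupfs):

docker info | grep -i 'cgroup driver'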

# For systems running Linux kernel 4.0 or later, or RHEL/CentOS with kernel 3.10.0-514 and above, overlay2 is the preferred storage driver.

VI. Install kubeadm, kubelet and kubectl

The following packages need to be installed on every machine:

kubeadm: the command to bootstrap the cluster.

kubelet: runs on every node in the cluster and starts Pods and containers.

kubectl: the command-line tool for talking to the cluster.

cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

exclude=kubelet kubeadm kubectl

EOF

yum -y install kubeadm-1.19.0 kubectl-1.19.0 kubelet-1.19.0 --disableexcludes=kubernetes

systemctl enable --now kubelet

kubelet will now restart every few seconds, crash-looping as it waits for instructions from kubeadm.
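This crash loop is expected and can be observed with:

systemctl status kubelet

journalctl -u kubelet -f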

VII. Configure command auto-completion

# Install the bash auto-completion package

yum install bash-completion -y

# Set up kubectl and kubeadm completion; takes effect at the next login

kubectl completion bash >/etc/bash_completion.d/kubectl

kubeadm completion bash > /etc/bash_completion.d/kubeadm

VIII. Pre-pull the Kubernetes images

Because of network conditions inside China, the Kubernetes images have to be pulled from mirror sites or from copies pushed to Docker Hub by other users.

kubeadm config images list --kubernetes-version v1.19.0
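For v1.19.0 the output should list the same versions used in pull.sh below:

k8s.gcr.io/kube-apiserver:v1.19.0

k8s.gcr.io/kube-controller-manager:v1.19.0

k8s.gcr.io/kube-scheduler:v1.19.0

k8s.gcr.io/kube-proxy:v1.19.0

k8s.gcr.io/pause:3.2

k8s.gcr.io/etcd:3.4.9-1

k8s.gcr.io/coredns:1.7.0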

Script: pull.sh

#!/bin/bash

# Script For Quick Pull K8S Docker Images

KUBE_VERSION=v1.19.0

PAUSE_VERSION=3.2

CORE_DNS_VERSION=1.7.0

ETCD_VERSION=3.4.9-1

# pull kubernetes images from hub.docker.com

docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION

docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION

docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION

docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION

# pull aliyuncs mirror docker images

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

# retag to k8s.gcr.io prefix

docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION

docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION

docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION

docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION

# remove the original tags; the images themselves won't be deleted.

docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION

docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION

docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION

docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

After running the script there are 7 images: proxy, apiserver, controller-manager, scheduler, etcd, coredns, pause:

[root@k8smaster ~]# docker images | grep "k8s.gcr.io"

k8s.gcr.io/kube-proxy                v1.19.0   bc9c328f379c   10 months ago   118MB

k8s.gcr.io/kube-apiserver            v1.19.0   1b74e93ece2f   10 months ago   119MB

k8s.gcr.io/kube-controller-manager   v1.19.0   09d665d529d0   10 months ago   111MB

k8s.gcr.io/kube-scheduler            v1.19.0   cbdc8369d8b1   10 months ago   45.7MB

k8s.gcr.io/etcd                      3.4.9-1   d4ca8726196c   12 months ago   253MB

k8s.gcr.io/coredns                   1.7.0     bfe3a36ebd25   12 months ago   45.2MB

k8s.gcr.io/pause                     3.2       80d28bedfe5d   17 months ago   683kB

IX. Initialize the k8s master node

# run on the master node

kubeadm config print init-defaults >init.yaml

Contents of init.yaml (some fields need to be modified, and some added):

apiVersion: kubeadm.k8s.io/v1beta2

bootstrapTokens:

- groups:

  - system:bootstrappers:kubeadm:default-node-token

  token: abcdef.0123456789abcdef

  ttl: 24h0m0s

  usages:

  - signing

  - authentication

kind: InitConfiguration

localAPIEndpoint:

  advertiseAddress: 192.168.23.10

  bindPort: 6443

nodeRegistration:

  criSocket: /var/run/dockershim.sock

  name: node

  taints: null

---

apiServer:

  timeoutForControlPlane: 4m0s

apiVersion: kubeadm.k8s.io/v1beta2

certificatesDir: /etc/kubernetes/pki

clusterName: kubernetes

controllerManager: {}

dns:

  type: CoreDNS

etcd:

  local:

    dataDir: /var/lib/etcd

imageRepository: k8s.gcr.io

kind: ClusterConfiguration

kubernetesVersion: v1.19.0

networking:

  dnsDomain: cluster.local

  serviceSubnet: 10.96.0.0/12

  podSubnet: "10.244.0.0/16"

scheduler: {}

(podSubnet is an added field: pick a subnet of some internal range, e.g. carve 172.16.1.0/24, 172.16.2.0/24, etc. out of 172.16.0.0/16. Pod IPs on each node are then allocated only from this range, which avoids address conflicts.)

kubeadm init phase preflight (a dry-run check)

WARNINGs here are normal.

10.244.0.0/16 is the IP range flannel uses by default; the value you set depends on what the network component expects. It can be changed, as long as kubeadm's podSubnet/--pod-network-cidr and flannel's Network stay identical. It corresponds to the following in kube-flannel.yml:

  net-conf.json: |

    {

      "Network": "10.244.0.0/16",

      "Backend": {

        "Type": "vxlan"

      }

    }
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.19.0 | tee init.log

The information in init.log is useful; keep it safe.

Note down the command for nodes to join the master:

kubeadm join 192.168.23.10:6443 --token 2ax0m9.qbu5gri5c9rare3i --discovery-token-ca-cert-hash sha256:ea68c3242205dfddb052d60b0d79dc552f5dda5aa9e6e367b6075b53a59dabc2

If you didn't note it down, you can regenerate it with:

kubeadm token create --print-join-command 2>&1|tail -n 1

X. Configure master credentials

# run on the master node

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile

. /etc/profile
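kubectl can now reach the API server. The master will report NotReady until the network add-on from the next step is installed:

kubectl get nodes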

XI. Install the network add-on

# run on the master node

yum install -y wget

# download the latest flannel manifest

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f kube-flannel.yml
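Watch the flannel and coredns pods start; once they are running, the master node turns Ready:

kubectl get pods -n kube-system

kubectl get nodes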

XII. Export the images for use on other nodes

docker save `docker images | egrep "(proxy|apiserver|controller-manager|scheduler|etcd|coredns|pause)" | awk '/k8s.gcr.io/{printf "%s ",$1}'` > k8s_imagesv1.19.0.tar

XIII. Copy the image archive to the node machines and import it
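To get the archive onto each node, scp from the master works (k8snode1 is a hypothetical hostname; repeat for every node):

scp k8s_imagesv1.19.0.tar root@k8snode1:/root/

Then import it on each node: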

docker load < k8s_imagesv1.19.0.tar

XIV. Run the join command on each node

kubeadm join 192.168.23.10:6443 --token 2ax0m9.qbu5gri5c9rare3i --discovery-token-ca-cert-hash sha256:ea68c3242205dfddb052d60b0d79dc552f5dda5aa9e6e367b6075b53a59dabc2
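Back on the master, verify that the nodes have joined and eventually turn Ready:

kubectl get nodes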

XV. Run application workloads on the cluster

Create nginx.yaml:

apiVersion: apps/v1

kind: Deployment

metadata:

  name: www

spec:

  selector:

    matchLabels:

      app: nginx

  replicas: 3

  template:

    metadata:

      labels:

        app: nginx

    spec:

      containers:

      - name: nginx

        image: nginx

        imagePullPolicy: IfNotPresent

        ports:

        - containerPort: 80

kubectl apply -f nginx.yaml

deployment.apps/www configured
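You can wait for all replicas to become available:

kubectl rollout status deployment/www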

Scale out:

kubectl scale --current-replicas=3 --replicas=6 deployment/www

deployment.apps/www scaled
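Check that six pods are now running:

kubectl get pods -l app=nginx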

