Installing a Kubernetes cluster on Debian 12 (ARM architecture)

Before installing a Kubernetes cluster, get familiar with the host environment; individual users in particular should check their PC's CPU. Most current Apple computers use ARM-based M1-M4 chips, while Intel chips are generally x86. This article walks through installing a Kubernetes cluster on two virtual machine nodes running on an Apple computer.
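
Before following the ARM-specific steps, it is worth confirming what architecture the virtual machines actually report. These are standard Debian commands, not part of the original write-up:

uname -m                     # prints aarch64 on Apple Silicon VMs
dpkg --print-architecture    # prints arm64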

1. Cluster planning

master01 192.168.123.150

node01    192.168.123.151

2. Initial configuration. Set the hostname of each node (run on both machines, one command per node):

hostnamectl set-hostname master01   # on master01
hostnamectl set-hostname node01     # on node01

Configure the hosts file on each node so the nodes can communicate with each other during cluster initialization (run on both machines):

vim /etc/hosts

192.168.123.150 master01

192.168.123.151 node01
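
As an optional sanity check (not in the original steps), verify that each node can reach the other by name after saving the file:

ping -c 2 master01
ping -c 2 node01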

3. Permanently disable swap on every node (kubelet will not run with swap enabled by default)

swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab
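
To confirm swap is really off (an optional verification step):

swapon --show    # should print nothing
free -h          # the Swap line should read 0B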

4. Set the time zone and synchronize the clock on all nodes

timedatectl set-timezone Asia/Shanghai
sudo apt install -y chrony
sudo systemctl restart chrony
sudo systemctl status chrony
chronyc sources

5. Install ipset and ipvsadm, and load the IPVS and bridge kernel modules; otherwise kubeadm init will fail later

apt update
apt install ipset ipvsadm
# Configure the IPVS modules to be loaded at boot
cat << EOF |tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
cat << EOF |tee ipvs.sh
#!/bin/sh
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# Run the script to load the modules now
sh ipvs.sh
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
EOF
modprobe br_netfilter
sysctl --system

Load the modules automatically at boot:

systemctl enable --now systemd-modules-load.service
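
To confirm the modules and kernel parameters actually took effect before continuing (optional checks, not part of the original steps):

lsmod | grep -e ip_vs -e nf_conntrack -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print 1
echo br_netfilter | tee /etc/modules-load.d/br_netfilter.conf   # optional: also load br_netfilter at boot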

The preparation work is complete, and the cluster installation begins. Read the following carefully to avoid the common pitfalls.

Newer Kubernetes releases are usually installed with containerd as the container runtime, but in my experience containerd does not play well on ARM servers: all kinds of errors appear after installation and the faults are hard to troubleshoot. This guide therefore uses Docker as the runtime. For recent Kubernetes versions that also means installing cri-dockerd, which is what lets them keep working with Docker; cri-dockerd is the key to a successful cluster initialization.

6. Install Docker and cri-dockerd

# First remove any unofficial Docker packages
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do  apt-get remove $pkg; done
 
apt-get update
apt-get install ca-certificates curl
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
 
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null
  
apt-get update
# Install the latest version
apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Check whether Docker is running
service docker status
service docker start   # start it if it is not running
# Enable Docker at boot
systemctl enable docker && systemctl restart docker && systemctl status docker
docker info   # basic information

To avoid network errors when pulling images later, registry mirrors must be configured:

cat > /etc/docker/daemon.json <<-EOF
{
    "registry-mirrors": [
        "https://docker.mirrors.ustc.edu.cn",
        "https://registry.docker-cn.com",
        "https://registry.docker-cn.com",
        "https://docker-cf.registry.cyou",
        "https://dockercf.jsdelivr.fyi",
        "https://docker.jsdelivr.fyi",
        "https://dockertest.jsdelivr.fyi",
        "https://mirror.aliyuncs.com",
        "https://dockerproxy.com",
        "https://mirror.baidubce.com",
        "https://docker.m.daocloud.io",
        "https://docker.nju.edu.cn",
        "https://docker.mirrors.sjtug.sjtu.edu.cn",
        "https://docker.mirrors.ustc.edu.cn",
        "https://mirror.iscas.ac.cn",
        "https://docker.rainbond.cc"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2"
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
docker images

Whether this step succeeds depends on whether your network access is working normally; if image pulls fail, check /etc/resolv.conf first.
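
A simple way to confirm that both the Docker daemon and the network/mirrors are working is an optional smoke test with a tiny image:

docker pull hello-world
docker run --rm hello-world   # should print the "Hello from Docker!" message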

Next, install cri-dockerd. Note in particular: if you cannot download the release from GitHub, first download it with a browser on your own computer, then run wget on the VM and the download will succeed.

Kubernetes 1.24 and later no longer supports Docker directly (dockershim was removed), so cri-dockerd has to be installed manually.

1. Download the binary release from GitHub
Link: https://github.com/Mirantis/cri-dockerd/releases

Open the link to see the list of releases and make sure to pick the arm64 build; version 0.3.15 is used here.

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.15/cri-dockerd-0.3.15.arm64.tgz
tar -xzvf cri-dockerd-0.3.15.arm64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/
cat > /etc/systemd/system/cri-docker.service <<-'EOF'
[Unit]
Description=CRI Interface for Docker Application Container Engine
After=network-online.target docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
cat > /etc/systemd/system/cri-docker.socket <<-EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
systemctl daemon-reload
systemctl start cri-docker
systemctl enable cri-docker
systemctl status cri-docker
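
Before continuing, it is worth checking that cri-dockerd is actually serving the CRI socket that kubeadm will point at later (optional verification, not in the original steps):

cri-dockerd --version
systemctl is-active cri-docker.socket cri-docker.service
ls -l /var/run/cri-dockerd.sock   # this is the path used by the --cri-socket flag below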

Everything above must be done on both nodes.

7. Install kubeadm, kubelet and kubectl (on both nodes)

apt-get update
# Add the official Kubernetes GPG key and the stable repository
apt-get install -y apt-transport-https ca-certificates curl gpg

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add the Kubernetes apt repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list

# Refresh the package index
apt update
# Install kubelet, kubeadm and kubectl
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl   # pin the versions so the packages are not upgraded automatically

systemctl enable kubelet   # start kubelet at boot
systemctl restart kubelet && systemctl status kubelet   # restart and check status
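
A quick sanity check that the tools installed correctly and come from the intended 1.29 series:

kubeadm version -o short
kubelet --version
kubectl version --client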

With the packages installed, the most critical step is initializing the cluster. If all of the preparation above was done correctly, it should complete without errors. Note: run the initialization on master01 only; node01 does not need it.

kubeadm init --image-repository  registry.aliyuncs.com/google_containers --kubernetes-version v1.29.2 --apiserver-advertise-address 192.168.123.150 --pod-network-cidr=10.244.0.0/16  --cri-socket unix:///var/run/cri-dockerd.sock
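
If the init fails partway (for example because of a wrong --cri-socket path or leftover state from a previous attempt), it can be cleaned up and retried; pass the same --cri-socket flag to reset as well:

kubeadm reset -f --cri-socket unix:///var/run/cri-dockerd.sock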

After a successful init, set up the kubeconfig environment:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
 
export KUBECONFIG=/etc/kubernetes/admin.conf
echo  "export KUBECONFIG=/etc/kubernetes/admin.conf"  >> /etc/profile
source /etc/profile
 
# List the bootstrap tokens
kubeadm token list
# After the cluster initializes successfully, list the names of the running containers
docker ps | awk '{print $NF}'
# On the master, check whether the nodes are registered; NotReady at this stage is normal
kubectl get nodes

Note: Tab completion for these commands is not enabled by default; configure it as follows:

kubectl completion bash > /etc/bash_completion.d/kubectl
kubeadm completion bash > /etc/bash_completion.d/kubeadm
# Log out and back in so a new bash session picks up the completion scripts
exit

After the init succeeds, the output on master01 includes the command for joining nodes to the cluster. Run that command on node01, and be sure to append the --cri-socket parameter or it will fail. Do not copy the command below verbatim: use the token and hash printed by your own successful init on master01.

kubeadm join 192.168.123.150:6443 --token zqia6y.15piu123zsx3tjvi         --discovery-token-ca-cert-hash sha256:be7e74890ac905ac3d40749622ea39ded135db7e84263ecc2855fe9525fd580f --cri-socket unix:///var/run/cri-dockerd.sock
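
Bootstrap tokens expire after 24 hours by default. If node01 is joined later and the token from the original init output is no longer valid, a fresh join command can be printed on master01:

kubeadm token create --print-join-command
# remember to append --cri-socket unix:///var/run/cri-dockerd.sock to the printed command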

Running kubectl get nodes at this point still shows the nodes as NotReady; a pod network add-on has to be installed.

8. Install the flannel network plugin and its manifest. As with the cri-dockerd download, if the files cannot be fetched from the VM, download them on the host machine first. (Pull the flannel images on both nodes.)

下载地址:https://github.com/flannel-io/flannel/releases/

wget https://github.com/flannel-io/flannel/releases/download/v0.25.7/flannel-v0.25.7-linux-arm64.tar.gz
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
root@master01:~/flannel# pwd
/root/flannel
root@master01:~/flannel# grep image kube-flannel.yml
        image: docker.io/flannel/flannel:v0.25.7
        image: docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2
        image: docker.io/flannel/flannel:v0.25.7

With the registry mirrors configured earlier, both nodes can now pull these images quickly:

docker pull docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2
docker pull docker.io/flannel/flannel:v0.25.7

Once the images have been pulled, apply the flannel manifest. Run this on master01:

kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-flannel

Then run kubectl get nodes again and the nodes show up as Ready:

root@master01:~/flannel# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   6h11m   v1.29.9
node01     Ready    <none>          6h7m    v1.29.9

9. Finally, deploy a test service, expose it on a NodePort, and check from the host machine whether it is reachable.

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

root@master01:~/flannel# kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP                  6h14m
default       nginx        NodePort    10.107.107.146   <none>        80:30973/TCP             149m
kube-system   kube-dns     ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   6h14m

You can see that nginx's port 80 is exposed externally on NodePort 30973.

Open http://master01ip:30973 (here http://192.168.123.150:30973) in a browser and the nginx welcome page appears.
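
The same check can be done from the command line on the host or either node (the NodePort will differ on your cluster; 30973 is the value shown above):

curl -I http://192.168.123.150:30973   # expect an HTTP/1.1 200 OK response from nginx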

At this point the cluster has been installed successfully. This post will be updated as further issues come up.
