Prepare two machines:
Hostname | IP              | Role
test160  | 192.168.100.122 | master
test161  | 192.168.100.125 | node
I. Preparation
1. On both machines, disable SELinux, the firewall, and swap.
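The commands for this step are not shown in the original; on CentOS 7 a typical sequence (run as root on both machines) looks like this:

setenforce 0                                                           # disable SELinux immediately
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # persist across reboots
systemctl stop firewalld && systemctl disable firewalld                # stop and disable the firewall
swapoff -a                                                             # disable swap now
sed -i '/ swap / s/^/#/' /etc/fstab                                    # keep swap off after reboot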
2. Add hosts entries.
[root@test160 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.125 test161
[root@test161 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
185.199.110.153 kubernetes.github.io
192.168.100.122 test160
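Note that each hosts file above only maps the other node, which is why kubeadm later prints "[WARNING Hostname]: hostname ... could not be reached". The warning is harmless here, but it can be avoided by also mapping each machine's own hostname, e.g.:

echo "192.168.100.122 test160" >> /etc/hosts    # on test160
echo "192.168.100.125 test161" >> /etc/hosts    # on test161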
3. Add the kubelet and Docker yum repos on both machines.
[root@test160 ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

[root@test160 ~]# yum list | grep kube
cockpit-kubernetes.x86_64        195.12-1.el7.centos          extras
cri-tools.x86_64                 1.13.0-0                     kubernetes
kubeadm.x86_64                   1.20.5-0                     kubernetes
kubectl.x86_64                   1.20.5-0                     kubernetes
kubelet.x86_64                   1.20.5-0                     kubernetes
kubernetes.x86_64                1.5.2-0.7.git269f928.el7     extras
kubernetes-client.x86_64         1.5.2-0.7.git269f928.el7     extras
kubernetes-cni.x86_64            0.8.7-0                      kubernetes
kubernetes-master.x86_64         1.5.2-0.7.git269f928.el7     extras
kubernetes-node.x86_64           1.5.2-0.7.git269f928.el7     extras
rkt.x86_64                       1.27.0-1                     kubernetes
rsyslog-mmkubernetes.x86_64      8.24.0-57.el7_9              updates

[root@test160 k8s]# cat /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
4. Install the packages on every node.
yum install -y cri-tools kubeadm kubectl kubelet kubernetes-cni rkt.x86_64 docker-ce
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF
systemctl daemon-reload
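The kubeadm init output in step 7 will warn that Docker uses the "cgroupfs" cgroup driver while "systemd" is recommended. The author leaves the default in place (the warning is non-fatal), but if you prefer the recommended driver, a minimal /etc/docker/daemon.json, written before starting Docker in the next step, would be:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF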
5. Start the services.
[root@test160 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@test160 ~]# systemctl start docker
[root@test160 ~]# systemctl start kubelet.service
[root@test160 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
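Do not worry if kubelet keeps restarting at this point: until kubeadm init (or kubeadm join) writes /var/lib/kubelet/config.yaml, the service exits on startup and systemd restarts it in a loop. This is expected and can be observed with:

systemctl status kubelet     # shows the restart loop
journalctl -u kubelet -f     # follow the kubelet logs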
6. Configure kernel parameters for Kubernetes on all nodes.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
fs.file-max = 52706963
fs.nr_open = 52706963
net.netfilter.nf_conntrack_max = 2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
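One caveat: the net.bridge.bridge-nf-call-* keys only exist after the br_netfilter kernel module is loaded, so if sysctl --system complains that they are unknown, load the module first and make it persistent:

modprobe br_netfilter                                         # load the module now
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf     # load it on every boot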
7. Initialize the cluster on the master.
[root@test160 ~]# cat new.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.122
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: test160
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.100.122
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.100.122:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.4
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

[root@test160 ~]# kubeadm init --config new.yaml --upload-certs
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
	[WARNING Hostname]: hostname "test160" could not be reached
	[WARNING Hostname]: hostname "test160": lookup test160 on 202.96.128.86:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local test160] and IPs [10.96.0.1 192.168.100.122]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost test160] and IPs [192.168.100.122 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost test160] and IPs [192.168.100.122 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 67.501960 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
9f2632e16029303fffc5f56e53c29b1d0dd55a444b12e2f49e93bd6f6e7361b4
[mark-control-plane] Marking the node test160 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node test160 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7t2weq.bjbawausm0jaxury
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.100.122:6443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:4e1bd4a557221e73a0d3b7d168a2e02522475ec9a775602577cf4a1117fcfb9d \
    --control-plane --certificate-key 9f2632e16029303fffc5f56e53c29b1d0dd55a444b12e2f49e93bd6f6e7361b4

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.122:6443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:4e1bd4a557221e73a0d3b7d168a2e02522475ec9a775602577cf4a1117fcfb9d

[root@test160 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
Record the kubeadm join command above; it will be needed when nodes join the cluster.
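If the token expires (its ttl in new.yaml is 24h) or the join command is misplaced, a fresh one can be generated on the master at any time:

kubeadm token create --print-join-command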
Configure the environment variable: add export KUBECONFIG=/etc/kubernetes/admin.conf to root's .bashrc.
[root@test160 ~]# vim /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@test160 ~]# source .bashrc
[root@test160 ~]# kubectl get nodes
NAME      STATUS     ROLES                  AGE     VERSION
test160   NotReady   control-plane,master   5m14s   v1.20.5
8. Join the node to the cluster.
[root@test161 ~]# kubeadm join 192.168.100.122:6443 --token 7t2weq.bjbawausm0jaxury \
> --discovery-token-ca-cert-hash sha256:4e1bd4a557221e73a0d3b7d168a2e02522475ec9a775602577cf4a1117fcfb9d
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
	[WARNING Hostname]: hostname "test161" could not be reached
	[WARNING Hostname]: hostname "test161": lookup test161 on 202.96.128.86:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check the result on the master: the node joined successfully. Both nodes are still NotReady because the pod network has not been deployed yet.
[root@test160 ~]# kubectl get nodes
NAME      STATUS     ROLES                  AGE   VERSION
test160   NotReady   control-plane,master   84m   v1.20.5
test161   NotReady   <none>                 42s   v1.20.5
9. Deploy the pod network.
Download the Canal manifest; nothing needs to be changed, just apply it as-is.
[root@test160 ~]# curl https://docs.projectcalico.org/manifests/canal.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  184k  100  184k    0     0  33609      0  0:00:05  0:00:05 --:--:-- 51158
[root@test160 ~]# ll
total 196
-rw-------. 1 root root   1289 Mar 10 00:41 anaconda-ks.cfg
-rw-r--r--. 1 root root 188519 Apr  2 15:28 canal.yaml
-rw-r--r--. 1 root root    979 Apr  2 13:56 new.yaml
Apply it, then check again: both nodes are now Ready.
[root@test160 ~]# kubectl apply -f canal.yaml
configmap/canal-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
daemonset.apps/canal created
serviceaccount/canal created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
[root@test160 ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE    VERSION
test160   Ready    control-plane,master   107m   v1.20.5
test161   Ready    <none>                 24m    v1.20.5
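As an optional final check (not part of the original transcript), confirm that the canal and CoreDNS pods are all Running before scheduling workloads:

kubectl get pods -n kube-system -o wide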