Deploying a k8s Cluster on Virtual Machines

I. Environment preparation:

  • Ubuntu Server version
Ubuntu 18.10 (GNU/Linux 4.18.0-10-generic x86_64)

Download link:

None for now

  • Docker version
Docker version 18.06.1-ce, build e68fc7a

Download link:

None for now

  • k8s version
v1.13.1

Download link:

None for now

  • Helper tools: Xshell and VMware
Xshell 7
VMware® Workstation 16 Pro  16.1.1 build-17801498

Download link:

None for now

II. Install Ubuntu 18.10 Server

During installation you can keep selecting Done; the one key step:

Setting the mirror (missing it here is fine; it can be changed later from inside the system):

One installer step asks for the mirror; enter: http://mirrors.aliyun.com/ubuntu

III. Install Docker

1. Basic preparation

  1. Docker requires an Ubuntu kernel newer than 3.10; check this prerequisite to verify that your Ubuntu version supports Docker.
uname -r 
4.18.0-21-generic (the major kernel version must match)
  2. Install curl and related base packages
sudo apt-get update && sudo apt-get install -y curl telnet wget man \
apt-transport-https \
ca-certificates \
software-properties-common vim

  3. Check the release codename

    • Ubuntu 18.10
    $ lsb_release -c
    Codename:	cosmic
    
  4. Configure and confirm a domestic mirror

    	$ cp /etc/apt/sources.list /etc/apt/sources.list.bak
    	Edit: sudo vim /etc/apt/sources.list
    	Enter the following (use your release codename; for 18.10 this is cosmic, not bionic):
    deb http://mirrors.aliyun.com/ubuntu/ cosmic main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ cosmic-security main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ cosmic-updates main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ cosmic-proposed main restricted universe multiverse
    deb http://mirrors.aliyun.com/ubuntu/ cosmic-backports main restricted universe multiverse
    deb-src http://mirrors.aliyun.com/ubuntu/ cosmic main restricted universe multiverse
    deb-src http://mirrors.aliyun.com/ubuntu/ cosmic-security main restricted universe multiverse
    deb-src http://mirrors.aliyun.com/ubuntu/ cosmic-updates main restricted universe multiverse
    deb-src http://mirrors.aliyun.com/ubuntu/ cosmic-proposed main restricted universe multiverse
    deb-src http://mirrors.aliyun.com/ubuntu/ cosmic-backports main restricted universe multiverse
    
    	$ cat /etc/apt/sources.list
    	
    

2. Online installation of Docker CE (not recommended; not tested here)

(The manual installation below is recommended, because the online route can install a mismatched version.)

Note: this domestic mirror currently serves 18.09, which does not match k8s; k8s recommends Docker CE 18.06.

  1. Install the GPG key and add the domestic mirror

    $ curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
    

    Add the domestic repository:

    $ sudo add-apt-repository \
        "deb https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu \
        $(lsb_release -cs) \
        stable"
    
  2. Refresh the package index

sudo apt update

  3. Install the version-listing tool

sudo apt-get install -y apt-show-versions

  4. List the available docker-ce versions

sudo apt-show-versions -a docker-ce
  5. Install Docker CE online
sudo apt-get update && sudo apt-get install -y docker-ce
Note that the version installed this way is `docker-ce_5%3a18.09.6~3-0~ubuntu-cosmic_amd64.deb`, i.e. 18.09, not the 18.06 that k8s expects.

3. Manual (offline) Docker installation [recommended; it worked for me on the first try]

  1. Download docker-ce_18.06.1~ce~3-0~ubuntu_amd64.deb
  2. Upload the file to the target server (master)
  3. Log in to the server and switch to the root account
  4. dpkg -i docker-ce_18.06.1~ce~3-0~ubuntu_amd64.deb

If you see the error

dpkg: error: dpkg frontend is locked by another process

another process is already using dpkg. Remove the lock:

sudo rm /var/lib/dpkg/lock

and retry.
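
Before deleting the lock, it is safer to confirm that no package manager is actually running; a quick check with standard tools (delete the lock only if this prints nothing):

```bash
# List any running apt/dpkg processes; the [a]/[d] bracket trick keeps grep itself out of the results
ps aux | grep -E '[a]pt|[d]pkg'
```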

If you see the error

master@master:~/package$ sudo dpkg -i docker-ce_18.06.1~ce~3-0~ubuntu_amd64.deb
 
[sudo] password for master: 
Selecting previously unselected package docker-ce.
(Reading database ... 100647 files and directories currently installed.)
Preparing to unpack docker-ce_18.06.1~ce~3-0~ubuntu_amd64.deb ...
Unpacking docker-ce (18.06.1~ce~3-0~ubuntu) ...
dpkg: dependency problems prevent configuration of docker-ce:
 docker-ce depends on libltdl7 (>= 2.4.6); however:
  Package libltdl7 is not installed.

dpkg: error processing package docker-ce (--install):
 dependency problems - leaving unconfigured
Processing triggers for man-db (2.8.4-2) ...
Processing triggers for systemd (239-7ubuntu10) ...
Errors were encountered while processing:
 docker-ce

it means docker-ce depends on the system library libltdl7; install it and re-run the dpkg command:

$ apt-get install -y libltdl7
  5. Verify with docker version
Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:24:56 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:23:21 2018
  OS/Arch:          linux/amd64
  Experimental:     false

Make sure the version is 18.06.
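
To keep a later apt upgrade from silently replacing this carefully chosen version, pinning the package is a reasonable precaution (apt-mark is standard on Ubuntu):

```bash
# Hold docker-ce at the installed 18.06 build so apt upgrade won't move it
sudo apt-mark hold docker-ce
# Confirm the hold is in place
apt-mark showhold
```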

4. Start Docker

  1. Enable and start docker
sudo systemctl enable docker 
sudo systemctl start docker 
  2. Log in and confirm docker is running
master@ubuntu:~$ sudo docker ps 
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

IV. Install k8s

1. Prepare the k8s installation environment

  1. Create the repository list file

    sudo touch /etc/apt/sources.list.d/kubernetes.list
    
    
  2. Add write permission

    sudo chmod 666 /etc/apt/sources.list.d/kubernetes.list 
    

    Then add the following content:

    sudo vim /etc/apt/sources.list.d/kubernetes.list 
    
    deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
    
  3. Run sudo apt update to refresh the sources; at first you will hit the following error

tcast@master:~$ sudo apt update
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease [8,993 B]
Err:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease
The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
Hit:2 http://mirrors.aliyun.com/ubuntu cosmic InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu cosmic-updates InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu cosmic-backports InRelease
Hit:5 http://mirrors.aliyun.com/ubuntu cosmic-security InRelease
Err:6 https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu cosmic InRelease
Could not wait for server fd - select (11: Resource temporarily unavailable) [IP: 202.141.176.110 443]
Reading package lists… Done
W: GPG error: http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
E: The repository ‘http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease’ is not signed.
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.




The key part is:

```bash
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
```

Signature verification failed, so the key must be imported. Note the value after NO_PUBKEY: 6A030B21BA07F4FB.

  4. Add the signing key

    Run the following command, using the last 8 hex digits of the key shown after NO_PUBKEY in the error:

    gpg --keyserver keyserver.ubuntu.com --recv-keys BA07F4FB
    

    Then run the following command; seeing OK means it succeeded, after which installation can proceed:

gpg --export --armor BA07F4FB | sudo apt-key add -


  5. Run `sudo apt update` again to refresh the source lists

```bash
master@master:~$ sudo apt update
Hit:1 https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu cosmic InRelease                  
Hit:2 http://mirrors.aliyun.com/ubuntu cosmic InRelease                                    
Hit:3 http://mirrors.aliyun.com/ubuntu cosmic-updates InRelease                            
Hit:4 http://mirrors.aliyun.com/ubuntu cosmic-backports InRelease                          
Hit:5 http://mirrors.aliyun.com/ubuntu cosmic-security InRelease                           
Get:6 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease [8,993 B]      
Ign:7 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages
Get:7 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 Packages [26.6 kB]
Fetched 26.6 kB in 42s (635 B/s)    
Reading package lists... Done
Building dependency tree       
Reading state information... Done
165 packages can be upgraded. Run 'apt list --upgradable' to see them.
```

If no errors or exceptions are reported above, the update succeeded.

2. Disable conflicting system facilities

  1. Disable the firewall

    $ sudo ufw disable
    Firewall stopped and disabled on system startup
    
  2. Turn off swap

    # turn swap off immediately
    $ sudo swapoff -a 
    # permanently disable the swap partition
    $ sudo sed -i 's/.*swap.*/#&/' /etc/fstab
    
  3. Disable selinux (on Ubuntu, SELinux is absent by default in favor of AppArmor, so this mostly confirms it is not enforcing)

# install the selinux utilities
$ sudo apt install -y selinux-utils
# disable selinux
$ setenforce 0
# reboot the operating system
$ shutdown -r now
# check whether selinux is now off
$ sudo getenforce
Disabled (meaning it is off)
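
A quick combined check that all three are in the desired state before moving on (standard commands, nothing k8s-specific):

```bash
sudo ufw status          # expect: Status: inactive
free -h | grep -i swap   # expect: 0B total / 0B used
getenforce               # expect: Disabled
```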

3. Configure kernel networking for k8s

(1) Configure kernel parameters so that bridged IPv4 traffic is passed to iptables chains

Create the file /etc/sysctl.d/k8s.conf

sudo touch /etc/sysctl.d/k8s.conf

Add the following content:

sudo vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0

(2) Apply the changes

# load the bridge netfilter module, then apply the sysctl file
$ sudo modprobe br_netfilter
$ sudo sysctl -p /etc/sysctl.d/k8s.conf
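
br_netfilter is not necessarily loaded again after a reboot on a stock server install; a suggested way to make it persistent and to verify the settings took effect:

```bash
# Load br_netfilter automatically at every boot
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
# Verify the module is loaded and the sysctl values are active
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```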

4. Install the k8s packages

Note: switch to the root user with $ su

  1. Install Kubernetes, currently version v1.13.1

    $ apt update && apt-get install -y kubelet=1.13.1-00 kubernetes-cni=0.6.0-00 kubeadm=1.13.1-00 kubectl=1.13.1-00
    
  2. Enable at boot, then reboot

    $ sudo systemctl enable kubelet && sudo systemctl start kubelet
    $ sudo shutdown -r now
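
As with Docker, pinning the freshly installed packages prevents an accidental apt upgrade from breaking the cluster later (a suggested precaution using standard apt-mark):

```bash
# Keep kubelet/kubeadm/kubectl at 1.13.1 until you deliberately upgrade
sudo apt-mark hold kubelet kubeadm kubectl kubernetes-cni
```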
    

5. Verify k8s

  1. Log in to the master host as root

  2. Run the following command

    kubectl get nodes 
    

The output is as follows:

$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
  3. Check the k8s version (this confirms the installation; the connection-refused line is normal because the cluster has not been initialized yet)

    $ kubectl version
    
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

V. Deploy the Cluster

1. Prepare the cluster environment

  1. In VMware, create two full clones of the master host (which has completed parts I through IV), named UbuntuNode1 and UbuntuNode2

  2. On each cloned VM, change the hostname and set a static IP as follows

  3. 1. Log in as root
    2. Open the config file `vim /etc/cloud/cloud.cfg`
    3. Set `preserve_hostname: true`
    
  4. Edit /etc/hostname so it contains a single line: node1 or node2 respectively

2. Basic configuration of master and nodes

Set the hostname for each node

node1 host:

sudo vim /etc/hostname

node1

node2 host:

sudo vim /etc/hostname

node2
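
Alternatively, hostnamectl makes the same change in one step (standard systemd tooling, shown here as an option):

```bash
# On node1; run the equivalent command with node2 on the other clone
sudo hostnamectl set-hostname node1
```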

Confirm the hostnames of the three machines:

$ cat /etc/hostname

Configure the IP addresses

  • master

sudo vim /etc/netplan/50-cloud-init.yaml

If the VM should instead get its address via DHCP, set dhcp4 to true.

network:
    ethernets:
        ens33:
            addresses: [192.168.236.177/24]
            dhcp4: false
            gateway4: 192.168.236.2
            nameservers:
                addresses: [192.168.236.2]
            optional: true
    version: 2

Apply the IP configuration:

netplan apply

Pitfall: after setting dhcp4: true, baidu.com could not be pinged.

sudo vim /etc/netplan/50-cloud-init.yaml

Check gateway4: it must be the gateway's IP address (192.168.236.2 in this setup); a netmask value such as 255.255.255.0 is invalid here.

netplan apply
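
A quick sanity check after netplan apply (generic iproute2 commands; adjust ens33 if your interface name differs):

```bash
ip -4 addr show ens33         # the static address should be listed
ip route                      # the default route should point at 192.168.236.2
ping -c 3 mirrors.aliyun.com  # confirms DNS and outbound connectivity
```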
  • node1

sudo vim /etc/netplan/50-cloud-init.yaml

network:
    ethernets:
        ens33:
            addresses: [192.168.236.178/24]
            dhcp4: false
            gateway4: 192.168.236.2
            nameservers:
                addresses: [192.168.236.2]
            optional: true
    version: 2

Apply the IP configuration:

netplan apply

Pitfall: after setting dhcp4: true, baidu.com could not be pinged.

sudo vim /etc/netplan/50-cloud-init.yaml

Check gateway4: it must be the gateway's IP address (192.168.236.2 here), not a netmask such as 255.255.255.0.

netplan apply
  • node2

sudo vim /etc/netplan/50-cloud-init.yaml

network:
    ethernets:
        ens33:
            addresses: [192.168.236.179/24]
            dhcp4: false
            gateway4: 192.168.236.2
            nameservers:
                addresses: [192.168.236.2]
            optional: true
    version: 2

Apply the IP configuration:

netplan apply

Pitfall: after setting dhcp4: true, baidu.com could not be pinged.

sudo vim /etc/netplan/50-cloud-init.yaml

Check gateway4: it must be the gateway's IP address (192.168.236.2 here), not a netmask such as 255.255.255.0.

netplan apply

Edit the hosts file

Note: master, node1, and node2 all need to be configured with the entries below.

Log in as root.

  1. Open the hosts file with vim /etc/hosts

    sudo vim /etc/hosts
    
  2. Append the following entries

    192.168.236.177 master
    192.168.236.178 node1
    192.168.236.179 node2
    
  3. Reboot with shutdown -r now (optional); a resolution check follows below
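
To confirm the /etc/hosts entries resolve on every machine, a short loop (plain bash; run it on each host):

```bash
# Ping each hostname once and report reachability
for h in master node1 node2; do
    ping -c 1 "$h" >/dev/null && echo "$h ok" || echo "$h unreachable"
done
```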

3. Configure the master node

Create a working directory


$ mkdir /home/master/working
$ cd /home/master/working/

Create the kubeadm.conf configuration file

  1. Create the configuration file for kubeadm, the k8s management tool; the following operations are done under /home/master/working/

Using a kubeadm configuration file lets you specify the docker registry address in the file itself, which makes fast deployment on an internal network easier.

Generate the configuration file:

kubeadm config print init-defaults ClusterConfiguration > kubeadm.conf
  2. Modify the following two entries in kubeadm.conf:
  • imageRepository

  • kubernetesVersion

    cd /home/master/working/
    
vim kubeadm.conf
# change imageRepository: k8s.gcr.io
# to registry.cn-beijing.aliyuncs.com/imcto
imageRepository: registry.cn-beijing.aliyuncs.com/imcto
# change kubernetesVersion: v1.13.0
# to kubernetesVersion: v1.13.1
kubernetesVersion: v1.13.1
  3. Modify the API server address in kubeadm.conf; this address will be used frequently later.
  • localAPIEndpoint:
localAPIEndpoint:
  advertiseAddress: 192.168.236.177
  bindPort: 6443

Note: 192.168.236.177 is the master host's IP address.

  4. Configure the subnets
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Here 10.244.0.0/16 and 10.96.0.0/12 are the subnets for k8s-internal pods and services respectively. It is best to keep these values; the flannel network configured later depends on them. (A quick verification follows.)
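
Before pulling images, the four edited fields can be double-checked in one command (plain grep; field names as they appear in the kubeadm config):

```bash
grep -E 'imageRepository|kubernetesVersion|advertiseAddress|podSubnet' kubeadm.conf
```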

Pull the required k8s component images

  1. List the images that need to be pulled
$ kubeadm config images list --config kubeadm.conf
registry.cn-beijing.aliyuncs.com/imcto/kube-apiserver:v1.13.1
registry.cn-beijing.aliyuncs.com/imcto/kube-controller-manager:v1.13.1
registry.cn-beijing.aliyuncs.com/imcto/kube-scheduler:v1.13.1
registry.cn-beijing.aliyuncs.com/imcto/kube-proxy:v1.13.1
registry.cn-beijing.aliyuncs.com/imcto/pause:3.1
registry.cn-beijing.aliyuncs.com/imcto/etcd:3.2.24
registry.cn-beijing.aliyuncs.com/imcto/coredns:1.2.6
  2. Pull the images
# download all images associated with the current k8s version
kubeadm config images pull --config ./kubeadm.conf

Initialize the kubernetes environment

# initialize and start
$ sudo kubeadm init --config ./kubeadm.conf

The output ends with a join command such as:
kubeadm join 192.168.63.2:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:0c5432b503a32ef2f08efc2e5daaa3ab3ff113adede39be4e76eea8fdb66ba4a
Save that line for later.

More kubeadm configuration parameters can be listed with (optional reading):

kubeadm config print-defaults

The successful startup output is long, but remember the content at the end:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.236.177:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:e778d3665e52f5a680a87b00c6d54df726c2eda601c0db3bfa4bb198af2262a8

Follow the official prompts and perform the following steps.

  1. Run the following commands

    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  2. Create the system service and start it

    # enable kubelet at boot
    $ sudo systemctl enable kubelet
    # start the k8s service
    $ sudo systemctl start kubelet
    

Verify the kubernetes startup result

  1. Check the nodes; note that the master status is NotReady, which shows the server initialized successfully (switch to the root user)
$ kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   12m   v1.13.1
  2. Check the current k8s cluster status
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

At this point there is only one master, no nodes, and it is in NotReady status, so the nodes must be joined into the cluster the master manages. Before joining, the cluster's internal communication network must be configured; flannel is used here.

Deploy the flannel network for internal cluster communication. Pitfall: kube-flannel.yml cannot be downloaded directly.

$ cd $HOME/working
$ wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

Edit the file and make sure the flannel network is correct: find the net-conf.json section and confirm it reads:

 net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

Manual workaround:

1. Edit /etc/hosts

wget of kube-flannel.yml fails to connect because the site is blocked; add this line to /etc/hosts:
199.232.68.133 raw.githubusercontent.com

2. Download the file (the downloaded copy has issues and needs the edits below)

sudo wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

3. Edit and save as follows:


---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

这个"10.244.0.0/16"和 ./kubeadm.conf中的podsubnet的地址要一致。**

Apply the current flannel configuration file

master@master:~/working$ kubectl apply -f kube-flannel.yml 

The output is as follows:

root@master:~/working# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Before installing the flannel network, the output of kubectl get nodes was:

master@master:~/working$ kubectl get node
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   10m   v1.13.1

After installing the flannel network, the output of kubectl get nodes is:

master@master:~/working$ kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   10m   v1.13.1

The master is now in Ready status, which means the configuration succeeded; next we configure the nodes to join this cluster.

4. Configure node1 and node2

Configure the nodes

1 Confirm the prerequisites

  1. Confirm swap is off

    apt install -y selinux-utils
    sudo swapoff -a
    
  2. Disable selinux

    sudo setenforce 0
    
  3. Confirm the firewall is off

    sudo ufw disable
    

2 Configure the node host environment for the k8s cluster

  1. Start the k8s background service

    # enable kubelet at boot
    $ sudo systemctl enable kubelet
    # start the k8s service
    $ sudo systemctl start kubelet
    
  2. Copy /etc/kubernetes/admin.conf from the master machine to node1 and node2

    Log in on the master terminal; in the scp targets below, master is the account name and /home/master/ is its home directory

    # copy admin.conf to node1
    sudo scp /etc/kubernetes/admin.conf master@192.168.236.178:/home/master/
    
    # copy admin.conf to node2
    sudo scp /etc/kubernetes/admin.conf master@192.168.236.179:/home/master/
    
  3. Log in on node1 and create the basic kube config environment

$ mkdir -p $HOME/.kube
$ sudo cp -i $HOME/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  4. Log in on node2 and create the basic kube config environment
$ mkdir -p $HOME/.kube
$ sudo cp -i $HOME/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  5. node1 and node2 each connect to the master and join its cluster, using the kubeadm join command
// regenerate the join command on the master if needed
kubeadm token create --print-join-command
// example output
kubeadm join 192.168.6.1:6443 --token 9j3u9g.cecwftu8ywal0sjl --discovery-token-ca-cert-hash sha256:a2600aa5707de58b49a6a6e41e52ab1aa50a5f48a138783a8f2ef6e8d8c38315
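
If only the CA certificate hash is needed (for example, to assemble the join command by hand), it can be recomputed on the master; this is the standard OpenSSL recipe from the kubeadm documentation:

```bash
# Prints the sha256 value used for --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```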

  6. Apply the flannel network on each of the two node hosts

Copy kube-flannel.yml from the master to each node:

# copy kube-flannel.yml to node1
sudo scp $HOME/working/kube-flannel.yml master@192.168.236.178:/home/master/

# copy kube-flannel.yml to node2
sudo scp $HOME/working/kube-flannel.yml master@192.168.236.179:/home/master/

Start the flannel network on each node:

master@node1:~$ kubectl apply -f kube-flannel.yml 
master@node2:~$ kubectl apply -f kube-flannel.yml
  7. Check whether the nodes have joined the k8s cluster (it takes a while before they become Ready)
master@node2:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   35m     v1.13.1
node1    Ready    <none>   2m23s   v1.13.1
node2    Ready    <none>   40s     v1.13.1
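
If a node stays NotReady, watching the kube-system pods usually shows why (standard kubectl; run it wherever the admin config is set up):

```bash
# The flannel and kube-proxy pods should reach Running status on every node
kubectl get pods -n kube-system -o wide
```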

VI. Possible Pitfalls

1. Enable ssh so that Xshell can connect

  • Install openssh-server
sudo apt-get install openssh-server
  • Edit the config file
cd /etc/ssh
sudo vim sshd_config
	PermitRootLogin yes
	StrictModes yes
  • Start the service

First start: sudo /etc/init.d/ssh restart (needed the first time after installation)

Start the service: sudo service ssh start

  • Check whether it started
ps -aux | grep ssh 

If an sshd process appears, the service started successfully.
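
Since the scp steps above prompt for a password each time, key-based login between the hosts is a convenient optional extra (standard OpenSSH tooling; the addresses are the ones used in this guide):

```bash
# On the master: generate a key pair (accept the defaults), then push it to both nodes
ssh-keygen -t rsa
ssh-copy-id master@192.168.236.178
ssh-copy-id master@192.168.236.179
```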

2. A cloned VM gets the same dynamic IP as the source host

Shut down the VM, right-click it in VMware, open Settings, Network Adapter, Advanced, Generate (MAC address), then restart the VM. If the IPs still match, repeat the procedure.
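
On Ubuntu 18.x, clones can also keep the same DHCP lease because they share /etc/machine-id, which systemd-networkd uses as the DHCP client identifier; regenerating it on the clone is a commonly suggested fix:

```bash
# Give the clone a fresh machine-id, then reboot so DHCP re-negotiates
sudo rm /etc/machine-id
sudo systemd-machine-id-setup
sudo reboot
```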

3. Enable the root account

sudo passwd root
Enter the new password twice.

Switch to the root account with su root

4. Preparation steps before bringing the cluster up

sudo swapoff -a
sudo setenforce 0
sudo ufw disable

5. The VM cannot reach the external network

(To be continued)
