Installing Kubernetes 1.18.5 with kubeadm

Preface

Trying out Helm 3, Kubernetes 1.18, and Istio 1.6 to see whether the existing cluster can be migrated over smoothly.

Versions

CentOS 7.6, with the kernel upgraded to 4.x

kubernetes:v1.18.5

helm:v3.2.4

istio:v1.6.4

Installation

Add the Kubernetes yum repository

cat /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl

Check which versions are available

# list available versions (the repo's exclude= line requires --disableexcludes)
yum list --showduplicates kubeadm --disableexcludes=kubernetes | grep 1.18
# download the RPM packages locally
yum install kubectl-1.18.5-0.x86_64 kubeadm-1.18.5-0.x86_64 kubelet-1.18.5-0.x86_64 --disableexcludes=kubernetes --downloadonly --downloaddir=./rpmKubeadm

List the images required by Kubernetes 1.18.5

#kubeadm config images list --kubernetes-version=v1.18.5
W0709 15:55:40.303088    7778 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.5
k8s.gcr.io/kube-controller-manager:v1.18.5
k8s.gcr.io/kube-scheduler:v1.18.5
k8s.gcr.io/kube-proxy:v1.18.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Pull the images and export them to a tarball

#!/bin/bash
# images.sh: pull the images needed by this Kubernetes version and export them to a tarball
url=k8s.gcr.io
version=v1.18.5
images=(`kubeadm config images list --kubernetes-version=$version | awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  #echo $imagename
  echo "docker pull $url/$imagename"
  docker pull $url/$imagename
done

images=(`kubeadm config images list --kubernetes-version=$version`)
echo "docker save ${images[@]} -o kubeDockerImage$version.tar"
docker save ${images[@]} -o kubeDockerImage$version.tar
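
To build the offline bundle, the script (saved as images.sh, matching the tree output below) is run on a host that can reach k8s.gcr.io:

bash images.sh
ls -lh kubeDockerImagev1.18.5.tar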

The final directory layout is as follows

[root@dev-k8s-master-1-105 v1-18]# tree ./
./
├── images.sh
├── kubeDockerImagev1.18.5.tar
└── rpmKubeadm
    ├── cri-tools-1.13.0-0.x86_64.rpm
    ├── kubeadm-1.18.5-0.x86_64.rpm
    ├── kubectl-1.18.5-0.x86_64.rpm
    ├── kubelet-1.18.5-0.x86_64.rpm
    └── kubernetes-cni-0.8.6-0.x86_64.rpm

1 directory, 7 files
[root@dev-k8s-master-1-105 v1-18]#

Install the RPMs and load the images

yum install -y ./rpmKubeadm/*.rpm
docker load -i kubeDockerImagev1.18.5.tar
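
The notes skip the usual node preparation; before kubeadm init the following is normally still needed (a sketch, assuming a stock CentOS 7 node with the systemd-managed kubelet just installed):

# let kubeadm manage the kubelet service
systemctl enable --now kubelet
# kubeadm's preflight checks fail if swap is enabled
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
# make bridged pod traffic visible to iptables
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system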

Initialize the cluster

 kubeadm init --kubernetes-version=1.18.5 --apiserver-advertise-address=192.168.1.105 --pod-network-cidr=10.81.0.0/16

Excerpt of the output

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.105:6443 --token xnlao7.5qsgih5vft0n2li6     --discovery-token-ca-cert-hash sha256:1d341b955245da64a5e28791866e0580a5e223a20ffaefdc6b2729d9fb1739b4 
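
After following the printed instructions to set up kubectl, the control-plane node can be checked; it will stay NotReady until the pod network add-on from the next step is applied:

kubectl get nodes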

Install the pod network (Calico)

wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
kubectl apply -f calico.yaml
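
The cluster was initialized with --pod-network-cidr=10.81.0.0/16, while the v3.14 calico.yaml ships with a commented-out default pool of 192.168.0.0/16. Depending on the manifest it may be necessary to pin the pool explicitly before applying (check the downloaded file first):

# see whether the manifest pins a pod CIDR
grep -n CALICO_IPV4POOL_CIDR calico.yaml
# if CALICO_IPV4POOL_CIDR is commented out or set to another range, uncomment it
# and set its value to "10.81.0.0/16" to match --pod-network-cidr, then re-apply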

Installation complete

[root@dev-k8s-master-1-105 v1-18]# kubectl get pods -n kube-system
NAME                                           READY   STATUS    RESTARTS   AGE
calico-kube-controllers-76d4774d89-kp8nz       1/1     Running   0          46h
calico-node-k5m4b                              1/1     Running   0          46h
calico-node-l6hq7                              1/1     Running   0          46h
coredns-66bff467f8-hgbvw                       1/1     Running   0          47h
coredns-66bff467f8-npmxl                       1/1     Running   0          47h
etcd-dev-k8s-master-1-105                      1/1     Running   0          47h
kube-apiserver-dev-k8s-master-1-105            1/1     Running   0          47h
kube-controller-manager-dev-k8s-master-1-105   1/1     Running   0          47h
kube-proxy-6dx95                               1/1     Running   0          46h
kube-proxy-926mq                               1/1     Running   0          47h
kube-scheduler-dev-k8s-master-1-105            1/1     Running   0          47h
[root@dev-k8s-master-1-105 v1-18]# 

Install Helm 3

Main changes: releases are no longer global (they are namespace-scoped), the release name must be specified explicitly, and Tiller is gone.

Install

wget https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
tar -zxf helm-v3.2.4-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
# Helm 3 needs no initialization step; helm init (and Tiller) were removed
# add the public chart repository
helm repo add stable https://kubernetes-charts.storage.googleapis.com
# list configured repositories
helm repo list
# search for a chart
helm search repo nginx
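
Since releases are namespace-scoped in Helm 3, the release name (and optionally the namespace) is given at install time. A minimal sketch using the stable repo added above; the chart and release names are only examples:

# install a chart under an explicit release name into its own namespace
helm install my-nginx stable/nginx-ingress --namespace demo --create-namespace
# releases are listed per namespace
helm list --namespace demo
# remove the release again
helm uninstall my-nginx --namespace demo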

Check the version

[root@dev-k8s-master-1-105 ~]# which helm
/usr/local/bin/helm
[root@dev-k8s-master-1-105 ~]# helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
[root@dev-k8s-master-1-105 ~]#

Check the Helm environment

[root@dev-k8s-master-1-105 ~]# helm env
HELM_BIN="helm"
HELM_DEBUG="false"
HELM_KUBEAPISERVER=""
HELM_KUBECONTEXT=""
HELM_KUBETOKEN=""
HELM_NAMESPACE="default"
HELM_PLUGINS="/root/.local/share/helm/plugins"
HELM_REGISTRY_CONFIG="/root/.config/helm/registry.json"
HELM_REPOSITORY_CACHE="/root/.cache/helm/repository"
HELM_REPOSITORY_CONFIG="/root/.config/helm/repositories.yaml"
[root@dev-k8s-master-1-105 ~]# 

Point Helm at the Kubernetes cluster

Helm v3 no longer needs Tiller; it talks to Kubernetes through the API server directly. The KUBECONFIG environment variable can be set to point to a kubeconfig file containing the API server address and token; the default is ~/.kube/config.

export KUBECONFIG=/root/.kube/config # can also be added to /etc/profile
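
The same mechanism can target another cluster for a single command; the kubeconfig path below is only an example:

KUBECONFIG=/root/.kube/other-cluster.config helm list --all-namespaces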

Install the GitLab plugin (helm-git)

[root@dev-k8s-master-1-105 ~]# helm plugin install https://github.com/diwakar-s-maurya/helm-git
[root@dev-k8s-master-1-105 ~]# helm plugin ls
NAME    	VERSION	DESCRIPTION                                     
helm-git	1.0.0  	Let's you use private gitlab repositories easily
[root@dev-k8s-master-1-105 ~]#

# add the GitLab repository address
helm repo add myhelmrepo gitlab://username/project:master/kubernetes/helm-chart
helm repo list
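
Charts from the newly added repository can then be searched like any other repo:

helm search repo myhelmrepo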

Install Istio 1.6.4 with istioctl

Download

wget https://github.com/istio/istio/releases/download/1.6.4/istioctl-1.6.4-linux-amd64.tar.gz
tar -zxf istioctl-1.6.4-linux-amd64.tar.gz
cp istioctl /usr/local/bin/
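
A quick client-side check (no cluster access needed):

istioctl version --remote=false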

Default installation
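
A minimal sketch of a default installation, assuming istioctl 1.6, where istioctl install replaces the older istioctl manifest apply:

# install the default profile into the cluster
istioctl install --set profile=default
# verify the control-plane pods
kubectl get pods -n istio-system
# opt a namespace into automatic sidecar injection
kubectl label namespace default istio-injection=enabled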
