Deploying a K8s Cluster Quickly with kubeadm

kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster.

With it, a Kubernetes cluster can be stood up with just two commands:

# Create a Master node
$ kubeadm init

# Join a Node to the cluster
$ kubeadm join <master-ip>:<port>

1. Prerequisites

Before starting, the machines that will form the Kubernetes cluster must meet the following requirements:

  • One or more machines running CentOS 7.x x86_64
  • Hardware: 2GB+ RAM, 2+ CPUs, 30GB+ disk
  • Full network connectivity between all machines in the cluster
  • Internet access, needed for pulling images
  • Swap disabled

2. Objectives

  1. Install Docker and kubeadm on all nodes
  2. Deploy the Kubernetes Master
  3. Deploy a container network plugin
  4. Deploy the Kubernetes Nodes and join them to the cluster
  5. Deploy the Dashboard web UI to view Kubernetes resources visually

3. Environment Preparation

Role        IP
k8s-master  192.168.81.57
k8s-node1   192.168.81.58
k8s-node2   192.168.81.59

Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld

Disable SELinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
$ setenforce 0  # temporary

Disable swap:
$ swapoff -a  # temporary
$ vim /etc/fstab  # permanent: comment out the swap line
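
The `vim /etc/fstab` step means commenting out the swap entry so it is not mounted after reboot. A non-interactive sketch, assuming a typical CentOS 7 fstab layout (shown here against a demo copy; on a real node run the sed against /etc/fstab itself):

```shell
# Build a demo fstab with a root and a swap entry
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root /       xfs   defaults 0 0
/dev/mapper/centos-swap swap    swap  defaults 0 0
EOF
# Prefix matching lines with '#': the pattern matches lines containing a "swap" field
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /tmp/fstab.demo
grep swap /tmp/fstab.demo
```

After this, only the swap line is commented; the root filesystem entry is untouched.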

Set the hostname:
$ hostnamectl set-hostname <hostname>

Add hosts entries on the master:
$ cat >> /etc/hosts << EOF
192.168.81.57 k8s-master
192.168.81.58 k8s-node1
192.168.81.59 k8s-node2
EOF

Pass bridged IPv4 traffic to the iptables chains:
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system  # apply

Time synchronization:
$ yum install ntpdate -y
$ ntpdate time.windows.com

4. Install Docker/kubeadm/kubelet on All Nodes

Kubernetes uses Docker as its default CRI (container runtime), so install Docker first.

4.1 Install Docker

$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 18.06.1-ce, build e68fc7a
$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

4.2 Add the Aliyun YUM Repository

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4.3 Install kubeadm, kubelet, and kubectl

Since releases come out frequently, pin the versions here:

$ yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
$ systemctl enable kubelet

5. Deploy the Kubernetes Master

Run on 192.168.81.57 (the master).

$ kubeadm init \
  --apiserver-advertise-address=192.168.81.57 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

Because the default image registry k8s.gcr.io is unreachable from inside China, the Aliyun mirror registry is specified here.

Set up the kubectl tool:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes

6. Install a Pod Network Plugin (CNI)

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Make sure the quay.io registry is reachable.

If the Pod image fails to download, switch to this image instead: lizhenliang/flannel:v0.11.0-amd64

7. Join the Kubernetes Nodes

Run on 192.168.81.58/59 (the nodes).

To add a new node to the cluster, run the kubeadm join command printed in the kubeadm init output:

$ kubeadm join 192.168.81.57:6443 --token esce21.q6hetwm8si29qxwn \
    --discovery-token-ca-cert-hash sha256:00603a05805807501d7181c3d60b478788408cfe6cedefedb1f97569708be9c5

8. Test the Kubernetes Cluster

Create a pod in the cluster and verify that it runs correctly:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc

Access it at: http://NodeIP:Port

# Scale the nginx deployment to 3 replicas
kubectl scale deployment nginx --replicas=3

9. Deploy the Dashboard

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

The default image cannot be pulled from inside China; change it to: lizhenliang/kubernetes-dashboard-amd64:v1.10.1

By default the Dashboard is only reachable from inside the cluster. Change its Service to NodePort to expose it externally:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort  # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001  # added
  selector:
    k8s-app: kubernetes-dashboard

Access it at: https://NodeIP:30001

Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token from the output.

1. Workflow for Migrating a Project to the K8s Platform:

1. Build the image ---> 2. Manage Pods with a controller ---> 3. Expose the application ---> 4. Publish it externally ---> 5. Logging and monitoring

1. Building the image
Image layering: a base image (e.g. CentOS from a registry), a runtime-environment image (JDK/PHP/Go), and the project image (application on top of the runtime, deployed to Docker/K8s)

2. Managing Pods with a controller
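
The layering above can be illustrated with a hedged Dockerfile sketch for a Java project like this one (the base image tag and deploy path are assumptions for illustration, not the demo project's actual Dockerfile; the war name matches the mvn build output shown later):

```dockerfile
# Runtime-environment layer: an official image bundling the OS, JRE, and Tomcat
FROM tomcat:8-jre8
# Project layer: copy the build artifact on top of the runtime image
COPY target/ly-simple-tomcat-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080
CMD ["catalina.sh", "run"]
```

Each FROM/COPY step adds a layer, so rebuilding after a code change only rebuilds the project layer while the runtime layers are cached.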


Basic Concepts

Kubernetes Cluster Architecture and Components

The kubectl Command-Line Tool

Pod

  • The smallest deployment unit
  • A group of one or more containers
  • Containers in a Pod share the network namespace (and storage)
  • Pods are ephemeral

Controllers

  • ReplicaSet: ensures the expected number of Pod replicas
  • Deployment: stateless application deployment
  • StatefulSet: stateful application deployment
  • DaemonSet: ensures every Node runs a copy of a Pod
  • Job: one-off tasks
  • CronJob: scheduled tasks
  • Higher-level objects for deploying and managing Pods

Service

  • Prevents Pods from being lost track of
  • Defines an access policy for a group of Pods
  • Label: a tag attached to a resource, used to associate, query, and filter objects
  • Namespaces: logically isolate objects

An application usually runs as multiple Pod replicas spread across the nodes, so users need a load balancer (LB) in front of them. How are the Pods found? Through internal service discovery: the Service discovers the Pods and users reach them via the Service, which finds the required Pods and defines the load-balancing policy.

Deploying a Java Project

[root@k8s-master ~]# ls
1.txt            helm-v3.0.0-linux-amd64.tar.gz  kube-flannel.yaml          tomcat-java-demo-master
anaconda-ks.cfg  ingress-controller.yaml         kubernetes-dashboard.yaml  tomcat-java-demo-master.zip
# Unpack the java-demo project
[root@k8s-master ~]# unzip tomcat-java-demo-master.zip
[root@k8s-master ~]# cd tomcat-java-demo-master
[root@k8s-master tomcat-java-demo-master]# ls
db  Dockerfile  LICENSE  pom.xml  README.md  src

# src: source code
# pom.xml: the project's build configuration and dependencies
# Dockerfile: builds the project image
# db: the project's SQL files

# Add a MySQL machine
192.168.81.60
[root@localhost ~]# hostname mysql
[root@localhost ~]# bash
[root@mysql ~]# yum -y install mariadb  mariadb-server
[root@mysql ~]# systemctl start mariadb && systemctl enable mariadb


# From the master, import tables_ly_tomcat.sql from the project's db directory into the database machine
[root@k8s-master db]# ls
tables_ly_tomcat.sql
# Copy the SQL file over with scp
[root@k8s-master db]# scp tables_ly_tomcat.sql root@192.168.81.60:
# Database settings in the source code
[root@k8s-master resources]# pwd
/root/tomcat-java-demo-master/src/main/resources
[root@k8s-master resources]# ls
application.yml  log4j.properties  static  templates
[root@k8s-master resources]# vim application.yml 
server:
  port: 8080
spring:
  datasource:
    url: jdbc:mysql://192.168.81.60:3306/test?characterEncoding=utf-8
    username: test
    password: 123.com
    driver-class-name: com.mysql.jdbc.Driver
  freemarker:
    allow-request-override: false
    cache: true
    check-template-location: true
    charset: UTF-8
    content-type: text/html; charset=utf-8
    expose-request-attributes: false
    expose-session-attributes: false
    expose-spring-macro-helpers: false
    suffix: .ftl
    template-loader-path:
      - classpath:/templates/
# Point the datasource at 192.168.81.60; database: test, username: test, password: 123.com

# Database operations on 192.168.81.60
[root@mysql ~]# mysql -uroot 
# Switch to the test database
MariaDB [(none)]> use test
Database changed
# List the tables
MariaDB [test]> show tables;
Empty set (0.00 sec)
# Import the SQL
MariaDB [test]> source /root/tables_ly_tomcat.sql;
Query OK, 1 row affected, 1 warning (0.00 sec)

Database changed
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

# List the tables again
MariaDB [test]> show tables;
+----------------+
| Tables_in_test |
+----------------+
| user           |
+----------------+

# Grant the test user access to the test database with password 123.com
MariaDB [test]> grant all on test.* to 'test'@'%' identified by '123.com';


# Test the database connection from a node
[root@k8s-node2 ~]# yum -y install mariadb
[root@k8s-node2 ~]# mysql -h192.168.81.60 -utest -p123.com
Welcome to the MariaDB monitor.  Commands end with ; or \g.

# Connection works; now build the image
# Install the Java build environment
[root@k8s-master resources]# yum -y install maven java-1.8.0-openjdk

# Build with mvn
[root@k8s-master ~]# cd tomcat-java-demo-master
[root@k8s-master tomcat-java-demo-master]# mvn clean package -Dmaven.test.skip=true 

# The build produces a target directory; after changing the source code, rebuild to regenerate the war package
[root@k8s-master tomcat-java-demo-master]# ls
db  Dockerfile  LICENSE  pom.xml  README.md  src  target
[root@k8s-master tomcat-java-demo-master]# ls target/
classes  generated-sources  ly-simple-tomcat-0.0.1-SNAPSHOT  ly-simple-tomcat-0.0.1-SNAPSHOT.war  maven-archiver  maven-status
# Build the image
[root@k8s-master tomcat-java-demo-master]# docker build -t yexu/java-demo .
-t, --tag list    # image name
-f, --file string # path to the Dockerfile



# Log in to Docker Hub and push the image so it can be pulled easily
[root@k8s-master tomcat-java-demo-master]# docker login
[root@k8s-master tomcat-java-demo-master]# docker push yexu/java-demo

# Have kubectl generate the YAML instead of deploying directly
--dry-run  # generate without actually running
-o         # output format, yaml here
[root@k8s-master ~]# kubectl create deployment java-demo --image=yexu/java-demo --dry-run -o yaml 
# Write the generated YAML to a file
[root@k8s-master ~]# kubectl create deployment java-demo --image=yexu/java-demo --dry-run -o yaml >deploy.yaml

# Review and edit the YAML
[root@k8s-master ~]# cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: java-demo
  name: java-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: java-demo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: java-demo
    spec:
      containers:
      - image: yexu/java-demo
        name: java-demo

# Apply the YAML and check
[root@k8s-master ~]# kubectl apply -f deploy.yaml 
deployment.apps/java-demo created
[root@k8s-master ~]# kubectl get pod 
NAME                         READY   STATUS    RESTARTS   AGE
java-demo-6f6668759c-2pkhp   1/1     Running   0          112s
java-demo-6f6668759c-45ccw   1/1     Running   0          112s
java-demo-6f6668759c-wqhh4   1/1     Running   0          112s
nginx-86c57db685-s7dbb       1/1     Running   0          22h


# Expose the port
# kubectl expose creates a Service for the deployment named java-demo:
#   --port=80          Service port, used for access inside the cluster
#   --target-port=8080 the port the service inside the container listens on
#   --type=NodePort    makes it reachable from outside the cluster
#   -o yaml --dry-run  generate the YAML without running
[root@k8s-master ~]# kubectl expose deployment java-demo --port=80 --target-port=8080 --type=NodePort -o yaml --dry-run >svc.yaml

# Review and edit the YAML
[root@k8s-master ~]# cat svc.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: java-demo
  name: java-demo
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: java-demo
  type: NodePort

# Apply it
[root@k8s-master ~]# kubectl apply -f svc.yaml 
service/java-demo created

# Check pods and services
[root@k8s-master ~]# kubectl get pod,svc
NAME                             READY   STATUS    RESTARTS   AGE
pod/java-demo-6f6668759c-2pkhp   1/1     Running   0          26m
pod/java-demo-6f6668759c-45ccw   1/1     Running   0          26m
pod/java-demo-6f6668759c-wqhh4   1/1     Running   0          26m
pod/nginx-86c57db685-s7dbb       1/1     Running   0          22h

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/java-demo    NodePort    10.96.101.80    <none>        80:32700/TCP   44s
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        24h
service/nginx        NodePort    10.96.248.209   <none>        80:30165/TCP   23h


Understanding the Pod Object

  • The smallest deployment unit
  • A group of one or more containers
  • Containers in a Pod share the network namespace and storage
  • Pods are ephemeral

Why Pods Exist

Pods exist for tightly coupled applications.

Scenarios for tightly coupled applications:

  • Two applications exchange files with each other
  • Two applications communicate over 127.0.0.1 or a socket
  • Two applications call each other frequently

Pod Implementation

  • Shared networking
  • Shared storage

For network sharing, all of a Pod's containers must be in the same namespaces

# Example
kubectl get pod 
NAME                         READY   STATUS    RESTARTS   AGE
java-demo-6f6668759c-4bbj7   1/1     Running   0          45h
java-demo-6f6668759c-tct6j   1/1     Running   0          45h
java-demo-6f6668759c-zkqdq   1/1     Running   0          45h
nginx-86c57db685-s7dbb       1/1     Running   0          2d20h

# Export the YAML
kubectl get pod java-demo-6f6668759c-4bbj7 -o yaml >pod.yaml

# Edit the YAML
[root@k8s-master ~]# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-pod 
  name: my-pod 
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx
  - name: java
    image: yexu/java-demo:latest

# kubectl get pod 
NAME                         READY   STATUS    RESTARTS   AGE
java-demo-6f6668759c-4bbj7   1/1     Running   0          45h
java-demo-6f6668759c-tct6j   1/1     Running   0          45h
java-demo-6f6668759c-zkqdq   1/1     Running   0          45h
my-pod                       2/2     Running   0          19m
nginx-86c57db685-s7dbb       1/1     Running   0          2d20h
# my-pod shows 2/2: two containers running

# Describe the Pod: both containers share the same IP
[root@k8s-master ~]# kubectl describe pod my-pod
Name:         my-pod
Namespace:    default
Priority:     0
Node:         k8s-node2/192.168.81.59
Start Time:   Fri, 22 Jan 2021 14:36:26 +0800
Labels:       app=my-pod
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"my-pod"},"name":"my-pod","namespace":"default"},"spec":{"con...
Status:       Running
IP:           10.244.2.13
IPs:
  IP:  10.244.2.13
Containers:
  nginx:
    Container ID:   docker://d22872db08adcb9dfbfaa69ae87e106d2f4062543c02ca30fa2e6378e5c5197c
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:10b8cc432d56da8b61b070f4c7d2543a9ed17c2b23010b43af434fd40e2ca4aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 22 Jan 2021 14:36:34 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fd8lf (ro)
  java:
    Container ID:   docker://f7b7d906971854013c0a8ff815075ee909c5f12177e4126bc63dfed5ce0727a2
    Image:          yexu/java-demo:latest
    Image ID:       docker-pullable://yexu/java-demo@sha256:8fb25c40ca69834e69b801f373e388feb386bd67c5dc89aa4cb4d21d0ba39e5d
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 22 Jan 2021 14:36:38 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fd8lf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-fd8lf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fd8lf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                Message
  ----    ------     ----  ----                -------
  Normal  Scheduled  21m   default-scheduler   Successfully assigned default/my-pod to k8s-node2
  Normal  Pulling    21m   kubelet, k8s-node2  Pulling image "nginx"
  Normal  Pulled     21m   kubelet, k8s-node2  Successfully pulled image "nginx"
  Normal  Created    21m   kubelet, k8s-node2  Created container nginx
  Normal  Started    21m   kubelet, k8s-node2  Started container nginx
  Normal  Pulling    21m   kubelet, k8s-node2  Pulling image "yexu/java-demo:latest"
  Normal  Pulled     21m   kubelet, k8s-node2  Successfully pulled image "yexu/java-demo:latest"
  Normal  Created    21m   kubelet, k8s-node2  Created container java
  Normal  Started    21m   kubelet, k8s-node2  Started container java


Common Pod Template Fields

  • Environment variables
  • Image pulling
  • Resource limits
  • Health checks

# A Service discovers Pods and forwards user traffic to them
# There are two kinds of health checks:
1. Check whether the application in the Pod is running normally; if not, the Pod is recreated according to the policy set on the Pod
2. Check whether the application is ready to serve; if not, the Service is told to stop forwarding traffic to that Pod and it is removed from the endpoints
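
The fields listed above can be sketched in one Pod template. This is an illustrative manifest, not from the original project: the env var name, probe paths, and resource values are assumptions, and the probes assume the app serves HTTP on 8080:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: field-demo
spec:
  containers:
  - name: app
    image: yexu/java-demo:latest
    imagePullPolicy: IfNotPresent   # pull policy: Always / IfNotPresent / Never
    env:                            # environment variables injected into the container
    - name: APP_PROFILE             # illustrative name
      value: "prod"
    resources:                      # resource requests and limits
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
    livenessProbe:                  # failure -> container restarted per restartPolicy
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 30
    readinessProbe:                 # failure -> Pod removed from Service endpoints
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 10
```

The livenessProbe implements health check 1 above, and the readinessProbe implements health check 2.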

Pod Container Classes and Design Patterns

Infrastructure Container: the base (pause) container, which maintains the Pod's network namespace

InitContainers: initialization containers, which run before the business containers

Containers: business containers, started in parallel
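
A sketch showing init and business containers in one Pod spec (names are illustrative; the infrastructure/pause container is implicit and managed by the kubelet; the wait-for-db check reuses the MySQL machine's address from this document as an example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:            # run to completion, in order, before business containers
  - name: wait-for-db
    image: busybox
    command: ["sh", "-c", "until nc -z 192.168.81.60 3306; do sleep 2; done"]
  containers:                # business containers start in parallel after init finishes
  - name: app
    image: yexu/java-demo:latest
```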

Deployment Controller

1. Pods and Controllers

  • Controllers: objects that manage and run containers in the cluster
  • Associated with Pods via label selectors
  • Controllers provide operational features for Pods, such as scaling and rolling upgrades

2. Deployment Features and Use Cases

  • Deploys stateless applications
  • Manages Pods and ReplicaSets
  • Provides rollout, replica control, rolling upgrades, and rollbacks
  • Provides declarative updates, e.g. updating only the image

Use cases: web services, microservices

3. YAML Field Breakdown

A Deployment YAML has two parts: the controller definition and the controlled object (the Pod template)

4. Deploying a Stateless Application with a Deployment

# Generate an example manifest
[root@k8s-master ~]# kubectl create deployment web --image=nginx --dry-run -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}

# Save the YAML to a file
[root@k8s-master ~]# kubectl create deployment web --image=nginx --dry-run -o yaml>web.yaml

# Remove the unneeded fields and change the image to yexu/java-demo
[root@k8s-master ~]# vim web.yaml 
[root@k8s-master ~]# cat web.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: yexu/java-demo 
        name: java

# Apply the YAML
[root@k8s-master ~]# kubectl apply -f web.yaml 
deployment.apps/web created

# Check the Pods
[root@k8s-master ~]# kubectl get pod 
NAME                         READY   STATUS    RESTARTS   AGE
dry-run-85865666c4-bg6lj     1/1     Running   0          22m
java-demo-6f6668759c-4bbj7   1/1     Running   0          4d18h
java-demo-6f6668759c-tct6j   1/1     Running   0          4d18h
java-demo-6f6668759c-zkqdq   1/1     Running   0          4d18h
my-pod                       2/2     Running   0          2d21h
nginx-86c57db685-s7dbb       1/1     Running   0          5d17h
web-685bf56875-fr7td         1/1     Running   0          27s
web-685bf56875-kn9kl         1/1     Running   0          26s
web-685bf56875-tl6lc         1/1     Running   0          26s

# List the deployments
[root@k8s-master ~]# kubectl get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
dry-run     1/1     1            1           140m
java-demo   3/3     3            3           4d20h
nginx       1/1     1            1           5d20h
web         3/3     3            3           118m

# Expose the port
[root@k8s-master ~]# kubectl expose --name=web deployment web --port=80 --target-port=8080 --type=NodePort

# Check the Service; the web NodePort is 32713
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
java-demo    NodePort    10.96.148.245   <none>        80:30793/TCP   4d20h
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        5d21h
nginx        NodePort    10.96.248.209   <none>        80:30165/TCP   5d20h
web          NodePort    10.96.205.107   <none>        80:32713/TCP   31s

# Access from a browser
http://192.168.81.57:32713/

5. Rolling Updates and Rollbacks

# Rolling update: create a new Pod, wait until it is Running, mark and delete an old Pod, then repeat the same process for the next replica
kubectl set image deployment/web nginx=nginx:1.15 
# Check the rollout status
kubectl rollout status deployment/web 

# Rollback
kubectl rollout history deployment/web 
kubectl rollout undo deployment/web 
kubectl rollout undo deployment/web --revision=2

# Upgrade example
# Replace the java container's image with nginx
[root@k8s-master ~]# kubectl set image deployment web java=nginx 
deployment.apps/web image updated

# Check the Pods
[root@k8s-master ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
dry-run-85865666c4-bg6lj     1/1     Running   0          169m
java-demo-6f6668759c-4bbj7   1/1     Running   0          4d20h
java-demo-6f6668759c-tct6j   1/1     Running   0          4d20h
java-demo-6f6668759c-zkqdq   1/1     Running   0          4d20h
my-pod                       2/2     Running   0          2d23h
nginx-86c57db685-s7dbb       1/1     Running   0          5d20h
web-557c6dc8d7-mhlds         1/1     Running   0          35s
web-557c6dc8d7-mx5bp         1/1     Running   0          21s
web-557c6dc8d7-w4dmx         1/1     Running   0          28s

# The browser URL no longer works, because the Service still forwards to 8080 (the Tomcat port) while nginx listens on 80
http://192.168.81.57:32713/

# Edit the web Service
[root@k8s-master ~]# kubectl edit svc/web
service/web edited
# change targetPort from 8080 to 80

# http://192.168.81.57:32713/		// now serves the nginx version

# Rollback example
# Check the Deployment's rollout history
[root@k8s-master ~]# kubectl rollout history deployment web
deployment.apps/web 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

# Roll back; by default this reverts to the previous revision
[root@k8s-master ~]# kubectl rollout undo deployment web
deployment.apps/web rolled back
[root@k8s-master ~]# kubectl get pod 
NAME                         READY   STATUS        RESTARTS   AGE
dry-run-85865666c4-bg6lj     1/1     Running       0          3h6m
java-demo-6f6668759c-4bbj7   1/1     Running       0          4d20h
java-demo-6f6668759c-tct6j   1/1     Running       0          4d20h
java-demo-6f6668759c-zkqdq   1/1     Running       0          4d20h
my-pod                       2/2     Running       0          3d
nginx-86c57db685-s7dbb       1/1     Running       0          5d20h
web-557c6dc8d7-mhlds         0/1     Terminating   0          16m
web-685bf56875-gj6wm         1/1     Running       0          20s
web-685bf56875-jk6rn         1/1     Running       0          25s
web-685bf56875-mmwd6         1/1     Running       0          14s

# Edit the Service again
[root@k8s-master ~]# kubectl edit svc/web
service/web edited
# change targetPort back from 80 to 8080

# http://192.168.81.57:32713/		// now serves the previous Java version

6. Scaling

# Scale-out example: change the replica count
[root@k8s-master ~]# kubectl scale deployment web --replicas=5
deployment.apps/web scaled

# The replica count goes from 3 to 5
[root@k8s-master ~]# kubectl get pod 
NAME                         READY   STATUS    RESTARTS   AGE
dry-run-85865666c4-bg6lj     1/1     Running   0          3h25m
java-demo-6f6668759c-4bbj7   1/1     Running   0          4d21h
java-demo-6f6668759c-tct6j   1/1     Running   0          4d21h
java-demo-6f6668759c-zkqdq   1/1     Running   0          4d21h
my-pod                       2/2     Running   0          3d
nginx-86c57db685-s7dbb       1/1     Running   0          5d20h
web-685bf56875-gj6wm         1/1     Running   0          19m
web-685bf56875-jk6rn         1/1     Running   0          19m
web-685bf56875-mmwd6         1/1     Running   0          19m
web-685bf56875-ntkvj         1/1     Running   0          32s
web-685bf56875-q9pdp         1/1     Running   0          32s

Every Deployment creates a ReplicaSet

[root@k8s-master ~]# kubectl get deploy,rs
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dry-run     1/1     1            1           3h27m
deployment.apps/java-demo   3/3     3            3           4d21h
deployment.apps/nginx       1/1     1            1           5d21h
deployment.apps/web         3/3     3            3           3h5m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dry-run-85865666c4     1         1         1       3h27m
replicaset.apps/java-demo-6f6668759c   3         3         3       4d21h
replicaset.apps/nginx-86c57db685       1         1         1       5d21h
replicaset.apps/web-557c6dc8d7         0         0         0       38m
replicaset.apps/web-685bf56875         3         3         3       3h5m

# Creating a Deployment first creates a ReplicaSet (RS), and the RS owns the Pods. The RS compares the Deployment's desired replica count with the number currently running: if there are too many it deletes one, if too few it adds one
# The RS also keeps the record of historical versions
# Rollbacks and rolling updates are likewise handled by RSs: a new RS starts Pods from the new image and associates them with the Service, which users keep accessing; as each new Pod becomes ready, a Pod of the old RS is killed, repeating until all traffic has moved to the new Pods

Service: a Unified Entry Point for Applications

Why Services Exist

  • Prevent Pods from being lost track of (service discovery)
  • Define an access policy for a group of Pods (load balancing)
  • Support three types: ClusterIP, NodePort, and LoadBalancer

Pods and Services

  • Associated via label selectors
  • The Service load-balances across the Pods (TCP/UDP, layer 4)

Service Types

**ClusterIP:** assigns a cluster-internal IP address, reachable only from inside the cluster; this is the default ServiceType.

A ClusterIP Service gives a set of Pods a stable, virtual IP address (VIP).

**NodePort:** assigns a cluster-internal IP address and additionally opens a port on every node to expose the service, so it can be reached from outside the cluster.

**LoadBalancer:** assigns a cluster-internal IP address and opens a port on every node. In addition, Kubernetes requests a load balancer from the underlying cloud platform and adds each node ([NodeIP]:[NodePort]) as a backend.
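
For comparison with the NodePort Service created earlier, a LoadBalancer Service differs only in its `type`. This is an illustrative sketch (the name is hypothetical, and it assumes a cloud provider that can actually provision the load balancer, which the bare-metal setup in this document does not have):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: java-demo-lb        # hypothetical name
spec:
  type: LoadBalancer        # cloud provider provisions an external LB in front of the NodePorts
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: java-demo          # same label selector as the java-demo Deployment
```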
