Kubernetes: Rolling Updates
Rolling Updates
A rolling update replaces only a small batch of replicas at a time; once that batch succeeds, it updates the next batch, until eventually every replica has been updated. The benefit of a rolling update is zero downtime: replicas keep running throughout the whole process, so business continuity is preserved.
Below we deploy a three-replica application with the initial image httpd:2.2, then update it to httpd:2.4.
The httpd:2.2 configuration file:
[root@master music]# cat httpd.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-deploy
  labels:
    run: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      run: apache
  template:
    metadata:
      labels:
        run: apache
    spec:
      containers:
      - name: httpd
        image: httpd:2.2
        ports:
        - containerPort: 80
Check the pods:
[root@master music]# kubectl get pod -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
http-deploy-849cf97446-6k8jj   1/1     Running   0          2m28s   10.244.1.54   node1   <none>           <none>
http-deploy-849cf97446-l987p   1/1     Running   0          2m28s   10.244.1.55   node1   <none>           <none>
http-deploy-849cf97446-mtsqf   1/1     Running   0          2m28s   10.244.2.42   node2   <none>           <none>
And check the current version:
[root@master music]# kubectl get replicasets.apps -o wide
NAME                     DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES      SELECTOR
http-deploy-849cf97446   3         3         3       10m   httpd        httpd:2.2   pod-template-hash=849cf97446,run=apache
Now let's do a rolling update: change the image in httpd.yml from httpd:2.2 to httpd:2.4, then re-apply the file, as sketched below.
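A minimal sketch of that step (sed is just one way to make the edit; changing the file by hand works equally well):

[root@master music]# sed -i 's/httpd:2.2/httpd:2.4/' httpd.yml
[root@master music]# kubectl apply -f httpd.yml
deployment.apps/http-deploy configured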
Now look again:
[root@master music]# kubectl get replicasets.apps -o wide
NAME                     DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES      SELECTOR
http-deploy-77c8788b9b   3         3         3       39s   httpd        httpd:2.4   pod-template-hash=77c8788b9b,run=apache
http-deploy-849cf97446   0         0         0       13m   httpd        httpd:2.2   pod-template-hash=849cf97446,run=apache
The image has changed from 2.2 to 2.4: a new ReplicaSet was created whose pods run the 2.4 image, and the old ReplicaSet was scaled down to zero.
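While the update is in flight you can follow its progress; kubectl rollout status blocks until the rollout completes (typical output shown, the exact lines depend on timing):

[root@master music]# kubectl rollout status deployment http-deploy
Waiting for deployment "http-deploy" rollout to finish: 1 out of 3 new replicas have been updated...
deployment "http-deploy" successfully rolled out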
[root@master music]# kubectl describe deployment
Name:                   http-deploy
Namespace:              default
CreationTimestamp:      Mon, 20 Jul 2020 20:08:32 +0800
Labels:                 run=apache
Annotations:            deployment.kubernetes.io/revision: 2
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"l
Selector:               run=apache
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=apache
  Containers:
   httpd:
    Image:        httpd:2.4
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   http-deploy-77c8788b9b (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  17m    deployment-controller  Scaled up replica set http-deploy-849cf974
  Normal  ScalingReplicaSet  5m9s   deployment-controller  Scaled up replica set http-deploy-77c8788b
  Normal  ScalingReplicaSet  4m52s  deployment-controller  Scaled down replica set http-deploy-849cf9
  Normal  ScalingReplicaSet  4m52s  deployment-controller  Scaled up replica set http-deploy-77c8788b
  Normal  ScalingReplicaSet  4m35s  deployment-controller  Scaled down replica set http-deploy-849cf9
  Normal  ScalingReplicaSet  4m35s  deployment-controller  Scaled up replica set http-deploy-77c8788b
  Normal  ScalingReplicaSet  4m34s  deployment-controller  Scaled down replica set http-deploy-849cf9
Each step replaces only a few pods at a time, and the batch size is configurable: Kubernetes provides two parameters, maxSurge and maxUnavailable, for fine-grained control over how many pods are replaced per step.
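A minimal sketch of how these two knobs are set in the Deployment spec (the values here are illustrative, not taken from the files above):

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 pod above the desired replica count during the update
      maxUnavailable: 0    # never fall below 3 available pods

With these values the Deployment creates one new pod at a time and only removes an old pod once the new one is ready.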
Rollback
Every time you update an application with kubectl apply, Kubernetes records the current configuration and saves it as a revision, so you can later roll back to any specific revision.
All it takes is appending one parameter, --record, when applying the file.
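Note that in newer kubectl releases --record has been deprecated; the CHANGE-CAUSE column shown later is read from the kubernetes.io/change-cause annotation, which can also be set explicitly (the message text here is illustrative):

[root@master music]# kubectl annotate deployment http-deploy kubernetes.io/change-cause="update to httpd:2.4.38"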
Below we create three configuration files that differ only in image version: httpd:2.4.37, httpd:2.4.38, and httpd:2.4.39.
[root@master music]# cat httpd.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-deploy
  labels:
    run: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      run: apache
  template:
    metadata:
      labels:
        run: apache
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.37   # the other two files are omitted here; only the image version differs
        ports:
        - containerPort: 80
Apply them:
[root@master music]# kubectl apply -f httpd.yml --record
deployment.apps/http-deploy created
[root@master music]# kubectl apply -f httpd1.yml --record
deployment.apps/http-deploy configured
[root@master music]# kubectl apply -f httpd2.yml --record
deployment.apps/http-deploy configured
Inspecting the Deployment shows the result of the updates:
[root@master music]# kubectl get deployments.apps -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES         SELECTOR
http-deploy   3/3     3            3           5m14s   httpd        httpd:2.4.39   run=apache
The Deployment has been updated from 2.4.37 to 2.4.39.
The effect of --record is to save the command itself in the revision history, so we can tell which configuration file each revision corresponds to. Use kubectl rollout history deployment to view the revision history:
[root@master music]# kubectl rollout history deployment
deployment.apps/http-deploy
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=httpd.yml --record=true
2         kubectl apply --filename=httpd1.yml --record=true
3         kubectl apply --filename=httpd2.yml --record=true
To return to a particular version, say the original 2.4.37, run:
[root@master music]# kubectl rollout history deployment   # first check the revision history
deployment.apps/http-deploy
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=httpd.yml --record=true
2         kubectl apply --filename=httpd1.yml --record=true
3         kubectl apply --filename=httpd2.yml --record=true
[root@master music]# kubectl get deployments.apps -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
http-deploy   3/3     3            3           21m   httpd        httpd:2.4.39   run=apache
[root@master music]# kubectl rollout undo deployment --to-revision=1
deployment.apps/http-deploy rolled back
[root@master music]# kubectl get deployments.apps -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
http-deploy   3/3     3            3           22m   httpd        httpd:2.4.37   run=apache
We are back on the version we specified, the original one. The revision history changes accordingly:
[root@master music]# kubectl rollout history deployment
deployment.apps/http-deploy
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=httpd1.yml --record=true
3         kubectl apply --filename=httpd2.yml --record=true
4         kubectl apply --filename=httpd.yml --record=true
The former revision 1 has become revision 4.
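To check what a given revision contains before rolling to it, kubectl can print that revision's pod template (using the Deployment name from this example):

[root@master music]# kubectl rollout history deployment http-deploy --revision=4

The output includes the revision's labels and pod template, so you can confirm the image before running kubectl rollout undo.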
Health Check
Strong self-healing is an important feature of container orchestration engines such as Kubernetes. By default, self-healing is implemented by automatically restarting containers that fail. Beyond that, users can set up finer-grained health checks with the liveness and readiness probe mechanisms to meet requirements such as:
1: Zero-downtime deployments
2: Avoiding the rollout of broken images
3: Safer rolling updates
The Default Health Check
Below we simulate a scenario where a container fails. The pod is configured as follows:
[root@master health]# cat health.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: healthcheck
  name: healthcheck
spec:
  restartPolicy: OnFailure
  containers:
  - name: healthcheck
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10; exit 1
The pod's restartPolicy is set to OnFailure; the default is Always.
sleep 10; exit 1 simulates the container failing 10 seconds after startup.
Apply the file to create the pod, named healthcheck.
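A minimal sketch of that step (standard apply; the printed line is the usual confirmation):

[root@master health]# kubectl apply -f health.yml
pod/healthcheck created

A few minutes later: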
[root@master health]# kubectl get pods
NAME          READY   STATUS             RESTARTS   AGE
healthcheck   0/1     CrashLoopBackOff   6          7m37s
As you can see, the container has already been restarted 6 times and the pod is in CrashLoopBackOff.
Liveness Probes
A liveness probe lets users define their own criteria for judging whether a container is healthy. If the probe fails, Kubernetes restarts the container.
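Besides the exec handler used in the example below, livenessProbe also supports httpGet and tcpSocket handlers. A minimal sketch of an HTTP variant (the path and port are illustrative, not part of this example):

livenessProbe:
  httpGet:
    path: /healthz      # illustrative health endpoint
    port: 8080          # illustrative container port
  initialDelaySeconds: 10
  periodSeconds: 5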
Example:
[root@master health]# cat liveness.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5
Once running, the process first creates the file /tmp/healthy and deletes it 30 seconds later. The probe runs cat /tmp/healthy: if the file exists, the container is considered healthy; otherwise it is treated as failed. initialDelaySeconds: 10 tells the kubelet to wait 10 seconds before the first probe, and periodSeconds: 5 repeats the probe every 5 seconds; after three consecutive failures (the default failureThreshold) the container is restarted.
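Create the pod (assuming the file is saved as liveness.yml; the printed line is the usual confirmation):

[root@master health]# kubectl apply -f liveness.yml
pod/liveness created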
Once /tmp/healthy is deleted, the probe starts failing, which you can see in the pod's event log:
[root@master health]# kubectl describe pod liveness
Name:         liveness
Namespace:    default
Priority:     0
Node:         node2/192.168.172.136
Start Time:   Mon, 20 Jul 2020 22:01:31 +0800
Labels:       test=liveness
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness"},"name":"liveness","namespace":"default"},"spec":...
Status:       Running
IP:           10.244.2.50
IPs:
  IP:  10.244.2.50
Containers:
  liveness:
    Container ID:  docker://5a535ca4965f649b90161b72521c4bc75c52097f7a6f0f816dee991a0000156e
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy;sleep 30;rm -rf /tmp/healthy;sleep 600
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Mon, 20 Jul 2020 22:10:13 +0800
      Finished:     Mon, 20 Jul 2020 22:11:27 +0800
    Ready:          False
    Restart Count:  6
    Liveness:       exec [cat /tmp/healthy] delay=10s timeout=1s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ptz8b (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-ptz8b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-ptz8b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  12m                   default-scheduler  Successfully assigned default/liveness to node2
  Normal   Pulled     9m43s (x3 over 12m)   kubelet, node2     Successfully pulled image "busybox"
  Normal   Created    9m43s (x3 over 12m)   kubelet, node2     Created container liveness
  Normal   Started    9m43s (x3 over 12m)   kubelet, node2     Started container liveness
  Normal   Killing    8m58s (x3 over 11m)   kubelet, node2     Container liveness failed liveness probe, will be restarted
  Normal   Pulling    8m28s (x4 over 12m)   kubelet, node2     Pulling image "busybox"
  Warning  Unhealthy  7m48s (x10 over 12m)  kubelet, node2     Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Warning  BackOff    2m50s (x4 over 3m3s)  kubelet, node2     Back-off restarting failed container
For comparison, right after the pod is created the probe still passes and the pod runs normally:
[root@master health]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
liveness   1/1     Running   0          27s