Taint
Please refer to https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
A taint has one of three effects (general syntax sketched after this list):
- NoSchedule: new pods that do not tolerate the taint will not be scheduled onto the node; already-running pods are unaffected
- PreferNoSchedule: a soft version of NoSchedule; the scheduler tries to avoid the node but will still use it if necessary
- NoExecute: new pods are not scheduled, and already-running pods that do not tolerate the taint are evicted
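A taint is always written as key=value:effect (the value may be omitted). For illustration, applying each effect looks like the following sketch, where <node-name> and dedicated=web are placeholders, not names from this walkthrough:
~ kubectl taint node <node-name> dedicated=web:NoSchedule
~ kubectl taint node <node-name> dedicated=web:PreferNoSchedule
~ kubectl taint node <node-name> dedicated=web:NoExecute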
Tainting a Node
~ kubectl taint node worker02 role=nginx:NoSchedule
node/worker02 tainted
~ kubectl describe node worker02
Name: worker02
...
Taints: role=nginx:NoSchedule
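To see the taints on every node at once, a custom-columns query is also handy, a sketch:
~ kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints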
~ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cffb9df96-748j6 1/1 Running 0 5m21s 10.244.0.23 worker01 <none> <none>
nginx-7cffb9df96-d2rt5 1/1 Running 0 5m22s 10.244.1.35 worker02 <none> <none>
Checking the pods shows that already-running pods are not affected.
Scale up the pods to see the effect:
~ kubectl scale --replicas=5 deployment/nginx
deployment.extensions/nginx scaled
~ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cffb9df96-4rjhf 0/1 ContainerCreating 0 1s <none> worker01 <none> <none>
nginx-7cffb9df96-748j6 1/1 Running 0 6m8s 10.244.0.23 worker01 <none> <none>
nginx-7cffb9df96-89ngr 0/1 ContainerCreating 0 1s <none> worker01 <none> <none>
nginx-7cffb9df96-d2rt5 1/1 Running 0 6m9s 10.244.1.35 worker02 <none> <none>
nginx-7cffb9df96-zsgmd 0/1 ContainerCreating 0 1s <none> worker01 <none> <none>
The experiment shows that, because of the NoSchedule taint, new pods are not scheduled onto the tainted node.
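For contrast, a NoExecute taint would go further and also evict already-running pods that do not tolerate it; a sketch (be careful running this on a node with live workloads):
~ kubectl taint node worker02 role=nginx:NoExecute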
Removing a Node Taint
~ kubectl taint node worker02 role:NoSchedule-
node/worker02 untainted
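The trailing - is what removes the taint. The command above removes by key and effect; removing every taint with a given key also works, a sketch:
~ kubectl taint node worker02 role-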
Scale up again and check the effect:
~ kubectl scale --replicas=10 deployment/nginx
deployment.extensions/nginx scaled
~ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cffb9df96-2wlln 1/1 Running 0 4s 10.244.1.38 worker02 <none> <none>
nginx-7cffb9df96-4rjhf 1/1 Running 0 3m10s 10.244.0.26 worker01 <none> <none>
nginx-7cffb9df96-748j6 1/1 Running 0 9m17s 10.244.0.23 worker01 <none> <none>
nginx-7cffb9df96-89ngr 1/1 Running 0 3m10s 10.244.0.24 worker01 <none> <none>
nginx-7cffb9df96-9vbdt 1/1 Running 0 4s 10.244.1.36 worker02 <none> <none>
nginx-7cffb9df96-d2rt5 1/1 Running 0 9m18s 10.244.1.35 worker02 <none> <none>
nginx-7cffb9df96-hqvfh 1/1 Running 0 4s 10.244.1.37 worker02 <none> <none>
nginx-7cffb9df96-mhxmn 1/1 Running 0 4s 10.244.1.40 worker02 <none> <none>
nginx-7cffb9df96-xdnzc 1/1 Running 0 4s 10.244.1.39 worker02 <none> <none>
nginx-7cffb9df96-zsgmd 1/1 Running 0 3m10s 10.244.0.25 worker01 <none> <none>
The newly added pods can now be scheduled onto worker02.
Tolerating a Node Taint
Re-add the taint to the node:
~ kubectl taint node worker02 role=nginx:NoSchedule
node/worker02 tainted
Then add the following to the pod template in the Deployment definition:
tolerations:
- key: "role"
  operator: "Equal"
  value: "nginx"
  effect: "NoSchedule"
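For context, a minimal sketch of where this block lives in the Deployment manifest: the toleration sits in the pod template spec, alongside containers. The image and labels here are assumptions matching the nginx example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:
      - key: "role"
        operator: "Equal"
        value: "nginx"
        effect: "NoSchedule"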
You can view the resulting change with kubectl diff -f xxx.yaml (kubectl diff exits with status 1 when it finds differences, which explains the exit status 1 at the end):
lab02 kubectl diff -f nginx-deploy.yaml
diff -u -N /tmp/LIVE-021075426/extensions.v1beta1.Deployment.default.nginx /tmp/MERGED-503027673/extensions.v1beta1.Deployment.default.nginx
--- /tmp/LIVE-021075426/extensions.v1beta1.Deployment.default.nginx 2019-06-09 15:53:54.120907334 +0800
+++ /tmp/MERGED-503027673/extensions.v1beta1.Deployment.default.nginx 2019-06-09 15:53:54.128907426 +0800
@@ -4,7 +4,7 @@
annotations:
deployment.kubernetes.io/revision: "8"
creationTimestamp: "2019-06-08T04:27:19Z"
- generation: 23
+ generation: 24
labels:
app: nginx
name: nginx
@@ -14,7 +14,7 @@
uid: b43106be-89a5-11e9-8ec2-080027a62701
spec:
progressDeadlineSeconds: 2147483647
- replicas: 1
+ replicas: 2
revisionHistoryLimit: 2147483647
selector:
matchLabels:
@@ -42,6 +42,11 @@
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
+ tolerations:
+ - effect: NoSchedule
+ key: role
+ operator: Equal
+ value: nginx
status:
availableReplicas: 1
conditions:
exit status 1
Scale up again and check the effect:
lab02 kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-859959cc7f-7sx2p 1/1 Running 0 4s 10.244.0.38 worker01 <none> <none>
nginx-859959cc7f-8tpw7 1/1 Running 0 4s 10.244.0.39 worker01 <none> <none>
nginx-859959cc7f-ffh7v 1/1 Running 0 4s 10.244.1.49 worker02 <none> <none>
nginx-859959cc7f-j2xp7 1/1 Running 0 4s 10.244.0.37 worker01 <none> <none>
nginx-859959cc7f-lht8b 1/1 Running 0 4s 10.244.1.47 worker02 <none> <none>
nginx-859959cc7f-mcth6 1/1 Running 0 17s 10.244.0.36 worker01 <none> <none>
nginx-859959cc7f-sx5ln 1/1 Running 0 4s 10.244.1.48 worker02 <none> <none>
nginx-859959cc7f-tlxk5 1/1 Running 0 4s 10.244.1.46 worker02 <none> <none>
nginx-859959cc7f-xkjtq 1/1 Running 0 4s 10.244.1.45 worker02 <none> <none>
nginx-859959cc7f-xltkz 1/1 Running 0 17s 10.244.1.44 worker02 <none> <none>
Affinity
With node taints, as shown above, we can keep pods off tainted nodes, and with tolerations we can let selected pods onto them. Note, though, that a toleration only permits scheduling onto a tainted node; it does not attract pods to it (hence the mix of worker01 and worker02 in the output above). Combined with the affinity mechanisms below, this lets us dedicate specific nodes to specific pods.
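A sketch of that dedicated-node pattern, combining a taint with a matching label (the key dedicated and its value are assumptions, and the nodeSelector mechanism is explained in the next subsection): taint and label the node, then give the pod both a toleration and a selector.
~ kubectl taint node worker02 dedicated=nginx:NoSchedule
~ kubectl label node worker02 dedicated=nginx
And in the pod template spec:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "nginx"
  effect: "NoSchedule"
nodeSelector:
  dedicated: nginx
With this in place, only pods carrying the toleration can run on worker02, and these pods can only run on nodes labeled dedicated=nginx.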
NodeSelector
Add the following to the pod spec:
nodeSelector:
  kubernetes.io/hostname: worker02
Apply and check the effect:
lab02 kubectl apply -f nginx-deploy-toleration-nodeselector.yaml
deployment.extensions/nginx configured
lab02 kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-564c745dcd-6dtz7 1/1 Running 0 25s 10.244.1.56 worker02 <none> <none>
nginx-564c745dcd-fw8kt 1/1 Running 0 25s 10.244.1.55 worker02 <none> <none>
nginx-564c745dcd-l8k7m 1/1 Running 0 22s 10.244.1.58 worker02 <none> <none>
nginx-564c745dcd-mvqjd 1/1 Running 0 28s 10.244.1.53 worker02 <none> <none>
nginx-564c745dcd-qn6r9 1/1 Running 0 29s 10.244.1.52 worker02 <none> <none>
nginx-564c745dcd-rbl57 1/1 Running 0 27s 10.244.1.54 worker02 <none> <none>
nginx-564c745dcd-s2jbm 1/1 Running 0 19s 10.244.1.59 worker02 <none> <none>
nginx-564c745dcd-wnd45 1/1 Running 0 30s 10.244.1.50 worker02 <none> <none>
nginx-564c745dcd-zlnfg 1/1 Running 0 30s 10.244.1.51 worker02 <none> <none>
nginx-564c745dcd-zm7qx 1/1 Running 0 22s 10.244.1.57 worker02 <none> <none>
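nodeSelector matches against node labels; kubernetes.io/hostname is one of the labels the kubelet sets automatically, but any label works the same way. A sketch with a custom label (disktype=ssd is a made-up example):
~ kubectl label node worker02 disktype=ssd
nodeSelector:
  disktype: ssd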
NodeAffinity
NodeAffinity plays a similar role to NodeSelector, but offers much more expressive and flexible matching semantics.
Please refer to: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
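For illustration, a sketch of a nodeAffinity rule roughly equivalent to the nodeSelector above, plus a soft preference that nodeSelector cannot express (the disktype label is again a made-up example):
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker02
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: disktype
          operator: In
          values:
          - ssd
This goes under the pod spec; required rules must match for a pod to be scheduled, while preferred rules only bias the scheduler's choice.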