The earlier post "Kubernetes Ingress 高可靠部署最佳实践" (best practices for a highly reliable Ingress deployment) described how to deploy a highly reliable Ingress access layer in a Kubernetes cluster by editing YAML directly. This post shows how to use Helm on Alibaba Cloud Container Service to quickly deploy and update the Ingress Controller component to fit your own business scenario.
Prerequisites
- Create a Kubernetes cluster from the Alibaba Cloud Container Service console and configure your local kubectl to connect to the remote cluster; see here for the setup steps.
- Install and configure the Helm client; see here for details.
- Add the Alibaba Cloud Ingress Controller Helm repo:
$ helm repo add aliyun-stable https://acs-k8s-ingress.oss-cn-hangzhou.aliyuncs.com/charts
"aliyun-stable" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "aliyun-stable" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Multiple deployment forms
Because business scenarios vary, a single-node Ingress Controller is deployed by default when a Kubernetes cluster is created. With Helm, we can switch to whichever deployment form fits the specific scenario.
Note: the cluster's default Ingress Controller is deployed via YAML, so it cannot be updated directly through Helm. You can force the update with the --force flag, but doing so recreates a new Ingress SLB instance.
Deployment with multiple replicas
Cluster initialization deploys a single-replica Ingress Controller Deployment by default. We can switch to multiple replicas as follows:
$ helm upgrade --namespace kube-system --install aliyun-ingress-controller aliyun-stable/nginx-ingress --set controller.kind=Deployment --set controller.replicaCount=2
$ kubectl -n kube-system get pod | grep nginx-ingress-controller
nginx-ingress-controller-786cc55966-cjt8r 1/1 Running 0 7m
nginx-ingress-controller-786cc55966-w8fdm 1/1 Running 0 7m
Note: adjust the parameter values to match your business requirements.
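The same `--set` flags can also be kept in a values file, which is easier to review and reuse than a long command line. A minimal sketch (the file name `values.yaml` is just a convention; the keys mirror the flags above):

```yaml
# values.yaml -- multi-replica settings equivalent to the --set flags above
controller:
  kind: Deployment
  replicaCount: 2
```

Apply it with `helm upgrade --namespace kube-system --install -f values.yaml aliyun-ingress-controller aliyun-stable/nginx-ingress`.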
Deployment with HPA
We can also pair the Ingress Controller with an HPA so that it scales in and out dynamically based on load:
$ helm upgrade --namespace kube-system --install aliyun-ingress-controller aliyun-stable/nginx-ingress --set controller.kind=Deployment --set controller.autoscaling.enabled=true --set controller.autoscaling.minReplicas=1 --set controller.autoscaling.maxReplicas=3 --set controller.autoscaling.targetCPUUtilizationPercentage=80 --set controller.resources.requests.cpu=500m --set controller.resources.limits.cpu=1000m
$ kubectl -n kube-system get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-ingress-controller Deployment/nginx-ingress-controller 0% / 80% 1 3 1 2m
$ kubectl -n kube-system get pod | grep nginx-ingress-controller
nginx-ingress-controller-648cdccbf-prvdq 1/1 Running 0 3m
Note: adjust the parameter values to match your business requirements.
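Expressed as a values file, the HPA settings above look like the sketch below. The CPU request is included deliberately: the HPA computes utilization as a percentage of the pod's requested CPU, so autoscaling on CPU does not work without it.

```yaml
# values.yaml -- HPA settings equivalent to the --set flags above (sketch)
controller:
  kind: Deployment
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 3
    targetCPUUtilizationPercentage: 80
  resources:
    requests:
      cpu: 500m    # baseline the HPA's 80% target is measured against
    limits:
      cpu: 1000m
```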
DaemonSet deployment
Likewise, we can update the Ingress Controller to run as a DaemonSet:
$ helm upgrade --namespace kube-system --install aliyun-ingress-controller aliyun-stable/nginx-ingress --set controller.kind=DaemonSet
$ kubectl -n kube-system get pod | grep nginx-ingress-controller
nginx-ingress-controller-2j6qj 1/1 Running 0 36s
nginx-ingress-controller-8mw2x 1/1 Running 0 36s
nginx-ingress-controller-zhd4q 1/1 Running 0 36s
$ kubectl -n kube-system get ds | grep nginx-ingress-controller
nginx-ingress-controller 3 3 3 3 3 <none> 46s
Note: the test cluster has 3 worker nodes.
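A DaemonSet runs one controller pod on every schedulable worker node. If you only want it on a subset of nodes, the chart's `controller.nodeSelector` parameter (see the parameter table below) can restrict placement. A sketch, where the `ingress: "true"` node label is a hypothetical example you would first apply with `kubectl label node`:

```yaml
# values.yaml -- DaemonSet form, limited to labeled nodes (sketch)
controller:
  kind: DaemonSet
  nodeSelector:
    ingress: "true"   # hypothetical label; apply it to the target nodes first
```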
Enabling monitoring
Helm also makes it easy to enable the Nginx VTS monitoring module and export Prometheus metrics:
$ helm upgrade --namespace kube-system --install aliyun-ingress-controller aliyun-stable/nginx-ingress --set controller.stats.enabled=true --set controller.metrics.enabled=true
$ kubectl -n kube-system get pod | grep nginx-ingress-controller
nginx-ingress-controller-5fddb6b599-zw2qh 1/1 Running 0 1m
$ kubectl -n kube-system exec -it nginx-ingress-controller-5fddb6b599-zw2qh -- cat /etc/nginx/nginx.conf | grep vhost_traffic_status_display
vhost_traffic_status_display;
vhost_traffic_status_display_format html;
Note: this test uses the default single-node deployment form.
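With `controller.metrics.enabled=true`, the chart creates a metrics service on port 9913 (see the parameter table below). A minimal Prometheus scrape job against that service could look like the following sketch; the service name and namespace here are assumptions based on the release above, so check the actual service with `kubectl -n kube-system get svc` before using them:

```yaml
# prometheus.yml fragment -- sketch; target service name/namespace are assumptions
scrape_configs:
  - job_name: nginx-ingress
    static_configs:
      - targets:
          # verify the real service name with: kubectl -n kube-system get svc
          - aliyun-ingress-controller-nginx-ingress-metrics.kube-system.svc:9913
```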
Parameter reference
Deploying the Ingress Controller through Helm supports many other configuration parameters, so we can flexibly configure and update the Ingress Controller for our own business scenario.
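As a worked example, several of the parameters from the table below can be combined in a single values file and applied with `-f`, instead of chaining many `--set` flags. This is a sketch of one plausible combination, not a recommended production configuration:

```yaml
# values.yaml -- combining several chart parameters from the table (sketch)
controller:
  replicaCount: 2
  config:
    proxy-body-size: "20m"       # nginx ConfigMap entry
  service:
    externalTrafficPolicy: Local # preserve client source IPs
  stats:
    enabled: true                # VTS status page
  metrics:
    enabled: true                # Prometheus metrics (requires stats.enabled)
```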
| Parameter | Description | Default |
|---|---|---|
| `controller.name` | name of the controller component | `controller` |
| `controller.image.repository` | controller container image repository | `registry.cn-hangzhou.aliyuncs.com/acs/aliyun-ingress-controller` |
| `controller.image.tag` | controller container image tag | `0.12.0-2` |
| `controller.image.pullPolicy` | controller container image pull policy | `IfNotPresent` |
| `controller.config` | nginx ConfigMap entries | `{proxy-body-size: "20m"}` |
| `controller.hostNetwork` | If the nginx deployment / daemonset should run on the host's network namespace. Do not set this when `controller.service.externalIPs` is set and kube-proxy is used as there will be a port-conflict for port 80 | `false` |
| `controller.defaultBackendService` | default 404 backend service; required only if `defaultBackend.enabled = false` | `""` |
| `controller.electionID` | election ID to use for the status update | `ingress-controller-leader` |
| `controller.extraEnvs` | any additional environment variables to set in the pods | `{}` |
| `controller.extraContainers` | Sidecar containers to add to the controller pod. See LemonLDAP::NG controller as example | `{}` |
| `controller.extraVolumeMounts` | Additional volumeMounts to the controller main container | `{}` |
| `controller.extraVolumes` | Additional volumes to the controller pod | `{}` |
| `controller.ingressClass` | name of the ingress class to route through this controller | `nginx` |
| `controller.scope.enabled` | limit the scope of the ingress controller | `false` (watch all namespaces) |
| `controller.scope.namespace` | namespace to watch for ingress | `""` (use the release namespace) |
| `controller.extraArgs` | Additional controller container arguments | `{v: "2", annotations-prefix: "nginx.ingress.kubernetes.io"}` |
| `controller.kind` | install as Deployment or DaemonSet | `Deployment` |
| `controller.tolerations` | node taints to tolerate (requires Kubernetes >=1.6) | `[]` |
| `controller.affinity` | node/pod affinities (requires Kubernetes >=1.6) | `{}` |
| `controller.minReadySeconds` | how many seconds a pod needs to be ready before killing the next, during update | `0` |
| `controller.nodeSelector` | node labels for pod assignment | `{}` |
| `controller.podAnnotations` | annotations to be added to pods | `{}` |
| `controller.replicaCount` | desired number of controller pods | `1` |
| `controller.minAvailable` | minimum number of available controller pods for PodDisruptionBudget | `1` |
| `controller.resources` | controller pod resource requests & limits | `{}` |
| `controller.lifecycle` | controller pod lifecycle hooks | `{}` |
| `controller.service.annotations` | annotations for controller service | `{}` |
| `controller.service.labels` | labels for controller service | `{}` |
| `controller.publishService.enabled` | if true, the controller will set the endpoint records on the ingress objects to reflect those on the service | `true` |
| `controller.publishService.pathOverride` | override of the default publish-service name | `""` |
| `controller.service.clusterIP` | internal controller cluster service IP | `""` |
| `controller.service.externalIPs` | controller service external IP addresses. Do not set this when `controller.hostNetwork` is set to `true` and kube-proxy is used as there will be a port-conflict for port 80 | `[]` |
| `controller.service.externalTrafficPolicy` | If `controller.service.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation | `"Cluster"` |
| `controller.service.healthCheckNodePort` | If `controller.service.type` is `NodePort` or `LoadBalancer` and `controller.service.externalTrafficPolicy` is set to `Local`, set this to the managed health-check port the kube-proxy will expose. If blank, a random port in the NodePort range will be assigned | `""` |
| `controller.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `controller.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]` |
| `controller.service.targetPorts.http` | Sets the targetPort that maps to the Ingress' port 80 | `80` |
| `controller.service.targetPorts.https` | Sets the targetPort that maps to the Ingress' port 443 | `443` |
| `controller.service.type` | type of controller service to create | `LoadBalancer` |
| `controller.service.nodePorts.http` | If `controller.service.type` is `NodePort` and this is non-empty, it sets the nodePort that maps to the Ingress' port 80 | `""` |
| `controller.service.nodePorts.https` | If `controller.service.type` is `NodePort` and this is non-empty, it sets the nodePort that maps to the Ingress' port 443 | `""` |
| `controller.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `10` |
| `controller.livenessProbe.periodSeconds` | How often to perform the probe | `10` |
| `controller.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
| `controller.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `controller.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `3` |
| `controller.livenessProbe.port` | The port number that the liveness probe will listen on | `10254` |
| `controller.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `10` |
| `controller.readinessProbe.periodSeconds` | How often to perform the probe | `10` |
| `controller.readinessProbe.timeoutSeconds` | When the probe times out | `1` |
| `controller.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `controller.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `3` |
| `controller.readinessProbe.port` | The port number that the readiness probe will listen on | `10254` |
| `controller.stats.enabled` | if `true`, enable "vts-status" page | `false` |
| `controller.stats.service.annotations` | annotations for controller stats service | `{}` |
| `controller.stats.service.clusterIP` | internal controller stats cluster service IP | `""` |
| `controller.stats.service.externalIPs` | controller stats service external IP addresses | `[]` |
| `controller.stats.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `controller.stats.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]` |
| `controller.stats.service.type` | type of controller stats service to create | `ClusterIP` |
| `controller.metrics.enabled` | if `true`, enable Prometheus metrics (`controller.stats.enabled` must be `true` as well) | `false` |
| `controller.metrics.service.annotations` | annotations for Prometheus metrics service | `{}` |
| `controller.metrics.service.clusterIP` | cluster IP address to assign to service | `""` |
| `controller.metrics.service.externalIPs` | Prometheus metrics service external IP addresses | `[]` |
| `controller.metrics.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `controller.metrics.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]` |
| `controller.metrics.service.servicePort` | Prometheus metrics service port | `9913` |
| `controller.metrics.service.targetPort` | Prometheus metrics target port | `10254` |
| `controller.metrics.service.type` | type of Prometheus metrics service to create | `ClusterIP` |
| `controller.customTemplate.configMapName` | configMap containing a custom nginx template | `""` |
| `controller.customTemplate.configMapKey` | configMap key containing the nginx template | `""` |
| `controller.headers` | configMap key:value pairs containing the custom headers for Nginx | `{}` |
| `controller.updateStrategy` | allows setting of RollingUpdate strategy | `{}` |
| `defaultBackend.name` | name of the default backend component | `default-backend` |
| `defaultBackend.image.repository` | default backend container image repository | `registry.cn-hangzhou.aliyuncs.com/acs/defaultbackend` |
| `defaultBackend.image.tag` | default backend container image tag | `1.4` |
| `defaultBackend.image.pullPolicy` | default backend container image pull policy | `IfNotPresent` |
| `defaultBackend.extraArgs` | Additional default backend container arguments | `{}` |
| `defaultBackend.tolerations` | node taints to tolerate (requires Kubernetes >=1.6) | `[]` |
| `defaultBackend.affinity` | node/pod affinities (requires Kubernetes >=1.6) | `{}` |
| `defaultBackend.nodeSelector` | node labels for pod assignment | `{}` |
| `defaultBackend.podAnnotations` | annotations to be added to pods | `{}` |
| `defaultBackend.replicaCount` | desired number of default backend pods | `1` |
| `defaultBackend.minAvailable` | minimum number of available default backend pods for PodDisruptionBudget | `1` |
| `defaultBackend.resources` | default backend pod resource requests & limits | `{}` |
| `defaultBackend.service.annotations` | annotations for default backend service | `{}` |
| `defaultBackend.service.clusterIP` | internal default backend cluster service IP | `""` |
| `defaultBackend.service.externalIPs` | default backend service external IP addresses | `[]` |
| `defaultBackend.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `defaultBackend.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]` |
| `defaultBackend.service.type` | type of default backend service to create | `ClusterIP` |
| `imagePullSecrets` | name of Secret resource containing private registry credentials | `nil` |
| `rbac.create` | If true, create & use RBAC resources | `false` |
| `rbac.serviceAccountName` | ServiceAccount to be used (ignored if `rbac.create=true`) | `default` |
| `revisionHistoryLimit` | The number of old revisions to retain to allow rollback | `10` |
| `tcp` | TCP service key:value pairs | `{}` |
| `udp` | UDP service key:value pairs | `{}` |