Kubernetes Autoscaling in All Scenarios (Part 3): An HPA Hands-On Guide

Preface

In the previous article we introduced HPA's implementation principles and traced the thinking behind its evolution. In this article we explain how to use HPA in practice, along with the details that deserve attention.

autoscaling/v1 in practice

The v1 manifest is probably the one seen most often, and it is also the simplest: the v1 version of HPA supports only a single metric, CPU. Traditionally an autoscaler supports at least both CPU and Memory, so why does Kubernetes expose only CPU here? The earliest HPA was in fact planned to support both metrics, but development and testing showed that memory is not a good scaling signal. Unlike CPU, many memory-heavy applications do not release memory quickly when HPA scales out new containers, because their memory is managed by a language-level VM and is reclaimed only when the VM's GC decides to run. Differences in GC timing could therefore make HPA oscillate at inopportune moments, so the v1 API ended up supporting only the CPU metric.

A standard v1 manifest looks roughly like this:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

Here, scaleTargetRef specifies which object the autoscaler operates on; in this example it is an apps/v1 Deployment. targetCPUUtilizationPercentage means that a scale-out is triggered once overall CPU utilization exceeds 50%.
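As an aside, the same v1 HPA can be created imperatively with kubectl autoscale instead of writing a manifest; a quick sketch using the names and bounds from the template above:

# Create an autoscaling/v1 HPA equivalent to the manifest above:
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

# Inspect the resulting object:
kubectl get hpa php-apache

Next, let's walk through a simple demo.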

  1. Log in to the Container Service console and create an application deployment, choosing creation from a template. The template content is as follows.

      apiVersion: apps/v1beta1
      kind: Deployment
      metadata:
        name: php-apache
        labels:
          app: php-apache
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: php-apache
        template:
          metadata:
            labels:
              app: php-apache
          spec:
            containers:
            - name: php-apache
              image: registry.cn-hangzhou.aliyuncs.com/ringtail/hpa-example:v1.0
              ports:
              - containerPort: 80
              resources:
                requests:
                  memory: "300Mi"
                  cpu: "250m"
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: php-apache
        labels:
          app: php-apache
      spec:
        selector:
          app: php-apache
        ports:
        - protocol: TCP
          name: http
          port: 80
          targetPort: 80
        type: ClusterIP
  2. Deploy the HPA template.

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: php-apache
      namespace: default
    spec:
      scaleTargetRef:
        apiVersion: apps/v1beta1
        kind: Deployment
        name: php-apache
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
  3. Start the load test.
   apiVersion: apps/v1beta1
   kind: Deployment
   metadata:
     name: load-generator 
     labels:
       app: load-generator
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: load-generator
     template:
       metadata:
         labels:
           app: load-generator
       spec:
         containers:
         - name: load-generator
           image: busybox 
           command:
             - "sh"
             - "-c"
             - "while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done"
  4. Check the scale-out status (see the command sketch after this list).
  5. Stop the load-test application.
  6. Check the scale-in status.
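The console screenshots for these steps are not reproduced here, but the same checks can be done from the command line. A minimal sketch, using the object names from this demo:

# Watch the HPA react to the load (TARGETS should climb past 50%):
kubectl get hpa php-apache -w

# Inspect the scaling events and the current/desired replica counts:
kubectl describe hpa php-apache

# After the load generator is deleted, the replica count falls back
# toward minReplicas once the scale-down window has passed:
kubectl delete deployment load-generator
kubectl get deployment php-apache -w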

With that, an HPA based on autoscaling/v1 is complete. This version is by far the simplest: it works whether or not the cluster has been upgraded to Metrics Server.

autoscaling/v2beta1 in practice

Earlier in this series we also covered the autoscaling/v2beta1 and autoscaling/v2beta2 versions of HPA. The difference between them is that autoscaling/v2beta1 supports Resource Metrics and Custom Metrics, while autoscaling/v2beta2 additionally adds support for External Metrics. We will not dwell on External Metrics in this article, because there are few mature implementations of it in the community so far; the most mature option today is Prometheus-based Custom Metrics.

[Figure: how HPA consumes the different metric types once Metrics Server is enabled]

The figure above shows how HPA uses the different types of Metrics once Metrics Server is enabled. To use Custom Metrics, the corresponding Custom Metrics Adapter must also be installed and configured. In this article we mainly walk through an example of autoscaling on QPS.

  1. Install Metrics Server and enable it in kube-controller-manager.
    At present, Alibaba Cloud Container Service Kubernetes clusters still use Heapster by default; Container Service plans to switch to Metrics Server in 1.12. One point deserves special mention here: although the community has gradually begun to deprecate Heapster, a large number of components still depend heavily on Heapster's API, so Alibaba Cloud made its Metrics Server fully compatible with Heapster. Developers can use the new Metrics Server features without worrying about breaking other components.

Before deploying the new Metrics Server, we first back up some of Heapster's startup parameters, because they will shortly be reused in the Metrics Server template. The two --sink flags are the ones that matter: keep the first sink if you need InfluxDB, and keep the second if you need the CloudMonitor integration.
[Screenshot: the two --sink startup parameters in the Heapster manifest]
Copy these two parameters into the Metrics Server startup template; this example keeps both for compatibility. Then deploy it:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: 443
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: admin
      containers:
      - name: metrics-server
        image: registry.cn-hangzhou.aliyuncs.com/ringtail/metrics-server:1.1
        imagePullPolicy: Always
        command:
        - /metrics-server
        - '--source=kubernetes:https://kubernetes.default'
        - '--sink=influxdb:http://monitoring-influxdb:8086'
        - '--sink=socket:tcp://monitor.csk.[region_id].aliyuncs.com:8093?clusterId=[cluster_id]&public=true'
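Assuming the manifest above is saved locally as metrics-server.yaml (the file name is arbitrary), deploy it and confirm the Pod starts:

kubectl apply -f metrics-server.yaml
kubectl -n kube-system get pods -l k8s-app=metrics-server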

Next, we modify the Heapster Service so that the service's backend points to Metrics Server instead of Heapster.
If the node page in the console still shows the monitoring data on the right-hand side at this point, Metrics Server is fully compatible with Heapster.
Now run kubectl get apiservice; if the registered v1beta1.metrics.k8s.io API appears in the list, the registration has succeeded.
Next we switch the source of the Metrics data in kube-controller-manager. kube-controller-manager is deployed on every master and is handed to the kubelet as a Static Pod, so it is enough to modify its configuration file; the kubelet will pick up the change automatically. On the host, the file lives at /etc/kubernetes/manifests/kube-controller-manager.yaml.
Set --horizontal-pod-autoscaler-use-rest-clients=true. One caveat: if you edit the file in place with vim, vim creates a temporary swap file in the same directory that can interfere with the result, so the recommended approach is to move the file to another directory, modify it there, and move it back, as sketched below. At this point Metrics Server is ready to serve HPA; next we take care of the custom-metrics part.
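A minimal sketch of the checks and the safe-edit procedure described above (the path and flag name are the ones given earlier; adjust to your environment):

# Confirm the Resource Metrics API is registered and serving data:
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes

# Edit the static-Pod manifest outside the watched directory, so that
# the editor's temporary files do not confuse the kubelet:
mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/
vim /tmp/kube-controller-manager.yaml   # add --horizontal-pod-autoscaler-use-rest-clients=true
mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/

# The kubelet restarts kube-controller-manager automatically; verify:
kubectl -n kube-system get pods | grep kube-controller-manager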

  2. Deploy the Custom Metrics Adapter.
    If Prometheus is not yet deployed in the cluster, refer to "Alibaba Cloud Container Kubernetes Monitoring (7): Deploying a Prometheus Monitoring Solution" and deploy Prometheus first. Then deploy the Custom Metrics Adapter:
kind: Namespace
apiVersion: v1
metadata:
  name: custom-metrics
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-metrics-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-resource-reader
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics-apiserver-resource-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-resource-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-getter
rules:
- apiGroups:
  - custom.metrics.k8s.io
  resources:
  - "*"
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hpa-custom-metrics-getter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-getter
subjects:
- kind: ServiceAccount
  name: horizontal-pod-autoscaler
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-metrics-apiserver
  namespace: custom-metrics
  labels:
    app: custom-metrics-apiserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-metrics-apiserver
  template:
    metadata:
      labels:
        app: custom-metrics-apiserver
    spec:
      tolerations:
      - key: beta.kubernetes.io/arch
        value: arm
        effect: NoSchedule
      - key: beta.kubernetes.io/arch
        value: arm64
        effect: NoSchedule
      serviceAccountName: custom-metrics-apiserver
      containers:
      - name: custom-metrics-server
        image: luxas/k8s-prometheus-adapter:v0.2.0-beta.0
        args:
        - --prometheus-url=http://prometheus-k8s.monitoring.svc:9090
        - --metrics-relist-interval=30s
        - --rate-interval=60s
        - --v=10
        - --logtostderr=true
        ports:
        - containerPort: 443
        securityContext:
          runAsUser: 0
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: custom-metrics
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: custom-metrics-apiserver
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  service:
    name: api
    namespace: custom-metrics
  version: v1beta1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-server-resources
rules:
- apiGroups:
  - custom.metrics.k8s.io
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hpa-controller-custom-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-server-resources
subjects:
- kind: ServiceAccount
  name: horizontal-pod-autoscaler
  namespace: kube-system
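Once the adapter Pod is running, the custom-metrics API group should become queryable. A quick check (jq is optional and only pretty-prints the JSON):

# The APIService should report Available=True:
kubectl get apiservice v1beta1.custom.metrics.k8s.io

# List every custom metric the adapter currently exposes:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .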

3. Deploy the application under test together with its HPA template

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sample-metrics-app
  name: sample-metrics-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-metrics-app
  template:
    metadata:
      labels:
        app: sample-metrics-app
    spec:
      tolerations:
      - key: beta.kubernetes.io/arch
        value: arm
        effect: NoSchedule
      - key: beta.kubernetes.io/arch
        value: arm64
        effect: NoSchedule
      - key: node.alpha.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 0
      - key: node.alpha.kubernetes.io/notReady
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 0
      containers:
      - image: luxas/autoscale-demo:v0.1.2
        name: sample-metrics-app
        ports:
        - name: web
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: sample-metrics-app
  labels:
    app: sample-metrics-app
spec:
  ports:
  - name: web
    port: 80
    targetPort: 8080
  selector:
    app: sample-metrics-app
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-metrics-app
  labels:
    service-monitor: sample-metrics-app
spec:
  selector:
    matchLabels:
      app: sample-metrics-app
  endpoints:
  - port: web
---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: sample-metrics-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-metrics-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: sample-metrics-app
      metricName: http_requests
      targetValue: 100
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-metrics-app
  namespace: default
  annotations:
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /sample-app
        backend:
          serviceName: sample-metrics-app
          servicePort: 80

This application under test exposes a Prometheus endpoint whose data looks like the output below. The http_requests_total counter is the basis of the custom metric we scale on: the adapter turns cumulative counters ending in _total into per-second rate metrics, which is why the HPA above references http_requests rather than http_requests_total.

[root@iZwz99zrzfnfq8wllk0dvcZ manifests]# curl 172.16.1.160:8080/metrics
# HELP http_requests_total The amount of requests served by the server in total
# TYPE http_requests_total counter
http_requests_total 3955684
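Once Prometheus has scraped this endpoint (via the ServiceMonitor above) and the adapter has relisted its metrics, the value can be read back through the custom-metrics API. A sketch of such a query, using the Service name defined above:

# Read the per-second request rate that the HPA acts on:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/sample-metrics-app/http_requests" | jq .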

4. Deploy the load-test application

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: load-generator 
  labels:
    app: load-generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: load-generator
  template:
    metadata:
      labels:
        app: load-generator
    spec:
      containers:
      - name: load-generator
        image: busybox 
        command:
          - "sh"
          - "-c"
          - "while true; do wget -q -O- http://sample-metrics-app.default.svc.cluster.local; done"

5. Check the HPA status and scaling behavior. After waiting a few minutes, the Pods have scaled out successfully.

$ kubectl get hpa
NAME                     REFERENCE                       TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
php-apache               Deployment/php-apache           0%/50%        1         10        1          21d
sample-metrics-app-hpa   Deployment/sample-metrics-app   538133m/100   2         10        10         15h
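A note on the TARGETS column: custom-metric values are printed as Kubernetes quantities, so 538133m means 538.133 requests per second against a target of 100, which is why this HPA has scaled all the way to its maxReplicas of 10. The individual scaling decisions are recorded as events:

# Show the events behind each scale-out / scale-in decision:
kubectl describe hpa sample-metrics-app-hpa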

Final thoughts

This article was mainly meant to give you an intuitive understanding of autoscaling/v1 and autoscaling/v2beta1 and a high-level view of how to operate them. We will not elaborate further on autoscaling/v1. Developers who want to use autoscaling/v2beta1 with Custom Metrics support may find the overall workflow too complex to grasp easily; in the next article we will explain the many details of using Custom Metrics with autoscaling/v2beta1, to help you understand the underlying principles and design ideas more deeply.
