Using AKO to Provide LoadBalancer for TKG

This article documents the process of configuring AKO (Avi Kubernetes Operator) to provide external LoadBalancer services for TKG.

For an introduction to TKG, see: Introduction to Tanzu Kubernetes Grid
For installing and configuring TKG, see: Installing Tanzu Kubernetes Grid
Basic Tanzu Kubernetes Grid Operations

Environment

Item                        Content          Remark
VMware ESXi                 7.0.1            17325551
vCenter Server              7.0.1.00300      17491101
NSX-T                       3.1.1            3.1.1.0.0.17483185
Tanzu Kubernetes Grid CLI   1.2.1            tkg-linux-amd64-v1.2.1+vmware.1
kubectl cluster cli         v1.19.3          kubectl-linux-v1.19.3
Client OS                   CentOS 7         CentOS-7-x86_64-DVD-2009.iso
AVI Controller              20.1.3           AKO 1.3.3

Installation Process

AVI Configuration Preparation

The AVI environment reuses the one set up in the earlier article: Implementing VMware Horizon + Load Balancing (AVI).

IP Subnet Configuration

The address range used here is 172.20.0.0/24; pre-configure this subnet as follows:
Navigate to Infrastructure -> Networks -> Default-Cloud and edit the 172.20.0.0/24 network.
Add a Subnet.
Navigate to Templates -> Profiles -> IPAM/DNS Profiles, edit the existing IPAM-VMLAB profile, and add the 172.20.0.0/24 network under Default-Cloud.

Create a New SE Group

To keep things separate from the earlier environment, configure a new SE Group: navigate to Infrastructure -> Service Engine Group -> CREATE.

TKG Cluster Configuration

Parameter Modification

  • The TKG management cluster and a TKG workload cluster are already installed:
[root@hop-172 ~]# kubectl config get-contexts
CURRENT   NAME                                    CLUSTER           AUTHINFO                NAMESPACE
          mgmt-admin@mgmt                         mgmt              mgmt-admin              
          tanzu-cluster-admin@tanzu-cluster       tanzu-cluster     tanzu-cluster-admin     
*         tanzu20-cluster-admin@tanzu20-cluster   tanzu20-cluster   tanzu20-cluster-admin   
  • Helm is already installed on the CentOS jump host
  • Add the AKO repository to the helm CLI:
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
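
With the repository added, it can help to refresh the local chart index and make sure kubectl is pointed at the workload cluster before going further. A minimal sketch, assuming the tanzu20-cluster context listed above:

# switch kubectl to the workload cluster that AKO will be installed into
kubectl config use-context tanzu20-cluster-admin@tanzu20-cluster

# refresh the local Helm chart index after adding the ako repo
helm repo update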

Search the charts for AKO:

[root@hop-172 ~]# helm search repo
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                             
ako/ako                 1.3.3           1.3.3           A helm chart for Avi Kubernetes Operator
ako/ako-operator        1.3.1           1.3.1           A Helm chart for Kubernetes AKO Operator
ako/amko                1.3.1           1.3.1           A helm chart for Avi Kubernetes Operator
  • Fetch the default values with the command below and edit them to match the AVI environment.
helm show values ako/ako --version 1.3.3 > values.yaml
  • Values to modify:
Cluster name:
AKOSettings:
  logLevel: WARN   #enum: INFO|DEBUG|WARN|ERROR
  fullSyncFrequency: '1800' # This frequency controls how often AKO polls the Avi controller to update itself with cloud configurations.
  apiServerPort: 8080 # Internal port for AKO's API server for the liveness probe of the AKO pod default=8080
  deleteConfig: 'false' # Has to be set to true in configmap if user wants to delete AKO created objects from AVI 
  disableStaticRouteSync: 'false' # If the POD networks are reachable from the Avi SE, set this knob to true.
  clusterName: 'tanzu20-cluster'   # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller. // MUST-EDIT

Network parameters:

NetworkSettings:
  ## This list of network and cidrs are used in pool placement network for vcenter cloud.
  ## Node Network details are not needed when in nodeport mode / static routes are disabled / non vcenter clouds.
  nodeNetworkList: []
  subnetIP: '172.20.0.0' # Subnet IP of the vip network
  subnetPrefix: '24' # Subnet Prefix of the vip network
  networkName: 'ls-172.20.0' # Network Name of the vip network
  enableRHI: false # This is a cluster wide setting for BGP peering.
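
The nodeNetworkList is left empty above. For reference, when it does need to be filled in (a vCenter cloud with static route sync in use), the expected shape is a list of network names with their CIDRs. A minimal sketch with hypothetical names:

NetworkSettings:
  nodeNetworkList:
    - networkName: 'ls-tkg-nodes'   # hypothetical port group of the TKG node VMs
      cidrs:
        - 10.10.10.0/24             # hypothetical node subnet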

AVI parameters:

ControllerSettings:
  serviceEngineGroupName: se-group   # Name of the ServiceEngine Group.
  controllerVersion: 20.1.2   # The controller API version
  cloudName: Default-Cloud   # The configured cloud name on the Avi controller.
  controllerHost: '10.105.130.55' # IP address or Hostname of Avi Controller
  tenantsPerCluster: 'false' # If set to true, AKO will map each kubernetes cluster uniquely to a tenant in Avi
  tenantName: admin   # Name of the tenant where all the AKO objects will be created in AVI. // Required only if tenantsPerCluster is set to True

Note: the AVI Controller version in this environment is 20.1.3, but in testing the installation did not go through with controllerVersion set to 20.1.3; changing it to 20.1.2 succeeded.
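
Besides the settings above, the chart's values.yaml also carries the Avi controller login that AKO uses to create objects, in an avicredentials block along these lines (the password below is a placeholder):

avicredentials:
  username: 'admin'
  password: '<avi-controller-password>'   # placeholder, replace with the real controller password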

AKO Installation

Install AKO using the values.yaml file modified above.
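
Helm installs the release into the avi-system namespace, which has to exist beforehand; if it is not there yet, create it first:

kubectl create namespace avi-system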

[root@hop-172 ~]# helm install  ako/ako  --generate-name --version 1.3.3 -f values.yaml  -n avi-system
NAME: ako-1613991974
LAST DEPLOYED: Mon Feb 22 11:06:15 2021
NAMESPACE: avi-system
STATUS: deployed
REVISION: 1

Check the pod status:

[root@hop-172 ~]# kubectl get po -n avi-system 
NAME    READY   STATUS    RESTARTS   AGE
ako-0   1/1     Running   0          11s
[root@hop-172 ~]# helm list -n avi-system
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
ako-1613991974  avi-system      1               2021-02-22 11:06:15.446844648 +0000 UTC deployed        ako-1.3.3       1.3.3    
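
To confirm that AKO actually reached the controller and synced its configuration, the pod logs can be checked as well, for example:

kubectl logs ako-0 -n avi-system | tail -n 20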

Verification

Create a Deployment on the cluster from a YAML manifest, with the Service type set to LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: I just deployed Web Service via AVI for pod Cluster!!
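
Apply the manifest on the workload cluster (hello-kubernetes.yaml is just an assumed file name for the YAML above):

kubectl apply -f hello-kubernetes.yaml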

Check the Service:

[root@hop-172 ~]# kubectl get svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
hello-kubernetes   LoadBalancer   111.69.112.27   172.20.0.100   80:30321/TCP   85m
kubernetes         ClusterIP      111.64.0.1      <none>         443/TCP        114m

The external IP shown here, 172.20.0.100, is one of the addresses from the 172.20.0.100-172.20.0.149 pool configured for the 172.20.0.0/24 network in AVI.
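
The service can also be checked directly from the jump host by hitting that external IP, for example:

curl -s http://172.20.0.100 | head -n 20
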
View it in AVI:
Experience the result:

References:
avi-helm-charts
That's all.
