Antrea - Installation and Topology

This lab validates the installation of Antrea on a vanilla Kubernetes cluster and studies the resulting network topology.

Test Environment

Antrea Installation

Download: https://github.com/antrea-io/antrea/releases/tag/v0.13.3; the installation uses the antrea.yml manifest from that release page.
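
A minimal install sketch, assuming the antrea.yml asset is applied straight from the GitHub release download URL (verify the URL against the release page before applying):

[root@master-01 ~]# kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v0.13.3/antrea.yml
[root@master-01 ~]# kubectl -n kube-system rollout status daemonset/antrea-agent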

[root@master-01 ~]# kubectl exec -it antrea-controller-787747dbb7-fck8l  -n kube-system -- antctl version
antctlVersion: v0.13.3
controllerVersion: v0.13.3
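
The agent side can be checked the same way from one of the antrea-agent Pods (the Pod name below is taken from the Pod listing further down in this article):

[root@master-01 ~]# kubectl exec -it antrea-agent-bj42h -n kube-system -c antrea-agent -- antctl version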

Kubernetes Environment Integration

Cluster

[root@master-01 ~]# kubectl get node -owide
NAME        STATUS   ROLES                  AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master-01   Ready    control-plane,master   4d15h   v1.21.2   192.168.110.61   <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://20.10.7
worker-01   Ready    <none>                 4d15h   v1.21.2   192.168.110.66   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64        docker://20.10.7
worker-02   Ready    <none>                 4d15h   v1.21.2   192.168.110.67   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64        docker://20.10.7

| Name | Role | IP Addr |
| --- | --- | --- |
| master-01 | control-plane,master | 192.168.110.61 |
| worker-01 | worker | 192.168.110.66 |
| worker-02 | worker | 192.168.110.67 |

Worker-01

[root@worker-01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b2:35:31 brd ff:ff:ff:ff:ff:ff
    inet 192.168.110.66/24 brd 192.168.110.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::f343:5255:f35a:fa51/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::bed0:b654:1f36:9ded/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::2126:395:4153:6fe4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:5a:d2:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:5a:d2:28 brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:87:6a:8a:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 3e:60:c2:54:9f:de brd ff:ff:ff:ff:ff:ff
7: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether aa:ce:13:21:f1:bc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a8ce:13ff:fe21:f1bc/64 scope link
       valid_lft forever preferred_lft forever
8: antrea-gw0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 36:ab:6b:31:0c:75 brd ff:ff:ff:ff:ff:ff
    inet 10.211.1.1/24 brd 10.211.1.255 scope global antrea-gw0
       valid_lft forever preferred_lft forever
    inet6 fe80::34ab:6bff:fe31:c75/64 scope link
       valid_lft forever preferred_lft forever
11: coredns--c4f3d4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether de:72:46:4d:6b:85 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::dc72:46ff:fe4d:6b85/64 scope link
       valid_lft forever preferred_lft forever

Worker-02

[root@worker-02 harbor.corp.tanzu]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b2:46:58 brd ff:ff:ff:ff:ff:ff
    inet 192.168.110.67/24 brd 192.168.110.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::bed0:b654:1f36:9ded/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::f343:5255:f35a:fa51/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:5a:d2:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:5a:d2:28 brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:72:d7:4a:a4 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ea:d9:69:03:42:6e brd ff:ff:ff:ff:ff:ff
8: antrea-gw0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 12:c8:e9:86:7e:7b brd ff:ff:ff:ff:ff:ff
    inet 10.211.2.1/24 brd 10.211.2.255 scope global antrea-gw0
       valid_lft forever preferred_lft forever
    inet6 fe80::10c8:e9ff:fe86:7e7b/64 scope link
       valid_lft forever preferred_lft forever
26: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 6e:54:7d:ca:22:8b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6c54:7dff:feca:228b/64 scope link
       valid_lft forever preferred_lft forever
28: ako-0-d03cf7@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether e6:5b:9f:f6:6e:11 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::e45b:9fff:fef6:6e11/64 scope link
       valid_lft forever preferred_lft forever
29: coredns--c35d50@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
    link/ether 76:b3:51:a3:58:5b brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::74b3:51ff:fea3:585b/64 scope link
       valid_lft forever preferred_lft forever

From the interface listings on worker-01 and worker-02, we can see the antrea-gw0 gateway and the tunnel interface (genev_sys_6081, i.e. antrea-tun0) on each Node:

| HostName | Interface | IP Addr | MAC |
| --- | --- | --- | --- |
| worker-01 | genev_sys_6081 | - | aa:ce:13:21:f1:bc |
| worker-02 | genev_sys_6081 | - | 6e:54:7d:ca:22:8b |
| worker-01 | antrea-gw0 | 10.211.1.1/24 | 36:ab:6b:31:0c:75 |
| worker-02 | antrea-gw0 | 10.211.2.1/24 | 12:c8:e9:86:7e:7b |
  • genev_sys_6081 is the actual tunnel interface (antrea-tun0 in OVS) used for Antrea's overlay (Geneve) encapsulation between worker nodes; it is attached to ovs-system as its master interface (verified with the commands below).
  • ovs-system is the OVS datapath's internal port; it carries no IP address and plays no Layer 3 role here.
  • antrea-gw0 is the default gateway interface for all Pods on the worker node.
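
These roles can be confirmed from the node itself; a quick sketch using standard iproute2 (the -d flag prints driver details) and the agent's OVS container:

[root@worker-01 ~]# ip -d link show genev_sys_6081     # should show a Geneve (UDP 6081) tunnel device
[root@worker-01 ~]# ip -d link show antrea-gw0         # expected to be reported as an OVS internal port
[root@master-01 ~]# kubectl exec -n kube-system -it antrea-agent-bj42h -c antrea-ovs -- ovs-vsctl list-ports br-int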

Route

[root@worker-01 ~]# ip route
default via 192.168.110.1 dev ens192 proto static metric 100
10.211.0.0/24 via 10.211.0.1 dev antrea-gw0 onlink
10.211.1.0/24 dev antrea-gw0 proto kernel scope link src 10.211.1.1
10.211.2.0/24 via 10.211.2.1 dev antrea-gw0 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.110.0/24 dev ens192 proto kernel scope link src 192.168.110.66 metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1

From the output above we can see that, on a worker node, traffic to the cluster Pod CIDRs is forwarded via antrea-gw0, while traffic to external networks leaves through the host uplink (with SNAT); the default route points to the host interface ens192.
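
To confirm the SNAT part, the NAT rules that Antrea installs on the node can be listed; this is only a sketch, and the exact chain names may differ between Antrea versions:

[root@worker-01 ~]# iptables -t nat -S | grep -i antrea    # Antrea-installed NAT rules (masquerade for Pod traffic leaving the node)
[root@worker-01 ~]# ip route show dev antrea-gw0           # routes whose egress device is the Antrea gateway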

Antrea Agent Pods

As mentioned earlier: on every node, Antrea runs the antrea-agent and the OVS userspace daemons as a DaemonSet, and uses an init container to install the antrea-cni plugin on the host and load the OVS kernel module. antrea-agent manages the OVS bridge and the Pod network interfaces, and programs OVS via the OpenFlow protocol to implement the various networking and security functions.
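
That layout can be verified directly on this cluster; a sketch assuming the default manifest (the container names antrea-agent / antrea-ovs and the install-cni init container are the upstream defaults):

[root@master-01 ~]# kubectl get ds antrea-agent -n kube-system
[root@master-01 ~]# kubectl get ds antrea-agent -n kube-system -o jsonpath='{.spec.template.spec.initContainers[*].name} {.spec.template.spec.containers[*].name}{"\n"}'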

[root@master-01 ~]# kubectl get po -A -owide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
avi-system    ako-0                                1/1     Running   2          3d22h   10.211.2.20      worker-02   <none>           <none>
default       backend1                             1/1     Running   0          175m    10.211.1.15      worker-01   <none>           <none>
default       backend2                             1/1     Running   0          175m    10.211.2.30      worker-02   <none>           <none>
default       frontend                             1/1     Running   0          175m    10.211.1.14      worker-01   <none>           <none>
kube-system   antrea-agent-5fx86                   2/2     Running   7          4d19h   192.168.110.61   master-01   <none>           <none>
kube-system   antrea-agent-bj42h                   2/2     Running   3          4d19h   192.168.110.66   worker-01   <none>           <none>
kube-system   antrea-agent-j28lw                   2/2     Running   8          4d19h   192.168.110.67   worker-02   <none>           <none>
kube-system   antrea-controller-787747dbb7-fck8l   1/1     Running   2          4d19h   192.168.110.66   worker-01   <none>           <none>
kube-system   coredns-558bd4d5db-gtvr4             1/1     Running   1          4d19h   10.211.1.4       worker-01   <none>           <none>
kube-system   coredns-558bd4d5db-vgjs2             1/1     Running   3          4d19h   10.211.2.21      worker-02   <none>           <none>
kube-system   etcd-master-01                       1/1     Running   7          4d20h   192.168.110.61   master-01   <none>           <none>
kube-system   kube-apiserver-master-01             1/1     Running   8          4d20h   192.168.110.61   master-01   <none>           <none>
kube-system   kube-controller-manager-master-01    1/1     Running   34         4d20h   192.168.110.61   master-01   <none>           <none>
kube-system   kube-proxy-nkql8                     1/1     Running   3          4d19h   192.168.110.67   worker-02   <none>           <none>
kube-system   kube-proxy-q2mzj                     1/1     Running   1          4d19h   192.168.110.66   worker-01   <none>           <none>
kube-system   kube-proxy-q7fx9                     1/1     Running   5          4d19h   192.168.110.61   master-01   <none>           <none>
kube-system   kube-scheduler-master-01             1/1     Running   30         4d20h   192.168.110.61   master-01   <none>           <none>

The antrea-agent Pods on the two worker nodes are antrea-agent-bj42h (worker-01, 192.168.110.66) and antrea-agent-j28lw (worker-02, 192.168.110.67).
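
antctl inside the agent container can also list the Pod interfaces managed on that node, assuming the `get podinterface` subcommand is available in this antctl version:

[root@master-01 ~]# kubectl exec -it antrea-agent-bj42h -n kube-system -c antrea-agent -- antctl get podinterface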

OVS

The state of the Antrea OVS bridge can be inspected with "ovs-vsctl show", run in the antrea-ovs container of the antrea-agent Pod from the previous step; worker-01 is used as the example:

[root@master-01 ~]# kubectl exec -n kube-system -it antrea-agent-bj42h -c antrea-ovs -- ovs-vsctl show
7ccc18b5-068f-4977-b7cb-a9fefdd147c3
    Bridge br-int
        datapath_type: system
        Port backend1-911dea
            Interface backend1-911dea
        Port coredns--c4f3d4
            Interface coredns--c4f3d4
        Port antrea-tun0
            Interface antrea-tun0
                type: geneve
                options: {csum="true", key=flow, remote_ip=flow}
        Port antrea-gw0
            Interface antrea-gw0
                type: internal
        Port frontend-fbd015
            Interface frontend-fbd015
    ovs_version: "2.14.0"

The ports on the bridge can be listed with "ovs-ofctl show br-int", again taking worker-01 as the example:

[root@master-01 ~]# kubectl exec -n kube-system -it antrea-agent-bj42h -c antrea-ovs -- ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000166b68c6424a
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(antrea-tun0): addr:06:c6:e4:f2:94:51
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(antrea-gw0): addr:36:ab:6b:31:0c:75
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 5(coredns--c4f3d4): addr:de:72:46:4d:6b:85
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 15(frontend-fbd015): addr:86:e0:33:27:a8:c0
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 16(backend1-911dea): addr:c6:97:94:e1:2e:a5
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

We can see that on this OVS bridge on worker-01, antrea-tun0, antrea-gw0, and the interfaces of the frontend and backend1 Pods are all attached.
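
To go one level deeper, the OpenFlow entries that antrea-agent has programmed on br-int can be dumped from the same container (the flow table layout is version-specific, so this is just a sketch):

[root@master-01 ~]# kubectl exec -n kube-system -it antrea-agent-bj42h -c antrea-ovs -- ovs-ofctl dump-flows br-int --no-stats | head -n 20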

Test Application

The test uses the network-multitool image (https://hub.docker.com/r/praqma/network-multitool/). The yaml below deploys three Pods (frontend, backend1, backend2), one Service (ClusterIP), and two NetworkPolicies (one for the frontend Pod and one for the backend Pods).

apiVersion: v1
kind: Pod
metadata:
  labels:
    role: frontend
  name: frontend
spec:
  containers:
  - image: harbor.corp.tanzu/library/network-multitool@sha256:1a546071c99290fa1d02f8ded26070e1e5711efeb02b3208752b92834f058948
    name: frontend
  nodeName: worker-01
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    role: backend
  name: backend1
spec:
  containers:
  - image: harbor.corp.tanzu/library/network-multitool@sha256:1a546071c99290fa1d02f8ded26070e1e5711efeb02b3208752b92834f058948
    name: backend1
  nodeName: worker-01
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    role: backend
  name: backend2
spec:
  containers:
  - image: harbor.corp.tanzu/library/network-multitool@sha256:1a546071c99290fa1d02f8ded26070e1e5711efeb02b3208752b92834f058948
    name: backend2
  nodeName: worker-02
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: backendsvc
  name: backendsvc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    role: backend
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontendpolicy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: frontend
  policyTypes:
  - Egress
  - Ingress
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: backend
    ports:
    - protocol: TCP
      port: 80
  - ports:
    - protocol: UDP
      port: 53
  ingress:
  - ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backendpolicy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80

In the yaml above, specific Pods are pinned directly to specific worker nodes, which is why `nodeName` is set in each Pod spec.
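
Assuming the manifest above is saved locally as, say, testapp.yaml (the filename is only illustrative), it is applied in one step:

[root@master-01 ~]# kubectl apply -f testapp.yaml
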
After it is applied:

[root@master-01 ~]# kubectl get po -owide
NAME       READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
backend1   1/1     Running   0          20s   10.211.1.15   worker-01   <none>           <none>
backend2   1/1     Running   0          20s   10.211.2.30   worker-02   <none>           <none>
frontend   1/1     Running   0          20s   10.211.1.14   worker-01   <none>           <none>
[root@master-01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
backendsvc   ClusterIP   10.101.216.214   <none>        80/TCP    47s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   4d17h
[root@master-01 ~]# kubectl get ep
NAME         ENDPOINTS                       AGE
backendsvc   10.211.1.15:80,10.211.2.30:80   63s
kubernetes   192.168.110.61:6443             4d17h
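
The two NetworkPolicies from the manifest should also be in place; they can be listed and their selectors checked with:

[root@master-01 ~]# kubectl get networkpolicy
[root@master-01 ~]# kubectl describe networkpolicy frontendpolicy backendpolicy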

To check that everything works, exec into the frontend Pod and reach the backends through the Service backendsvc:

[root@master-01 ~]# kubectl exec -it frontend -- sh
/ # curl backendsvc
Praqma Network MultiTool (with NGINX) - backend1 - 10.211.1.15
/ # curl backendsvc
Praqma Network MultiTool (with NGINX) - backend1 - 10.211.1.15
/ # curl backendsvc
Praqma Network MultiTool (with NGINX) - backend2 - 10.211.2.30

Both backends, backend1 and backend2, respond correctly.
Note that when the frontend Pod sends a request to the backendsvc Service by name, it first sends a DNS query to the Kubernetes DNS service (CoreDNS) to resolve the Service's ClusterIP.
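
That DNS step can be observed directly from the frontend Pod; the network-multitool image ships common DNS client tools, so something like the following should work (a sketch):

[root@master-01 ~]# kubectl exec -it frontend -- nslookup backendsvc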

Test Topology

Putting the above analysis together, we arrive at the test topology shown below.
[Topology diagram: Antrea - Installation and Topology]
