21. Kubernetes (3) ----- Cluster Deployment (Service)


I. Introduction

  • A Service is implemented jointly by the kube-proxy component and iptables.
  • When kube-proxy implements a Service through iptables, it has to program a large number of iptables rules on the host; with a large number of Pods, constantly refreshing those rules consumes a lot of CPU.
  • IPVS-mode Services allow a Kubernetes cluster to scale to far more Pods.

II. Enabling IPVS mode for kube-proxy:

[root@server4 pod]# kubectl -n kube-system get pod |grep proxy

[root@server4 pod]# kubectl -n kube-system get pod -o wide |grep proxy

[root@server4 pod]# kubectl -n kube-system get cm

[root@server4 pod]# kubectl -n kube-system get cm kube-proxy

[root@server4 pod]# kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
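
The edit switches the proxy mode inside the ConfigMap's config.conf. After editing, the relevant fragment should look roughly like this (a sketch; every field other than mode stays at its kubeadm default):

kind: KubeProxyConfiguration
mode: "ipvs"		# an empty string ("") means the default iptables mode

kube-proxy does not reload this on its own, which is why the pods are deleted below and recreated by their DaemonSet.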
[root@server4 pod]# kubectl get pod -n kube-system |grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-cmq2v" deleted
pod "kube-proxy-hqzkh" deleted
pod "kube-proxy-mjzh2" deleted
[root@server4 pod]# ip addr
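
If IPVS is active, ip addr should now show a kube-ipvs0 dummy interface carrying every Service ClusterIP, and ipvsadm -ln (if installed) lists the corresponding virtual servers with their Pod backends.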


III. Creating a Service (NodePort type)

[root@server4 pod]# vim  svc.yaml  	#create the Service (NodePort type)
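
The file itself is not shown in the transcript. A minimal sketch of what svc.yaml plausibly contains, consistent with the mysvc entry below (the app: myapp selector is an assumption, and it is unclear whether type: NodePort was set here or in the kubectl edit that follows):

apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  type: NodePort
  selector:
    app: myapp		# assumed label; must match the target pods
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80	# no nodePort given, so Kubernetes picks one (31525 below)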
[root@server4 pod]# kubectl edit svc mysvc
service/mysvc edited
[root@server4 pod]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
demo         ClusterIP   10.97.29.222    <none>        80/TCP         23h
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        2d23h
mysvc        NodePort    10.100.22.177   <none>        80:31525/TCP   4m18s
[root@server4 pod]# netstat -antlp | grep 31525

[root@server4 pod]# kubectl  run demo --image=busyboxplus -it 
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx-svc
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

[root@server4 pod]# dig -t A nginx-svc.default.svc.cluster.local. @10.96.0.10


IV. Specifying a LoadBalancer-type Service

1. Download the images

[root@foundation15 ~]# lftp 172.25.254.250
lftp 172.25.254.250:~> cd pub/docs/k8s/
cd ok, cwd=/pub/docs/k8s                      
lftp 172.25.254.250:/pub/docs/k8s> get metallb.yaml 
8447 bytes transferred
lftp 172.25.254.250:/pub/docs/k8s> get metallb-v0.10.2.tar 
90602496 bytes transferred                              
lftp 172.25.254.250:/pub/docs/k8s> exit
[root@foundation15 ~]# ls
 3.0.115            manifests                        rht-ks-post.log
 compose            metallb-v0.10.2.tar              rht-ks-pre.log
 daemon.json        metallb.yaml                     root@172.25.15.1
 get-docker.sh      Pictures                         root@172.25.15.4
 k8s-1.21.3.tar    'rhel6 lanmp.pdf'                 zabbix.api
 kube-flannel.yml   rhel-server-7.6-x86_64-dvd.iso
[root@foundation15 ~]# scp metallb-v0.10.2.tar metallb.yaml 172.25.15.4:
root@172.25.15.4's password: 
metallb-v0.10.2.tar                                100%   86MB 106.3MB/s   00:00    
metallb.yaml                                       100% 8447   374.5KB/s   00:00    
[root@foundation15 ~]# 

2. Push the images to the private registry

[root@server4 ~]# ls
kube-flannel.yml  metallb-v0.10.2.tar  metallb.yaml  pod
[root@server4 ~]# docker load -i metallb-v0.10.2.tar 
b2d5eeeaba3a: Loading layer   5.88MB/5.88MB
a273cec6d851: Loading layer  40.54MB/40.54MB
Loaded image: reg.westos.org/metallb/controller:v0.10.2
b763436ff23f: Loading layer  44.15MB/44.15MB
Loaded image: reg.westos.org/metallb/speaker:v0.10.2
[root@server4 ~]# docker push reg.westos.org/metallb/speaker:v0.10.2
The push refers to repository [reg.westos.org/metallb/speaker]
b763436ff23f: Pushed 
b2d5eeeaba3a: Pushed 
v0.10.2: digest: sha256:b5e9cb99c22f8379238784b611bc3ddde18120f9a4b7481ea888ed9c72854222 size: 952
[root@server4 ~]# docker push reg.westos.org/metallb/controller:v0.10.2
The push refers to repository [reg.westos.org/metallb/controller]
a273cec6d851: Pushed 
b2d5eeeaba3a: Mounted from metallb/speaker 
v0.10.2: digest: sha256:1ed3d2bf860220557b0d862d0f4fc07ae84b33def79595489041186906e58e3d size: 952
[root@server4 ~]# 


3. Set up load balancing

Once the Service is submitted, Kubernetes calls the CloudProvider to create a load balancer on the public cloud and configures the proxied Pods' IP addresses as its backends. On this bare-metal cluster there is no CloudProvider, so MetalLB fills that role.
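
A minimal sketch of lb-svc.yaml, consistent with the kubectl get svc output below (the selector is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: lb-svc
spec:
  type: LoadBalancer
  selector:
    app: myapp		# assumed label
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80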

[root@server4 metallb]# ls
configmap.yaml  lb-svc.yaml  metallb.yaml
[root@server4 metallb]# kubectl apply -f lb-svc.yaml 
service/lb-svc unchanged
[root@server4 metallb]# kubectl apply -f configmap.yaml 
configmap/config unchanged
[root@server4 metallb]# kubectl apply -f metallb.yaml 
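
configmap.yaml tells MetalLB which addresses it may hand out. In v0.10.2 the configuration lives in a ConfigMap named config in the metallb-system namespace; a layer-2 sketch (the pool range is an assumption, only 172.25.15.10 is confirmed by the output below):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.25.15.10-172.25.15.50	# assumed range; lb-svc below gets .10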

[root@server4 metallb]# kubectl apply -f lb-svc.yaml 
service/lb-svc unchanged
[root@server4 metallb]# kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
demo         ClusterIP      10.97.29.222   <none>         80/TCP         24h
kubernetes   ClusterIP      10.96.0.1      <none>         443/TCP        3d
lb-svc       LoadBalancer   10.103.125.6   172.25.15.10   80:32421/TCP   5m20s
[root@server4 metallb]# 



[root@foundation15 ~]# curl 172.25.15.10
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@foundation15 ~]# curl 172.25.15.10/hostname.html
replicaset-example-j4lt7
[root@foundation15 ~]# curl 172.25.15.10/hostname.html
replicaset-example-9hbdm


V. Assigning a public IP to a Service

[root@server4 metallb]# ls
configmap.yaml  lb-svc.yaml  metallb.yaml
[root@server4 metallb]# vim ext-ip.yaml
[root@server4 metallb]# cat ext-ip.yaml 
apiVersion: v1
kind: Service
metadata:
  name: ex-service
spec:
  selector:
    app: myapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - 172.25.15.100
[root@server4 metallb]# kubectl apply -f ext-ip.yaml 
service/ex-service created
[root@server4 metallb]# kubectl get svc
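
Note that with externalIPs Kubernetes does not manage the address: you must route traffic for 172.25.15.100 to a cluster node yourself (for example by adding the IP to a node interface), after which kube-proxy forwards anything arriving on that IP and port to the Service's pods.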


  • A global load-balancing service set up to proxy different backend Services is what Kubernetes calls Ingress.
  • Ingress consists of two parts: the Ingress controller and the Ingress resource.
  • The Ingress controller provides the proxying described by the Ingress objects you define. The common reverse-proxy projects, such as Nginx, HAProxy, Envoy, and Traefik, all maintain dedicated Ingress controllers for Kubernetes.
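
No Ingress controller is deployed in this section, but for reference, a minimal Ingress object routing one host to the mysvc Service created earlier might look like this (hostname and object name are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: www.westos.org		# illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mysvc		# the NodePort Service from section III
            port:
              number: 80

What follows instead demonstrates an ExternalName Service, which maps a Service name to an external DNS name via a CNAME record: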
[root@server4 metallb]# vim ex-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: www.westos.org

[root@server4 metallb]# kubectl apply -f ex-svc.yaml 
service/my-service created
[root@server4 metallb]# kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
demo         ClusterIP      10.97.29.222   <none>           80/TCP         26h
ex-service   ClusterIP      10.110.22.46   172.25.15.100    80/TCP         93m
kubernetes   ClusterIP      10.96.0.1      <none>           443/TCP        3d2h
lb-svc       LoadBalancer   10.103.125.6   172.25.15.10     80:32421/TCP   129m
my-service   ExternalName   <none>         www.westos.org   <none>         4s
[root@server4 metallb]# kubectl edit svc my-service 
service/my-service edited
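
The edit replaces the externalName target; judging from the output below, the spec now reads:

spec:
  type: ExternalName
  externalName: www.baidu.com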
[root@server4 metallb]# kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
demo         ClusterIP      10.97.29.222   <none>          80/TCP         27h
ex-service   ClusterIP      10.110.22.46   172.25.15.100   80/TCP         94m
kubernetes   ClusterIP      10.96.0.1      <none>          443/TCP        3d2h
lb-svc       LoadBalancer   10.103.125.6   172.25.15.10    80:32421/TCP   129m
my-service   ExternalName   <none>         www.baidu.com   <none>         39s
[root@server4 metallb]# dig -t A my-service.default.svc.cluster.local. @10.96.0.10


VI. Kubernetes network communication: configuring flannel

flannel supports several backends:
vxlan		//packet encapsulation; the default
DirectRouting	//direct routing; uses vxlan across subnets and direct host routes within the same subnet (see the sketch after this list)
host-gw		//host gateway; good performance, but it only works on a layer-2 network and cannot cross subnets, and with tens of thousands of Pods it can easily cause broadcast storms; not recommended
UDP		//poor performance; not recommended
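
For comparison, the DirectRouting variant mentioned above is an option of the vxlan backend rather than a separate type; a sketch of the corresponding net-conf.json fragment:

net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true
      }
    }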
[root@server4 ~]# cd pod
[root@server4 pod]# ls
cronjob.yaml     headless.yaml  job.yaml  pod.yaml  svc.yaml
deployment.yaml  init.yaml      perl.tar  rs.yaml
[root@server4 pod]# kubectl apply  -f deployment.yaml 
deployment.apps/nginx-deployment created


Switch the backend mode to host-gw

[root@server4 ~]# kubectl -n kube-system  get pod
[root@server4 ~]# kubectl -n kube-system edit cm kube-flannel-cfg
 net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"		#模式修改
      }
    }

[root@server4 ~]# kubectl -n kube-system get pod

[root@server4 ~]# kubectl get pod -n kube-system |grep kube-flannel | awk '{system("kubectl delete pod "$1" -n kube-system")}'
[root@server4 ~]# kubectl get pod -o wide
[root@server4 ~]# route -n
[root@server2 ~]# route -n
[root@server3 ~]# route -n
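
With host-gw in effect, route -n on each node should list one direct route per peer node's Pod subnet via that node's real IP (for example 10.244.1.0/24 via 172.25.15.2 on eth0; addresses assumed), rather than routes through the flannel.1 vxlan device.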

