1. Created an nginx Deployment successfully. Checking the IP assigned to the pod: the pod can be pinged from the node it is running on, but not from the master:
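The situation can be reproduced with a minimal manifest along these lines (the name and apiVersion are assumptions; very old clusters used extensions/v1beta1 instead of apps/v1):

```yaml
# Hypothetical minimal Deployment used to reproduce the issue.
apiVersion: apps/v1        # assumption; extensions/v1beta1 on older clusters
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```

After creating it, `kubectl get pods -o wide` shows the pod IP and the node it was scheduled on, which is what the ping test below is based on.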
Analysis: the first suspicion was that flanneld was not running properly on the two nodes.
1. Check the network interfaces created by flanneld
[root@k8s2-1 ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 10.25.1.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:c5ff:fe30:de12  prefixlen 64  scopeid 0x20<link>
        ether 02:42:c5:30:de:12  txqueuelen 0  (Ethernet)
        RX packets 58  bytes 3880 (3.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18  bytes 1412 (1.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.191.21  netmask 255.255.255.0  broadcast 192.168.191.255
        inet6 fe80::370c:bd9e:9b5a:ed83  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:de:b2:e0  txqueuelen 1000  (Ethernet)
        RX packets 34986  bytes 15926405 (15.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 33482  bytes 6913324 (6.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.25.1.0  netmask 255.255.0.0  destination 10.25.1.0
        inet6 fe80::1b37:644b:a7ca:c658  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 785  bytes 65940 (64.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 19  bytes 1488 (1.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 247  bytes 29744 (29.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 247  bytes 29744 (29.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth5112d6a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet6 fe80::54f3:44ff:fed1:43f3  prefixlen 64  scopeid 0x20<link>
        ether 56:f3:44:d1:43:f3  txqueuelen 0  (Ethernet)
        RX packets 18  bytes 1412 (1.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 41  bytes 3290 (3.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
The flannel0 interface is created automatically when the flanneld component is installed. It is a TUN virtual device: it receives packets destined for pods on other hosts and hands them to the flanneld process for forwarding.
The veth5112d6a interface: pods on the same host share a Linux bridge (cni0 in CNI-based setups; docker0 in this deployment). When kubelet creates a container, a virtual interface vethXXX is created for it and attached to that bridge.
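The bridge membership can be verified directly on the node (a sketch; the interface names are the ones from the output above):

```shell
# List the docker0 bridge and its attached interfaces (bridge-utils package):
brctl show docker0
# Equivalent with iproute2:
ip link show master docker0
# veth5112d6a should appear as a member of the docker0 bridge.
```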
2. Check the network configuration stored in etcd; the information is correct
[root@k8s1-1 ~]# etcdctl ls
/k8s
/registry
[root@k8s1-1 ~]# etcdctl ls /k8s/network
/k8s/network/subnets
/k8s/network/config
[root@k8s1-1 ~]# etcdctl ls /k8s/network/subnets
/k8s/network/subnets/10.25.1.0-24
/k8s/network/subnets/10.25.15.0-24
/k8s/network/subnets/10.25.92.0-24
[root@k8s1-1 ~]# etcdctl ls /k8s/network/subnets/10.25.1.0-24
/k8s/network/subnets/10.25.1.0-24
[root@k8s1-1 ~]# etcdctl get /k8s/network/subnets/10.25.1.0-24
{"PublicIP":"192.168.191.21"}
[root@k8s1-1 ~]# etcdctl get /k8s/network/subnets/10.25.15.0-24
{"PublicIP":"192.168.191.20"}
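For reference, the per-node subnet leases above are derived from flannel's top-level network config, which must be written to etcd before flanneld first starts. A sketch of what that config would look like here (the /k8s/network prefix is taken from the output above; the exact JSON is an assumption inferred from the 10.25.x.x /24 leases):

```shell
# Assumed flannel network config matching the subnets listed above:
etcdctl set /k8s/network/config '{"Network":"10.25.0.0/16","SubnetLen":24}'
etcdctl get /k8s/network/config
```

Each flanneld instance then leases one /24 from this range and records its node's PublicIP under /k8s/network/subnets, which is the mapping shown in the transcript.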
3. Checked the routing table on the node. The bridge addresses xxx.xxx.1.0 and xxx.xxx.1.1 could both be pinged from the master, but the pod IP itself could not, which was puzzling.
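With flannel's UDP backend, the node routing table typically contains entries like the following (a sketch reconstructed from the interface addresses above, not verbatim output):

```shell
route -n
# Destination    Gateway    Genmask          Iface
# 10.25.0.0      0.0.0.0    255.255.0.0      flannel0   # cross-node pod traffic -> tun device
# 10.25.1.0      0.0.0.0    255.255.255.0    docker0    # local pod subnet -> bridge
# 192.168.191.0  0.0.0.0    255.255.255.0    ens33      # host network
```

Since these routes were present and the bridge addresses answered, the routing layer itself was fine, which pointed the investigation elsewhere.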
4. Finally it occurred to me that Kubernetes also depends on iptables. After running iptables -P INPUT ACCEPT; iptables -P FORWARD ACCEPT; iptables -F, the ping finally succeeded.
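A likely root cause is that Docker 1.13+ sets the iptables FORWARD chain policy to DROP, which silently discards forwarded pod-to-pod traffic. The fix above works, but iptables -F flushes every filter rule, which is heavy-handed; a narrower and persistent alternative (assuming the pod CIDR above and a CentOS-style iptables service) is:

```shell
# Allow forwarding without flushing all rules:
iptables -P FORWARD ACCEPT
# Or, more narrowly, accept only the pod overlay network:
iptables -A FORWARD -s 10.25.0.0/16 -j ACCEPT
iptables -A FORWARD -d 10.25.0.0/16 -j ACCEPT
# Persist across reboots (iptables-services package):
service iptables save
```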
2.{kubelet k8s-node-2} spec.containers{ct} Warning BackOff Back-off restarting failed docker container
The image does not run a foreground process, so the container exits immediately after starting, and kubelet keeps restarting it, producing the BackOff events.
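A container needs a long-running foreground process as PID 1. A hypothetical minimal example of avoiding the BackOff by giving such an image something to run:

```yaml
# Hypothetical pod: busybox's default command exits immediately, which would
# trigger the BackOff above; an explicit long-running command keeps it alive.
apiVersion: v1
kind: Pod
metadata:
  name: ct
spec:
  containers:
  - name: ct
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # keeps PID 1 in the foreground
```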