6443: connect: network is unreachable

Authentication on the K8s API Server has long been a source of trouble. By default the API server opens two ports: 8080 (the localhost, or insecure, port) and 6443 (the secure port). Port 8080 serves plain HTTP with no authentication and is meant only for local testing and monitoring on the machine itself; 6443 requires authentication, is protected by TLS, and is the port used for remote access, e.g. managing the cluster with kubectl.
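Before digging in, the secure port can be probed directly; a minimal sanity check, assuming the master address 10.10.16.82 used throughout this post:

curl -k https://10.10.16.82:6443/version     # any HTTP answer (even 401/403) means the port is reachable
ss -tlnp | grep 6443                         # on the master: is kube-apiserver actually listening?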

The trouble started when kubectl on the master could no longer reach the API server on 6443; the first suspect, the firewall, was not even installed:

root@ubuntu:~# kubectl label node bogon  node-role.kubernetes.io/worker=worker
Unable to connect to the server: dial tcp 10.10.16.82:6443: connect: network is unreachable
root@ubuntu:~#   systemctl status  firewalld
Unit firewalld.service could not be found.
root@ubuntu:~# 
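With no firewall in play, "connect: network is unreachable" usually means the kernel has no route to the target, not that something is filtering packets. A few quick checks (the address is the one from this environment):

ip addr show              # is 10.10.16.82 still assigned to an interface?
ip route                  # does any route cover 10.10.16.82?
ping -c 3 10.10.16.82     # basic reachability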

With the firewall ruled out and no obvious cause in sight, the quickest path was to tear the cluster down and rebuild it:

root@ubuntu:~# kubeadm reset

The re-init immediately tripped over a missing kernel setting:

root@ubuntu:~# kubeadm init --kubernetes-version=v1.18.1  --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.10.16.82  --image-repository registry.aliyuncs.com/google_containers
W0618 18:47:56.560541    2512 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
root@ubuntu:~# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
-bash: /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

 

root@ubuntu:~# modprobe br_netfilter
root@ubuntu:~# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
-bash: /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
root@ubuntu:~# ls /proc/sys/net/bridge
ls: cannot access /proc/sys/net/bridge: No such file or directory
root@ubuntu:~# 
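Strangely, modprobe reported success while /proc/sys/net/bridge still did not exist. Whether the module really loaded can be verified before resorting to a reboot:

lsmod | grep br_netfilter     # no output means the module is not loaded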

 

reboot
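After the reboot the module was in place and the preflight check passed. To avoid depending on a reboot (or on a manual modprobe after the next one), the module and the sysctl can be made persistent; a sketch following the standard Kubernetes installation docs:

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
modprobe br_netfilter
sysctl --system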

 

root@ubuntu:~# kubectl get pods
W0618 19:10:03.441191    5273 loader.go:223] Config not found: /etc/kubernetes/admin.conf
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
root@ubuntu:~#  kubeadm init --kubernetes-version=v1.18.1  --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.10.16.82  --image-repository registry.aliyuncs.com/google_containers
W0618 19:10:25.620172    5382 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using kubeadm config images pull
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.1: output: Error response from daemon: Get https://registry.aliyuncs.com/v2/: dial tcp: lookup registry.aliyuncs.com: Temporary failure in name resolution
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
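The Aliyun mirror failed only because name resolution was broken at that moment; the retry below, without --image-repository, pulled from the default registry without trouble. If the mirror is needed, resolution can be checked first and the images pre-pulled, as kubeadm itself suggests:

cat /etc/resolv.conf                  # is a nameserver configured at all?
getent hosts registry.aliyuncs.com    # does the mirror resolve?
kubeadm config images pull --kubernetes-version v1.18.1 --image-repository registry.aliyuncs.com/google_containers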
root@ubuntu:~#  kubeadm init --kubernetes-version=v1.18.1  --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.10.16.82 
W0618 19:11:19.808717    6071 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using kubeadm config images pull
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.16.82]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ubuntu localhost] and IPs [10.10.16.82 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ubuntu localhost] and IPs [10.10.16.82 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0618 19:11:29.919280    6071 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0618 19:11:29.921005    6071 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 64.003340 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntu as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: qpqoq3.y2lo787xtima2xaz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.16.82:6443 --token qpqoq3.y2lo787xtima2xaz     --discovery-token-ca-cert-hash sha256:374990d65ea0b1dd227fe68aa994fa16439d0ddf99735642eee6116d98e1b829 
root@ubuntu:~# kubectl get pods
No resources found in default namespace.
root@ubuntu:~# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
ubuntu   Ready    master   28s   v1.18.1
root@ubuntu:~# 
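One step from the init output still matters here: deploying a pod network. Since --pod-network-cidr=10.244.0.0/16 is flannel's default, flannel is the natural choice; at the time it was typically applied as below (the manifest has since moved to the flannel-io organization, so treat the URL as an assumption):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml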

On the new worker (cloud), the first join attempt failed because files from a previous attempt were still on disk:

root@cloud:~# kubeadm join 10.10.16.82:6443 --token qpqoq3.y2lo787xtima2xaz     --discovery-token-ca-cert-hash sha256:374990d65ea0b1dd227fe68aa994fa16439d0ddf99735642eee6116d98e1b829
W0618 19:13:12.330177  424863 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
root@cloud:~# kubeadm -f
unknown shorthand flag: f in -f
To see the stack trace of this error execute with --v=5 or higher
root@cloud:~# kubeadm reset -f
[preflight] Running pre-flight checks
W0618 19:13:48.230061  425763 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your systems IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
root@cloud:~# rm -rf /etc/cni/net.d
root@cloud:~# kubeadm join 10.10.16.82:6443 --token qpqoq3.y2lo787xtima2xaz     --discovery-token-ca-cert-hash sha256:374990d65ea0b1dd227fe68aa994fa16439d0ddf99735642eee6116d98e1b829 
W0618 19:15:43.669891  426976 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -oyaml
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run kubectl get nodes on the control-plane to see this node join the cluster.

root@cloud:~# 
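As the output above warns, kubeadm reset deliberately leaves CNI configuration, iptables/IPVS state, and kubeconfig files behind. Beyond the rm -rf /etc/cni/net.d used here, a fuller manual cleanup (a sketch assembled from the hints reset prints) looks like this:

rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear               # only if the cluster used IPVS
rm -rf $HOME/.kube/config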

Back on the master, both nodes showed up and went Ready within seconds:

root@ubuntu:~# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
bogon    NotReady   <none>   8s      v1.18.1
cloud    Ready      <none>   2m37s   v1.21.1
ubuntu   Ready      master   5m58s   v1.18.1
root@ubuntu:~# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
bogon    Ready    <none>   10s     v1.18.1
cloud    Ready    <none>   2m39s   v1.21.1
ubuntu   Ready    master   6m      v1.18.1
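Note that the cloud node runs a v1.21.1 kubelet against a v1.18.1 control plane; Kubernetes' version-skew policy does not support a kubelet newer than the API server, so the versions should be aligned. Instead of re-running kubectl get nodes by hand, the NotReady-to-Ready transition can be watched, and a stuck node inspected:

kubectl get nodes -w           # stream status changes
kubectl describe node bogon    # conditions and events explain a NotReady state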

For reference, here is how the components are managed when they are installed as binaries with their own systemd units (under kubeadm, the control-plane components run as static pods instead).

Services shared by the master and the nodes
systemctl restart etcd
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

Master-only services
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

Node-only services
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet    # restart whenever the node status shows NotReady

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

systemctl status etcd
systemctl status flanneld
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
systemctl status kubelet
systemctl status kube-proxy
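A small loop makes checking all of these at once less tedious (the service names are the ones listed above):

for svc in etcd flanneld kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
    printf '%-24s %s\n' "$svc" "$(systemctl is-active $svc)"
done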

 
