I have an HA k8s cluster with 3 control-plane nodes and 2 worker nodes. When I power off 2 control-plane nodes and all worker nodes, leaving only 1 control-plane node alive, the following error occurs:
```
scnzzh@zubt2:~$ kubectl get node
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes)
scnzzh@zubt2:~$ kubectl get pod
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get pods)
```
When I power one of the control-plane nodes back on, the error disappears:
```
scnzzh@zubt2:~$ kubectl get node
NAME    STATUS     ROLES                  AGE   VERSION
zubt1   NotReady   control-plane,master   9d    v1.20.2
zubt2   Ready      control-plane,master   9d    v1.20.2
zubt3   Ready      control-plane,master   9d    v1.20.2
zubt4   NotReady   <none>                 9d    v1.20.2
zubt5   NotReady   <none>                 9d    v1.20.2
```
So I think an HA k8s cluster with 3 control-plane nodes needs at least 2 of them alive to work. This matches etcd's quorum requirement: a 3-member etcd cluster needs a majority of 2 members to serve requests, so with only 1 control-plane node (and its etcd member) up, the API server cannot reach quorum and every kubectl call fails.
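As a quick sanity check on the arithmetic, here is a small sketch. It assumes a stacked kubeadm topology, i.e. one etcd member per control-plane node (that assumption is mine, not stated above):

```shell
# Quorum arithmetic for a Raft-based store such as etcd (sketch).
# Assumes a stacked topology: one etcd member per control-plane node.
members=3
quorum=$(( members / 2 + 1 ))      # majority needed to serve requests
tolerated=$(( members - quorum ))  # member failures the cluster survives
echo "members=$members quorum=$quorum tolerated=$tolerated"
# prints: members=3 quorum=2 tolerated=1
```

With 3 members the cluster tolerates only 1 failure, which is exactly why powering off 2 control-plane nodes breaks the API while powering one back on restores it.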