Building an Enterprise-Ready Kubernetes Container Platform


title: Building an Enterprise-Ready Kubernetes Container Platform
author: susu

PRODUCTION-READY KUBERNETES CLUSTER

What if you need a production-ready Kubernetes cluster, but for some reason you cannot use an existing cloud offering, such as Google Container Engine? Kubernetes can be installed on a variety of platforms, including on-premises VMs, VMs on a cloud provider, and bare-metal servers. Several tools allow installing a production-ready Kubernetes cluster on a variety of targets:

  • kargo
  • kube-deploy
  • kubeadm
  • Kubespray
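As a minimal sketch of what a tool-driven install can look like, kubeadm can bootstrap a cluster with a replicated control plane behind a shared endpoint. The load-balancer address `kube-api.example.com` is an assumption for illustration, and the `<token>`, `<hash>`, and `<key>` placeholders come from the output of the init step:

```shell
# On the first master: initialize the control plane behind a shared
# load-balancer endpoint so that more masters can join later.
sudo kubeadm init \
  --control-plane-endpoint "kube-api.example.com:6443" \
  --upload-certs

# On each additional master: join as a control-plane node using the
# join command and certificate key printed by the init step above.
sudo kubeadm join kube-api.example.com:6443 \
  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>

# On each worker: join as a regular node.
sudo kubeadm join kube-api.example.com:6443 \
  --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```

The shared endpoint matters for high availability: every node talks to the load balancer, not to a specific master, so a master can fail without the cluster losing its API.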

When deploying a Kubernetes cluster in production, several concerns must be addressed to ensure the best possible outcome. High availability of master nodes minimizes downtime and data loss and eliminates a single point of failure. Networking should be both robust and scalable to handle growing needs (e.g., adding nodes to a cluster to run more replicas). Finally, some users may want to take advantage of multi-site support to uniformly handle geographically dispersed data centers.

HIGH AVAILABILITY

Possible cases of failure in Kubernetes clusters usually involve pods, nodes, and master nodes. Pod failures can be handled by built-in Kubernetes features, so the main concern here is to provide persistent storage where needed. Node failures are handled by the master nodes and require the use of services outside of Kubernetes. For example, clients talk to an external load balancer rather than directly to the nodes; if an entire node fails, traffic can be load-balanced to another node running the corresponding pods. Finally, the master controller itself can fail, or one of its services can die. For a highly available environment, the master controller and its components need to be replicated; fortunately, Kubernetes supports multiple master nodes.
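To make the node-failure case concrete, a hypothetical front-end could be replicated and exposed through an external load balancer with plain kubectl commands (the deployment name and image below are illustrative assumptions, and the `--replicas` flag requires a reasonably recent kubectl):

```shell
# Run three replicas so the loss of one node leaves pods on others.
kubectl create deployment web --image=nginx --replicas=3

# Expose them behind a Service of type LoadBalancer; clients hit the
# load balancer, never an individual node, so node failure is masked.
kubectl expose deployment web --type=LoadBalancer --port=80

# Watch the endpoints change as pods are rescheduled after a failure.
kubectl get endpoints web --watch
```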

Furthermore, when it comes to monitoring the deployment, It is advisable that process watchers be implemented to “watch” the services that exist on the master node. For example, the API service can be monitored by a kubelet. It can be configured with less aggressive security settings to monitor non-Kubernetes components such as privileged containers. On a different level is the issue of what happens if a kubelet dies. Monitoring processes can be deployed to ensure that the kubelet can be restarted. Finally, redundant storage service can be achieved with clustered etcd.
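One common way to realize such a watcher, assuming the kubelet runs under systemd (an assumption; the text above does not prescribe a supervisor), is a restart policy plus a periodic health check of the clustered etcd:

```shell
# Drop-in that makes systemd act as the kubelet's process watcher,
# restarting it automatically if it dies.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-restart.conf
[Service]
Restart=always
RestartSec=5
EOF
sudo systemctl daemon-reload

# Check the health of every member of a clustered etcd (v3 API);
# the endpoint address is an illustrative assumption.
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  endpoint health --cluster
```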

SECURITY

First of all, direct access to cluster nodes (either physical or via SSH) should be restricted; kubectl exec provides access to containers, and this should be enough. Use security contexts to segregate privileges. Defining resource quotas helps prevent DoS attacks. Selectively grant users permissions according to their business needs. Finally, consider separating unrelated network traffic (e.g., the load balancer only needs to see the front-end service, while the back-end service has no need to contact the load balancer).
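A few of these controls can be expressed directly with kubectl. The namespace, user, and limits below are hypothetical, chosen only to show the shape of the commands:

```shell
# Cap resources in a namespace to blunt DoS-style resource exhaustion.
kubectl create quota team-a-quota \
  --namespace=team-a --hard=cpu=4,memory=8Gi,pods=20

# Grant a user only what their role requires: read-only access to pods.
kubectl create role pod-reader \
  --namespace=team-a --verb=get,list,watch --resource=pods
kubectl create rolebinding alice-pod-reader \
  --namespace=team-a --role=pod-reader --user=alice
```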

SCALE

Kubernetes allows nodes to be added and removed dynamically. Each new node has to be configured appropriately and pointed at the master node. The main processes of interest are kubelet and kube-proxy. For larger clusters, a means of automation such as Ansible or Salt is preferred. If the cluster is running on one of the supported cloud providers, there is also the option to try the Cluster Autoscaler.
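Assuming the cluster was set up with kubeadm (one of the tools listed earlier, an assumption rather than something this section requires), adding a node dynamically reduces to generating a join command on a master and running it on the new machine:

```shell
# On an existing master: print a fresh, ready-to-run join command.
kubeadm token create --print-join-command

# On the new node: run the printed command, which configures kubelet
# and kube-proxy and points them at the master's API endpoint, e.g.:
# sudo kubeadm join <master-endpoint>:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash>

# Back on the master: confirm the node registered.
kubectl get nodes
```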
