Hostname | IP | Notes
k8s_master | 192.168.234.130 | Master & etcd
k8s_node1 | 192.168.234.131 | Node1
k8s_node2 | 192.168.234.132 | Node2
Kubernetes is Google's open-source system for managing large-scale container clusters. This guide uses the Kubernetes components shipped with CentOS 7, the distributed key-value store etcd, and flannel to give Docker containers cross-host connectivity.
(The cluster needs consistent NTP time across all machines; since these are cloud machines, the system already keeps the clock in sync by default.)
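A quick way to confirm this on CentOS 7 (assuming chronyd, the default time daemon, is in use; this check is not part of the original post):
timedatectl status | grep "NTP synchronized"   # should report "yes"
chronyc sources   # lists the NTP servers currently in use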
Step 1: Component installation
Master node:
systemctl stop firewalld && systemctl disable firewalld
yum install -y kubernetes etcd docker flannel
Node:
systemctl stop firewalld && systemctl disable firewalld
yum install -y kubernetes docker flannel
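To confirm the packages actually landed on each machine, a simple sanity query against rpm (not in the original post):
rpm -qa | grep -E 'kubernetes|etcd|flannel|docker'   # should list kubernetes-*, etcd (Master only), flannel and docker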
Step 2: Configuration
Node | Services to run
Master | etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet docker flanneld
node | flanneld docker kube-proxy kubelet
Master:
hostnamectl set-hostname k8s_master
vi /etc/hosts
192.168.234.130 k8s_master
192.168.234.131 k8s_node1
192.168.234.132 k8s_node2
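An optional sanity check that the names resolve as intended, using the /etc/hosts entries just added:
getent hosts k8s_master k8s_node1 k8s_node2   # each name should map to its IP from the table above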
etcd configuration
vi /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
apiserver configuration
vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" (the insecure address the apiserver binds to)
KUBE_API_PORT="--port=8080" (the insecure port the apiserver listens on)
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.234.130:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.234.0/24" (here set to the same subnet as the VMs)
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""
kubelet configuration
vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.234.130"
KUBELET_API_SERVER="--api-servers=http://192.168.234.130:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
Common config
vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.234.130:8080"
The scheduler and proxy are not customized for now, so their config files can be left at the package defaults (see the reference below).
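For reference, the stock files shipped by the CentOS kubernetes package only define empty argument variables, so there is nothing to change (shown for completeness; verify the paths on your install):
cat /etc/kubernetes/scheduler   # KUBE_SCHEDULER_ARGS=""
cat /etc/kubernetes/proxy       # KUBE_PROXY_ARGS=""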
flannel configuration
vi /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.234.130:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
Use etcdctl set to modify a key and etcdctl get to query it. Whether you create or modify the key, the full path must match the FLANNEL_ETCD_KEY configured above (here /atomic.io/network/config); otherwise flanneld will fail to start.
Create the network:
systemctl enable etcd.service
systemctl start etcd.service
etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'   # create
etcdctl rm /atomic.io/network/config   # delete
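To verify the key was written correctly (the value must be valid JSON, or flanneld will refuse to start):
etcdctl get /atomic.io/network/config   # should print {"Network":"172.17.0.0/16"}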
Start the services on the Master:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet docker flanneld ; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done;
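A few optional sanity checks once the loop finishes (assuming the insecure port 8080 configured above; not part of the original post):
etcdctl cluster-health   # etcd should report "cluster is healthy"
curl http://192.168.234.130:8080/healthz   # the apiserver should answer "ok"
kubectl get componentstatuses   # scheduler, controller-manager and etcd should all be Healthy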
Node configuration:
hostnamectl set-hostname k8s_node1   # use k8s_node2 on the second node
kubelet configuration
vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.234.131" (the IP of this node)
KUBELET_API_SERVER="--api-servers=http://192.168.234.130:8080" (the Master's IP)
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
Common config
vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.234.130:8080"
flannel configuration
vi /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.234.130:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
Start the services on the Nodes:
for SERVICES in kube-proxy kubelet docker flanneld; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done;
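On each node, flanneld writes the subnet it leased from etcd into /run/flannel/subnet.env, and Docker is restarted onto a bridge inside that range. A quick check (device names per the stock CentOS 7 flannel package):
cat /run/flannel/subnet.env   # FLANNEL_SUBNET should be a /24 inside 172.17.0.0/16
ip addr show flannel0   # the flannel overlay device
ip addr show docker0    # the docker bridge, inside FLANNEL_SUBNET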
Check that all nodes are registered and Ready:
kubectl -s 192.168.234.130:8080 get no
kubectl get nodes
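If everything is up, the output should look roughly like this (illustrative only; the node names are the --hostname-override IPs and AGE will differ):
NAME              STATUS    AGE
192.168.234.131   Ready     1m
192.168.234.132   Ready     1m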
Browse the API at http://kube-apiserver:port
http://192.168.234.130:8080/ lists all available API paths
http://192.168.234.130:8080/healthz/ping reports the health status
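Both can also be checked from the command line, for example:
curl http://192.168.234.130:8080/version        # apiserver build information
curl http://192.168.234.130:8080/healthz/ping   # should return "ok"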
wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml   # deploy the dashboard
kubectl delete -f kubernetes-dashboard.yaml   # remove it if you need to redeploy
kubectl create -f kubernetes-dashboard.yaml   # create it again
kubectl get namespace
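The dashboard objects land in the kube-system namespace; to confirm the pod is running and find its service (the exact pod name is generated, so the output will vary):
kubectl get pods --namespace=kube-system   # kubernetes-dashboard-xxxx should be Running
kubectl get svc --namespace=kube-system    # shows the dashboard service and its port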
This article is reposted from crazy_charles's 51CTO blog. Original link: http://blog.51cto.com/douya/1945382. Please contact the original author for permission before reprinting.