09 - Deploying and Configuring the kubedns Add-on

Installing and configuring the kubedns add-on

The official YAML files live in the Kubernetes source tree at kubernetes/cluster/addons/dns.

The add-on is deployed directly on Kubernetes; the official manifests reference the following images:

gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1

Here I use the mirrored images hosted on Tenxcloud (时速云):

index.tenxcloud.com/jimmy/k8s-dns-kube-dns-amd64:1.14.1
index.tenxcloud.com/jimmy/k8s-dns-dnsmasq-nanny-amd64:1.14.1
index.tenxcloud.com/jimmy/k8s-dns-sidecar-amd64:1.14.1
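
If you would rather leave the upstream gcr.io image names in the YAML untouched, an alternative is to pull the mirrored images on every node and retag them locally. A minimal sketch, assuming Docker is the container runtime (repeat for the dnsmasq-nanny and sidecar images):

# Pull the mirror, then give it the upstream name that the official manifests expect
$ docker pull index.tenxcloud.com/jimmy/k8s-dns-kube-dns-amd64:1.14.1
$ docker tag index.tenxcloud.com/jimmy/k8s-dns-kube-dns-amd64:1.14.1 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1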

The following YAML manifests use these Tenxcloud mirrors:

kubedns-cm.yaml
kubedns-sa.yaml
kubedns-controller.yaml
kubedns-svc.yaml

The already-modified YAML files can be found in the dns directory.
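
kubedns-cm.yaml likewise needs no edits; in the upstream addon it is just an (initially empty) ConfigMap that kube-dns watches for optional overrides such as stub domains. A sketch of what it should look like (verify against your copy under cluster/addons/dns):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists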

The predefined system RoleBinding

The predefined ClusterRoleBinding system:kube-dns binds the kube-dns ServiceAccount in the kube-system namespace to the system:kube-dns ClusterRole, which has permission to access the DNS-related APIs of kube-apiserver:

$ kubectl get clusterrolebindings system:kube-dns -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2017-04-11T11:20:42Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-dns
  resourceVersion: "58"
  selfLink: /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings/system%3Akube-dns
  uid: e61f4d92-1ea8-11e7-8cd7-f4e9d49f8ed0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-dns
subjects:
- kind: ServiceAccount
  name: kube-dns
  namespace: kube-system

The Pods defined in kubedns-controller.yaml use the kube-dns ServiceAccount defined in kubedns-sa.yaml, so they have permission to access the DNS-related APIs of kube-apiserver.
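
If you want to sanity-check what those DNS-related permissions actually are, the commands below should work on any RBAC-enabled cluster (the impersonation check assumes your own credentials are allowed to impersonate ServiceAccounts):

# Inspect the rules attached to the bound ClusterRole (expect list/watch on endpoints and services)
$ kubectl get clusterrole system:kube-dns -o yaml
# Confirm the kube-dns ServiceAccount can actually use them
$ kubectl auth can-i list endpoints --as=system:serviceaccount:kube-system:kube-dns
$ kubectl auth can-i watch services --as=system:serviceaccount:kube-system:kube-dns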

Configuring the kube-dns ServiceAccount

No changes are needed here.
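
For reference, the unmodified kubedns-sa.yaml is only the ServiceAccount definition, roughly as below (labels may differ slightly between releases):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile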

Configuring the kube-dns Service

# cat kubedns-svc.yaml
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

  • spec.clusterIP = 10.254.0.2 explicitly sets the kube-dns Service IP; this IP must match the value of the kubelet's --cluster-dns flag (a quick check is sketched below).
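
A simple way to perform that check on a node (the grep pattern assumes the kubelet takes its cluster DNS settings from command-line flags rather than a config file; the expected output reflects the values used throughout this guide):

# Both values must line up with clusterIP above and the --domain passed to kube-dns
$ ps -ef | grep [k]ubelet | grep -o -- '--cluster-dns=[^ ]*'
--cluster-dns=10.254.0.2
$ ps -ef | grep [k]ubelet | grep -o -- '--cluster-domain=[^ ]*'
--cluster-domain=cluster.local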

Configuring the kube-dns Deployment

# cat kubedns-controller.yaml

# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# __MACHINE_GENERATED_WARNING__

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: index.tenxcloud.com/jimmy/k8s-dns-kube-dns-amd64:1.14.1
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        #__PILLAR__FEDERATIONS__DOMAIN__MAP__
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: index.tenxcloud.com/jimmy/k8s-dns-dnsmasq-nanny-amd64:1.14.1
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --server=/cluster.local./127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: index.tenxcloud.com/jimmy/k8s-dns-sidecar-amd64:1.14.1
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

  • The main thing to change is the image field, pointing it at whichever registry you use (a one-line substitution is sketched after this list).
  • The Deployment uses the kube-dns ServiceAccount that the system has already bound via the predefined RoleBinding, so it has permission to access the DNS-related APIs of kube-apiserver.
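
The image substitution from the first bullet can be done in one pass with sed rather than by hand; a sketch, assuming you start from a fresh copy of the official file and your mirror keeps the same image names and tags:

# Rewrite the registry prefix in place; substitute your own registry as needed
$ sed -i 's|gcr.io/google_containers|index.tenxcloud.com/jimmy|g' kubedns-controller.yaml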

Applying all the manifests

# pwd
/root/yaml/dns
# ls *.yaml
kubedns-cm.yaml  kubedns-controller.yaml  kubedns-sa.yaml  kubedns-svc.yaml
# kubectl create -f .
configmap "kube-dns" created
deployment "kube-dns" created
serviceaccount "kube-dns" created
service "kube-dns" created

# Check the Deployment status with kubectl get deployment -n kube-system
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-dns   1         1         1            1           12m

# Check that the DNS pod started correctly with kubectl get pods --all-namespaces | grep kube-dns
kube-system   kube-dns-351402727-vcvpc   3/3       Running   0         10m

# Check the service and its ports with kubectl get services --all-namespaces | grep kube-dns
kube-system   kube-dns   10.254.0.2   <none>   53/UDP,53/TCP   14m
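
If the pod does not reach 3/3 Running, the per-container logs are the first place to look (the pod name here is the one from my output; substitute yours):

$ kubectl logs kube-dns-351402727-vcvpc -n kube-system -c kubedns
$ kubectl logs kube-dns-351402727-vcvpc -n kube-system -c dnsmasq
$ kubectl logs kube-dns-351402727-vcvpc -n kube-system -c sidecar
# kubectl describe also surfaces image-pull and scheduling problems
$ kubectl describe pod kube-dns-351402727-vcvpc -n kube-system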

Testing kubedns

Create an nginx Deployment

# cat my-nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: index.tenxcloud.com/docker_library/nginx:1.9.0
        ports:
        - containerPort: 80

# kubectl create -f my-nginx.yaml
deployment "my-nginx" created
# kubectl get pods --all-namespaces | grep my-nginx
default   my-nginx-925637600-4sr5g   1/1       Running   0          19m
default   my-nginx-925637600-6f9w7   1/1       Running   0          19m

Expose the Deployment to create the my-nginx Service

# kubectl expose deploy my-nginx
# kubectl get services --all-namespaces |grep my-nginx
default my-nginx 10.254.101.236 <none> 80/TCP 10s
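
Optionally, confirm that the Service picked up the two nginx Pods as endpoints before moving on to the DNS test:

# Expect two pod-IP:80 entries in the ENDPOINTS column
$ kubectl get endpoints my-nginx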

Create another Pod and check whether its /etc/resolv.conf picks up the --cluster-dns and --cluster-domain values configured on the kubelet, and whether the service name my-nginx resolves to its Cluster IP 10.254.101.236:

# cat nginxnew.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginxnew
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: nginxnew
    spec:
      containers:
      - name: nginxnew
        image: index.tenxcloud.com/docker_library/nginx:1.9.0
        ports:
        - containerPort: 80

# kubectl create -f nginxnew.yaml
deployment "nginxnew" created
# kubectl get pods --all-namespaces | grep nginxnew
default   nginxnew-248912974-bwqrx   1/1       Running   0          4m
default   nginxnew-248912974-c881p   1/1       Running   0          4m
# kubectl exec nginxnew-248912974-bwqrx -i -t -- /bin/bash
root@nginxnew-248912974-bwqrx:/# cat /etc/resolv.conf
nameserver 10.254.0.2
search default.svc.cluster.local. svc.cluster.local. cluster.local.
options ndots:5
root@nginxnew-248912974-bwqrx:/# ping my-nginx
PING my-nginx.default.svc.cluster.local (10.254.101.236): 56 data bytes
^C--- my-nginx.default.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
root@nginxnew-248912974-bwqrx:/# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.254.0.1): 56 data bytes
^C--- kubernetes.default.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
root@nginxnew-248912974-bwqrx:/# ping kube-dns.kube-system.svc.cluster.local
PING kube-dns.kube-system.svc.cluster.local (10.254.0.2): 56 data bytes
^C--- kube-dns.kube-system.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss

Judging by the results, service names resolve correctly to their Cluster IPs, so kubedns is working. The pings themselves time out because a Service's Cluster IP is virtual and does not answer ICMP; what matters here is the resolved address shown in each PING header line.
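
For a resolution check that does not involve ICMP at all, getent can be used from inside the same test pod; it should be available in the Debian-based nginx image, and the expected address is the Cluster IP shown above:

root@nginxnew-248912974-bwqrx:/# getent hosts my-nginx
10.254.101.236  my-nginx.default.svc.cluster.local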
