Introduction to k8s
k8s is short for Kubernetes, which grew out of Borg, Google's internal container-management system. Borg runs hundreds of thousands of jobs from several thousand different applications across multiple clusters, each cluster (cell) containing tens of thousands of machines. It achieves high utilization through admission control, efficient task packing, over-commitment, and process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and with scheduling policies that reduce the probability of correlated failures. Its goal is to automate resource management and maximize resource utilization across multiple data centers. The Kubernetes project aims to distill the best parts of Borg so that today's developers can apply them more simply and directly. It takes Borg as its inspiration, but is less complex and less feature-complete, putting greater emphasis on modularity and comprehensibility.
Built on top of Docker, Kubernetes provides containerized applications with a rich set of capabilities such as resource scheduling, deployment and execution, service discovery, and scaling in and out. Shortly after the project went public, companies including Microsoft, IBM, VMware, Docker, CoreOS, and SaltStack joined the Kubernetes community and contributed to its development.
Key features
- Powerful container orchestration
- Lightweight
- Open and open source
Architecture
Core concepts
- pod
A pod is a group of containers that together provide one function; all containers in a pod run on the same host and share the same network namespace, IP address, and ports. The pod is the smallest unit that k8s creates, manages, and schedules (see the sketch after this list).
- replication controller
A replication controller is used to manage the number of pod replicas.
- service
A service is an abstraction of the real application service: it defines a logical set of pods and the policy used to access that set.
- label
A label is a key/value pair used to distinguish pods, services, and replication controllers.
- node
A node is the operational unit of k8s to which pods are assigned for their containers to run; a node can be thought of as a pod's host machine.
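To make these concepts concrete, here is a minimal sketch of a pod manifest that carries a label, applied with kubectl; the name myapp and the label app=myapp are hypothetical examples, not taken from the original:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: myapp          # pod: the smallest unit k8s creates, manages and schedules
  labels:
    app: myapp         # label: a k/v pair that services and controllers select on
spec:
  containers:
  - name: web
    image: nginx       # all containers in this pod share one network namespace/IP
    ports:
    - containerPort: 80
EOF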
Installing k8s
- Make sure Docker is installed
yum install docker-ce -y
- Configure the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://w3ok45be.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
- Make sure the cgroup driver is systemd
Cgroup Driver: systemd
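The line above is the relevant field from docker info; a quick way to check it on the host (standard Docker CLI, nothing assumed):
docker info 2>/dev/null | grep -i 'cgroup driver'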
- Make sure kernel bridge filtering and IP forwarding are enabled
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
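If applying these settings fails with "No such file or directory" for the bridge keys, the br_netfilter kernel module is probably not loaded; loading it first is a common extra step (not shown in the original):
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # make it persistent across reboots
sysctl --system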
- Disable the swap partition
[root@server1 ~]# swapoff -a
[root@server1 ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sun Sep 13 22:26:48 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0
UUID=79e61c2e-5981-4230-9f0d-350ab73c9132 /boot xfs defaults 0 0
#/dev/mapper/rhel-swap swap swap defaults 0 0
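To confirm that swap is really off (plain Linux checks, nothing k8s-specific):
free -m | grep -i swap      # the Swap line should show 0 total
cat /proc/swaps             # no entries below the header means no active swap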
- Configure the Aliyun Kubernetes yum repository
[root@server1 yum.repos.d]# cat kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
[root@server1 yum.repos.d]# yum repolist
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
repo id repo name status
addons-ha HA 51
addons-rs RS 56
base Base 5,152
docker docker 15
kubernetes Kubernetes 570
repolist: 5,844
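- Install kubelet, kubeadm, and kubectl
The original skips this step, but the packages have to be installed from the repository above before kubelet can be enabled; no version is pinned here, on the assumption that the repo's current packages match the images pulled later:
yum install -y kubelet kubeadm kubectl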
- Enable kubelet at boot
systemctl enable --now kubelet.service
- Change the default image repository
[root@server1 ~]# kubeadm config print init-defaults
W0926 12:12:02.039501 4502 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.2.3.4
bindPort: 6443
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: server1
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io ## this address is not reachable from mainland China
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
dnsDomain: cluster.local
serviceSubnet: 10.96.0.0/12
scheduler: {}
Set the Aliyun image repository:
[root@server1 ~]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
W0926 12:15:36.939458 4694 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.2
registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.2
registry.aliyuncs.com/google_containers/kube-proxy:v1.19.2
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0
- Pull the images
[root@server1 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
W0926 12:21:22.603616 5015 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.19.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0
[root@server1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.19.2 d373dd5a8593 9 days ago 118MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.19.2 8603821e1a7a 9 days ago 111MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.19.2 607331163122 9 days ago 119MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.19.2 2f32d66b884f 9 days ago 45.7MB
registry.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 4 weeks ago 253MB
registry.aliyuncs.com/google_containers/coredns 1.7.0 bfe3a36ebd25 3 months ago 45.2MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 7 months ago 683kB
- Initialize the cluster
[root@server1 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers
W1002 13:59:34.974256 3865 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local server1] and IPs [10.96.0.1 172.25.254.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost server1] and IPs [172.25.254.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost server1] and IPs [172.25.254.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.149819 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node server1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node server1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ve5fr8.9vnxgo974jzkdrm5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.25.254.101:6443 --token ve5fr8.9vnxgo974jzkdrm5 \
--discovery-token-ca-cert-hash sha256:fd77d0843aee59ff94ed8a1151e107ba5cb293d896a12b42053190ba1ef1508d
- Create a k8s user, grant it sudo privileges, and set up kubectl access
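The original does not show the user-creation commands themselves; one minimal sketch (the username k8s matches the prompts below, the sudoers drop-in file is my own choice):
useradd k8s
echo 'k8s ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/k8s
su - k8s
Then, as that user, copy the admin kubeconfig: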
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The pods are not all up yet:
[k8s@server1 ~]$ kubectl get node
NAME STATUS ROLES AGE VERSION
server1 NotReady control-plane,master 13m v1.20.0
[k8s@server1 ~]$ kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f89b7bc75-ftrsm 0/1 Pending 0 12m
kube-system coredns-7f89b7bc75-r4b2c 0/1 Pending 0 12m
kube-system etcd-server1 1/1 Running 0 13m
kube-system kube-apiserver-server1 1/1 Running 0 13m
kube-system kube-controller-manager-server1 1/1 Running 0 13m
kube-system kube-proxy-gsz6v 1/1 Running 0 12m
kube-system kube-scheduler-server1 1/1 Running 0 13m
- A network add-on now needs to be installed
[k8s@server1 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
If the URL above cannot be reached, you can use https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml instead.
Check the pods again:
[k8s@server1 ~]$ kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f89b7bc75-ftrsm 1/1 Running 0 61m
kube-system coredns-7f89b7bc75-r4b2c 1/1 Running 0 61m
kube-system etcd-server1 1/1 Running 0 61m
kube-system kube-apiserver-server1 1/1 Running 0 61m
kube-system kube-controller-manager-server1 1/1 Running 0 61m
kube-system kube-flannel-ds-8xkct 1/1 Running 0 2m
kube-system kube-proxy-gsz6v 1/1 Running 0 61m
kube-system kube-scheduler-server1 1/1 Running 0 61m
Once all pods are Running, run the following on each worker node:
kubeadm join 172.25.254.101:6443 --token o1m5s1.rqqxzf2xnauri9p5 --discovery-token-ca-cert-hash sha256:d731f405efc7c71e8507b902d09200a20427371b8d493a980178a2e02eb4b4a9
Note:
The token can be listed with kubeadm token list:
[k8s@server1 ~]$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
o1m5s1.rqqxzf2xnauri9p5 23h 2020-12-13T23:10:20+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
The hash can be obtained with openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin -outform DER 2>/dev/null | sha256sum | cut -d' ' -f1:
[k8s@server1 ~]$ openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin -outform DER 2>/dev/null | sha256sum | cut -d' ' -f1
d731f405efc7c71e8507b902d09200a20427371b8d493a980178a2e02eb4b4a9
After a short wait, once the nodes have pulled the required images, check on the master: kubectl get nodes should show every node as Ready.
Pod management
Configure command completion:
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
Deploy nginx:
[k8s@server1 ~]$ kubectl
annotate attach cluster-info cp describe exec help options proxy scale uncordon
api-resources auth completion create diff explain kustomize patch replace set version
api-versions autoscale config debug drain expose label plugin rollout taint wait
apply certificate cordon delete edit get logs port-forward run top
[k8s@server1 ~]$ kubectl run nginx --image=nginx
pod/nginx created
[k8s@server1 ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 24s
[k8s@server1 ~]$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 0/1 ContainerCreating 0 50s <none> server2 <none> <none>
[k8s@server1 ~]$ kubectl describe pod nginx
Name: nginx
Namespace: default
Priority: 0
Node: server2/172.25.254.102
Start Time: Sun, 13 Dec 2020 13:29:08 +0800
Labels: run=nginx
Annotations: <none>
Status: Running
IP: 10.244.1.2
IPs:
IP: 10.244.1.2
Containers:
nginx:
Container ID: docker://de5d0a85d0b1a2bca6481f25810d890b16df723b6145b73594794d9b88c8d391
Image: nginx
Image ID: docker-pullable://nginx@sha256:31de7d2fd0e751685e57339d2b4a4aa175aea922e592d36a7078d72db0a45639
Port: <none>
Host Port: <none>
State: Running
Started: Sun, 13 Dec 2020 13:29:58 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4dd4l (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-4dd4l:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4dd4l
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 52s default-scheduler Successfully assigned default/nginx to server2
Normal Pulling 49s kubelet Pulling image "nginx"
Normal Pulled 2s kubelet Successfully pulled image "nginx" in 47.108084363s
Normal Created 2s kubelet Created container nginx
Normal Started 1s kubelet Started container nginx
By default the image is downloaded from the public registry and the pod is deployed on server2.
When the image already exists locally, the pull behavior can be controlled with --image-pull-policy:
[k8s@server1 ~]$ kubectl run nginx --image=nginx --image-pull-policy=Never
pod/nginx created
You can see that the image was not downloaded again.
In practice we usually set up a private registry so that pod deployment is faster; here a Harbor registry is deployed on server4.
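For the nodes to pull from that Harbor instance, Docker on every node needs to know about it; a minimal sketch is to add it as a registry mirror in /etc/docker/daemon.json while keeping the keys configured earlier (the hostname reg.server4.example is hypothetical, and using Harbor as a pull-through mirror rather than a renamed image prefix is an assumption, since the original shows no Harbor configuration):
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://reg.server4.example"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF
systemctl restart docker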
Test it:
[k8s@server1 ~]$ kubectl create deployment nginx --image=nginx -r 2
deployment.apps/nginx created
[k8s@server1 ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-6799fc88d8-tzs7d 1/1 Running 0 38s
nginx-6799fc88d8-xrvvn 1/1 Running 0 38s
Common kubectl usage
- Scaling up and down
[k8s@server1 ~]$ kubectl scale deployment nginx --replicas=6
deployment.apps/nginx scaled
[k8s@server1 ~]$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-6799fc88d8-blcjh 1/1 Running 0 12s
nginx-6799fc88d8-qr5h9 1/1 Running 0 12s
nginx-6799fc88d8-tzs7d 1/1 Running 0 15m
nginx-6799fc88d8-vxz6c 1/1 Running 0 12s
nginx-6799fc88d8-xrvvn 1/1 Running 0 15m
nginx-6799fc88d8-z596g 1/1 Running 0 12s
- Update and rollback
The registry holds two versions of the image, 1.19 and 1.18.
[k8s@server1 ~]$ kubectl set image deployment nginx nginx=nginx:1.18.0 --record
deployment.apps/nginx image updated
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m40s default-scheduler Successfully assigned default/nginx-5bcdcbf444-x4dwc to server2
Normal Pulling 3m38s kubelet Pulling image "nginx:1.18.0"
Normal Pulled 3m29s kubelet Successfully pulled image "nginx:1.18.0" in 8.436729221s
Normal Created 3m29s kubelet Created container nginx
Normal Started 3m28s kubelet Started container nginx
- View rollout history
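The original leaves out the command for this step; the standard subcommand is:
kubectl rollout history deployment nginx
Because --record was used above, each revision lists the command that produced it.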
- Roll back to a previous revision
[k8s@server1 ~]$ kubectl rollout undo deployment nginx --to-revision=1
deployment.apps/nginx rolled back
- External access
Every time a machine joins the cluster it is assigned a subnet, and a cni0 bridge is created on it for communication between the containers it hosts:
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether ce:c2:2f:db:81:c0 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.1/24 brd 10.244.1.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::ccc2:2fff:fedb:81c0/64 scope link
valid_lft forever preferred_lft forever
- ClusterIP
Only reachable from inside the cluster:
[k8s@server1 ~]$ kubectl expose deployment nginx --port=80
service/nginx exposed
[k8s@server1 ~]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18h
nginx ClusterIP 10.107.225.39 <none> 80/TCP 19s
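A quick check from one of the cluster nodes (the ClusterIP 10.107.225.39 comes from the output above and is only routable inside the cluster):
curl -s 10.107.225.39 | grep -i title    # should print the nginx welcome page title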
- NodePort
Exposing the port with the NodePort type makes the service reachable from outside the cluster.
When the svc already exists, just edit it:
[k8s@server1 ~]$ kubectl edit svc nginx
service/nginx edited
Or set the type when creating the svc:
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
[k8s@server1 ~]$ kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP Families: <none>
IP: 10.107.225.39
IPs: 10.107.225.39
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31530/TCP
Endpoints: 10.244.1.10:80,10.244.1.9:80,10.244.3.7:80 + 1 more...
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Access it:
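With External Traffic Policy set to Cluster, the service can be reached on any node's address at the NodePort; both values below come from the output above:
curl http://172.25.254.101:31530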