2.1 Prepare the Spring Boot project
(1) Create the Spring Boot project (springboot-demo)
The steps for creating the Spring Boot project itself are omitted here; only the core test code is provided: pom.xml and K8SController.java.
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.2.0.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.gupao</groupId>
<artifactId>springboot-demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>springboot-demo</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
K8SController.java
package com.gupao.springbootdemo.controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
/**
* Test controller
*/
@RestController
public class K8SController {
@RequestMapping("/k8s")
public String k8s(){
return "<h1>Hello K8S ...</h1><br/><br/>测试成功!";
}
}
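For completeness, the application also needs a standard Spring Boot entry class, which is not listed above. A minimal sketch, assuming the package com.gupao.springbootdemo (taken from the controller's package) and the class name SpringbootDemoApplication that Spring Initializr would generate:
package com.gupao.springbootdemo;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
/**
 * Standard entry point; starts the embedded web server (port 8080 by default).
 */
@SpringBootApplication
public class SpringbootDemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(SpringbootDemoApplication.class, args);
    }
}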
(2) Start and test the project
Start the project and make sure the Spring Boot application itself works.
Access URL: http://localhost:8080/k8s
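A quick command-line smoke test (a sketch, assuming the application is running locally on the default port 8080) should return the string from K8SController:
# expected response: <h1>Hello K8S ...</h1><br/><br/>测试成功!
curl http://localhost:8080/k8s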
(3) Package the project locally (springboot-demo)
In the root directory of the springboot-demo project, run the following package command:
mvn clean package
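If the build succeeds, Maven writes the runnable jar used in the next step to the target directory; it can be verified with, for example:
# the jar name follows <artifactId>-<version>.jar from pom.xml
ls target/springboot-demo-0.0.1-SNAPSHOT.jar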
(4) Build an image from the project jar
Upload springboot-demo-0.0.1-SNAPSHOT.jar to the Linux host, build the image, and run the project container:
# (1) Create a directory and switch into it (optional)
[root@m ~]# mkdir -p /user/java/test/springboot-demo
[root@m ~]# cd /user/java/test/springboot-demo
# (2) Upload springboot-demo-0.0.1-SNAPSHOT.jar
[root@m springboot-demo]# ll
total 17140
-rw-r--r--. 1 root root 17547533 Jan 10 13:40 springboot-demo-0.0.1-SNAPSHOT.jar
(5) Create a Dockerfile for the project
[root@m springboot-demo]# vi Dockerfile
Contents:
FROM openjdk:8-jre-alpine
COPY springboot-demo-0.0.1-SNAPSHOT.jar /springboot-demo.jar
ENTRYPOINT ["java","-jar","/springboot-demo.jar"]
(6) Build the image from the Dockerfile
[root@m springboot-demo]# docker build -t springboot-demo-image .
(7) Create a container with docker run
[root@m springboot-demo]# docker run -d --name s1 springboot-demo-image
19bd517788ce64807a411e2e3b431c1d4154e6117934a743635c9a4d1783feef
[root@m springboot-demo]#
Note: to publish the container port to the host, run the container with the -p flag instead:
docker run -di --name=s1 -p 8080:8080 springboot-demo-image
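Before moving on, it is worth confirming that the container is running and that Spring Boot started cleanly, for example:
# list the running container named s1
[root@m springboot-demo]# docker ps --filter name=s1
# follow the application log (Ctrl+C to stop following)
[root@m springboot-demo]# docker logs -f s1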
(8) Access test
# (1) Inspect the project container's details (can be skipped)
[root@m springboot-demo]# docker inspect s1
# (2) Access test (192.168.116.170 is the master node IP)
#     Note: this works when the container was started with -p 8080:8080 as shown above; otherwise curl the container IP reported by docker inspect
[root@m springboot-demo]# curl 192.168.116.170:8080/k8s
(9) Push the image to the private registry
Note: if you do not have a private image registry yet, set one up first; see the earlier post on building the official Docker private registry (registry) and its configuration.
Important: every node must have the private registry configured in /etc/docker/daemon.json, otherwise pulling the image in the later steps will fail.
- (1) Add the following (192.168.116.170:5000 is the private registry address used in the rest of this section):
{"insecure-registries":["192.168.116.170:5000"]}
- (2) Then restart the Docker service:
[root@localhost java]# systemctl daemon-reload
[root@localhost java]# systemctl restart docker
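To confirm the setting took effect after the restart, the registry address should appear under "Insecure Registries" in the daemon information, for example:
[root@localhost java]# docker info | grep -A 3 "Insecure Registries"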
① Tag the image for the private registry
Use the docker tag command to tag the image so that it belongs to a repository in the registry.
[root@m springboot-demo]# docker tag springboot-demo-image:latest 192.168.116.170:5000/springboot-demo-image:v1.0
[root@m springboot-demo]#
② Push the image to the private registry
[root@m springboot-demo]# docker push 192.168.116.170:5000/springboot-demo-image:v1.0
The push refers to repository [192.168.116.170:5000/springboot-demo-image]
2f28b827deb4: Pushed
edd61588d126: Pushed
9b9b7f3d56a0: Pushed
f1b5933fe4b5: Pushed
v1.0: digest: sha256:ea7998365883f5ed9dedde32e85983aa47a848448827a3717b5cd4fd7329afc7 size: 1159
[root@m springboot-demo]#
③ View the private registry in a browser
Access URL: http://192.168.116.170:5000/v2/_catalog
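The catalog response is plain JSON; after the push above it should contain the repository (other repositories may appear as well), roughly:
{"repositories":["springboot-demo-image"]}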
(10) Create the nginx ingress controller
① Create and edit the mandatory.yaml file
The Ingress Nginx Controller Pod is created via a Deployment. To make it reachable from outside the cluster, either a NodePort Service or HostPort can be used; HostPort is chosen here, with the Pod pinned to worker01.
Create and edit the mandatory.yaml file:
[root@m test]# vi mandatory.yaml
Contents of mandatory.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
# wait up to five minutes for the drain of connections
terminationGracePeriodSeconds: 300
serviceAccountName: nginx-ingress-serviceaccount
hostNetwork: true
nodeSelector:
name: ingress
kubernetes.io/os: linux
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
---
Notes on the mandatory.yaml above:
- (1) To run with HostPort, the setting hostNetwork: true must be added (already configured above).
- (2) Search for nodeSelector, and make sure that ports 80 and 443 on the w1 (worker01) node are not already in use.
- (3) Be aware that pulling this image can take quite a long time.
② Label the worker01 node
On the master node, label the worker01 node so that the nginx-ingress-controller is scheduled onto w1:
[root@m test]# kubectl label node w1 name=ingress
node/w1 labeled
[root@m test]#
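The label can be double-checked from the master node, for example:
# the output should include the label name=ingress on w1
[root@m test]# kubectl get node w1 --show-labels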
③ Apply mandatory.yaml
[root@m test]# kubectl apply -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
[root@m test]#
④ Check all resources / the pod
Note: wait until all resources have been created; this can take a long time.
# (1) Check the pod in the ingress-nginx namespace (it is indeed scheduled on the worker01 node)
[root@m test]# kubectl get pod -o wide -n ingress-nginx
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-7c66dcdd6c-nttzp 1/1 Running 0 37m 192.168.116.171 w1 <none> <none>
[root@m test]#
# (2) Check all resources/objects in the namespace
[root@m test]# kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-controller-7c66dcdd6c-nttzp 1/1 Running 0 36m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress-controller 1/1 1 1 36m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-ingress-controller-7c66dcdd6c 1 1 1 36m
[root@m test]#
⑤ Check ports 80 and 443 on worker01
On the worker01 node, check the usage of ports 80 and 443 (nginx can be seen listening on them):
[root@w1 ~]# lsof -i tcp:80
[root@w1 ~]# lsof -i tcp:443
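If lsof is not installed on the worker, the same check can be done with ss (or netstat), for example:
# the listening sockets on 80/443 should belong to nginx processes
[root@w1 ~]# ss -ntlp | grep nginx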
(11) Write the Kubernetes manifest (springboot-demo.yaml, including the ingress rule)
[root@m springboot-demo]# vi springboot-demo.yaml
Contents of the yaml:
# Deploy the Pod via a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: springboot-demo
spec:
selector:
matchLabels:
app: springboot-demo
replicas: 1
template:
metadata:
labels:
app: springboot-demo
spec:
containers:
- name: springboot-demo
# The image from the private registry is used here
image: 192.168.116.170:5000/springboot-demo-image:v1.0
ports:
- containerPort: 8080
---
# Create a Service for the Pod
apiVersion: v1
kind: Service
metadata:
name: springboot-demo
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: springboot-demo
---
# Create the Ingress and define the access rule; make sure the nginx ingress controller has been created beforehand
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: springboot-demo
spec:
rules:
- host: k8s.demo.gper.club
http:
paths:
- path: /
backend:
serviceName: springboot-demo
servicePort: 80
(12) Create the pod, service, and ingress
Note: on each node it is worth manually pulling the project image first (this speeds things up and also verifies that the registry is reachable):
[root@m ~]# docker pull 192.168.116.170:5000/springboot-demo-image:v1.0
[root@m springboot-demo]# kubectl apply -f springboot-demo.yaml
deployment.apps/springboot-demo created
service/springboot-demo created
ingress.extensions/springboot-demo created
[root@m springboot-demo]#
(13) Inspect the pod and related resources
① Check the pod
[root@m springboot-demo]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
springboot-demo-76c9685f5-4d2m5 1/1 Running 0 7m32s 192.168.190.84 w1 <none> <none>
② Test access to the springboot-demo pod from inside the cluster
Note: 192.168.190.86 in the command is the IP of the springboot-demo pod (a pod IP, not a cluster node IP); use the pod IP reported by kubectl get pods -o wide, which changes whenever the pod is recreated.
[root@m springboot-demo]# curl 192.168.190.86:8080/k8s
<h1>Hello K8S ...</h1><br/><br/>测试成功!
[root@m springboot-demo]#
③ Check the service
[root@m springboot-demo]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d15h
springboot-demo ClusterIP 10.100.135.47 <none> 80/TCP 44m
tomcat-service ClusterIP 10.110.130.134 <none> 80/TCP 74m
[root@m springboot-demo]#
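The Service can also be exercised from inside the cluster through its ClusterIP (port 80 forwards to the pod's 8080). Using the ClusterIP shown above (it will differ per cluster):
[root@m springboot-demo]# curl 10.100.135.47/k8s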
(14) Scale-out test (5 replicas)
[root@m springboot-demo]# kubectl scale deploy springboot-demo --replicas=5
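The scale-out can then be verified by listing the pods and the deployment until all 5 replicas are ready, for example:
[root@m springboot-demo]# kubectl get pods -o wide
[root@m springboot-demo]# kubectl get deploy springboot-demo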
(15) Check the ingress
[root@m springboot-demo]# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
nginx-ingress tomcat.jack.com 80 66m
springboot-demo k8s.demo.gper.club 80 37m
[root@m springboot-demo]#
(16) External browser test (access via the domain configured in the ingress)
① Edit the hosts file to set up name resolution
Edit the Windows hosts file (C:\Windows\System32\drivers\etc\hosts) and add a DNS entry (see below).
Note: 192.168.116.171 is the worker01 node IP; other node IPs will not work, because the controller was pinned to the worker01 node when it was created.
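Based on the worker01 IP and the host configured in the ingress rule, the entry to add is:
192.168.116.171 k8s.demo.gper.club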
② Access from the browser
Access URL: http://k8s.demo.gper.club/k8s
(17) Clean up the test resources
# (1) Delete the springboot-demo pod, service, and ingress
[root@m test]# cd springboot-demo
[root@m springboot-demo]# kubectl delete -f springboot-demo.yaml
# (2) Delete the nginx ingress controller (optional; it can also be kept)
[root@m springboot-demo]# kubectl delete -f ../mandatory.yaml
# (3) Delete the yaml configuration files
[root@m springboot-demo]# rm springboot-demo.yaml ../mandatory.yaml