Installing a ZooKeeper cluster on Kubernetes and connecting it to a Spring Boot project

Installing the ZooKeeper cluster

Creating the persistent volumes

First, create three shared directories on the NFS server:

mkdir -p /data/share/pv/{zk01,zk02,zk03}

These correspond to the persistence directories of the three pods in the three-node ZooKeeper cluster. Once the directories exist, write a zk-pv.yaml that defines the PersistentVolumes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk01
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/share/pv/zk01
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk02
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/share/pv/zk02
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk03
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/share/pv/zk03
  persistentVolumeReclaimPolicy: Recycle
---
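
Note that these PVs use hostPath, which only behaves as expected when the directory exists on the node where each pod is scheduled. Since the directories were shared over NFS, an nfs volume source could be used instead; a minimal sketch for the first PV, with <nfs-server-ip> as a placeholder for your NFS server address:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk01
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: <nfs-server-ip>   # placeholder: replace with your NFS server address
    path: /data/share/pv/zk01
  persistentVolumeReclaimPolicy: Recycle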

Create the PVs with the following command:

kubectl create -f zk-pv.yaml

Then check that the PVs were created successfully:

kubectl get pv -o wide

Creating the ZooKeeper cluster

We deploy the three ZooKeeper nodes with a StatefulSet, using the PVs created above as the backing storage. Save the following manifest as zk.yaml:

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: test
  labels:
    app: zk
spec:
  selector:
    app: zk
  clusterIP: None
  ports:
  - name: server
    port: 2888
  - name: leader-election
    port: 3888
--- 
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: test
  labels:
    app: zk
spec:
  selector:
    app: zk
  type: NodePort
  ports:
  - name: client
    port: 2181
    nodePort: 32181 # must fall within the cluster's service-node-port-range (default 30000-32767)
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: test
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: test
spec:
  selector:
    matchLabels:
      app: zk # has to match .spec.template.metadata.labels
  serviceName: "zk-hs"
  replicas: 3 # by default is 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: zk
        imagePullPolicy: Always
        image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
        resources:
          requests:
            memory: "500Mi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
        --servers=3 \
        --data_dir=/var/lib/zookeeper/data \
        --data_log_dir=/var/lib/zookeeper/data/log \
        --conf_dir=/opt/zookeeper/conf \
        --client_port=2181 \
        --election_port=3888 \
        --server_port=2888 \
        --tick_time=2000 \
        --init_limit=10 \
        --sync_limit=5 \
        --heap=512M \
        --max_client_cnxns=60 \
        --snap_retain_count=3 \
        --purge_interval=12 \
        --max_session_timeout=40000 \
        --min_session_timeout=4000 \
        --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Deploy it with:

kubectl apply -f zk.yaml

Check the pod status:

kubectl get pods -n test
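
Each replica also gets a stable network identity through the headless service zk-hs, of the form zk-0.zk-hs.test.svc.cluster.local, which is what the ZooKeeper servers use to find each other. A quick way to confirm this, assuming the hostname utility is present in the image:

kubectl exec zk-0 -n test -- hostname -f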

Check the Service status:

kubectl get svc -n test
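
Because the StatefulSet declares volumeClaimTemplates, each replica gets its own PVC, named datadir-zk-0, datadir-zk-1 and datadir-zk-2, and each should be Bound to one of the PVs created earlier:

kubectl get pvc -n test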

Verifying that the ZooKeeper cluster is up

Check each node's role with the following command:

for i in 0 1 2; do kubectl exec zk-$i -n test -- zkServer.sh status; done

If you see two followers and one leader, the ZooKeeper cluster has been deployed successfully.
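
As an extra sanity check, data written through one node should be readable from another; a quick sketch, assuming zkCli.sh is on the PATH inside the image (it ships with the standard ZooKeeper distribution, alongside zkServer.sh):

kubectl exec zk-0 -n test -- zkCli.sh create /hello world
kubectl exec zk-1 -n test -- zkCli.sh get /hello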

Connecting a Spring Boot project to ZooKeeper

Add the dependencies to the project's pom.xml. The three BOMs are imported through dependencyManagement; the rest are regular dependencies:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>1.5.3.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-zookeeper-dependencies</artifactId>
            <version>1.1.1.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-netflix-dependencies</artifactId>
            <version>1.3.4.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>io.reactivex</groupId>
        <artifactId>rxjava</artifactId>
        <version>1.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-zookeeper-all</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <!-- Must match the installed ZooKeeper version; we installed 3.4.10 above -->
        <version>3.4.10</version>
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>

YAML configuration

bootstrap.yml

spring:
  application:
    name: demo-service
  cloud:
    zookeeper:
      connect-string: 192.168.253.30:2181,192.168.253.31:2181,192.168.253.32:2181
      discovery:
        enabled: true
        register: true

Enable via annotations

Add the @SpringBootApplication and @EnableDiscoveryClient annotations to the Application.java startup class:

package demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@EnableDiscoveryClient
@SpringBootApplication
public class Application{

    public static void main(String[] args) throws Exception {
        SpringApplication.run(Application.class, args);
    }

}
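
Once the application has started, Spring Cloud Zookeeper registers the instance in ZooKeeper, by default under the /services znode (controlled by spring.cloud.zookeeper.discovery.root). Registration can be confirmed from one of the ZooKeeper pods:

kubectl exec zk-0 -n test -- zkCli.sh ls /services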

DemoController

package demo;

import java.util.UUID;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping(value = "/api/demo")
public class DemoController {

    private static final UUID INSTANCE_UUID = UUID.randomUUID();

    @GetMapping(value = "/instance")
    public Object instance() {
        return INSTANCE_UUID.toString();
    }

}
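
To verify discovery from the client side, registered instances can be looked up through Spring Cloud's DiscoveryClient abstraction; a minimal sketch, where DiscoveryDemoController is a hypothetical addition and not part of the original project:

package demo;

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DiscoveryDemoController {

    // Backed by ZooKeeper via spring-cloud-starter-zookeeper-all
    @Autowired
    private DiscoveryClient discoveryClient;

    // Lists the instances that demo-service registered in ZooKeeper
    @GetMapping("/api/demo/instances")
    public List<ServiceInstance> instances() {
        return discoveryClient.getInstances("demo-service");
    }
}

Calling GET /api/demo/instances should return one entry per running instance of demo-service.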