Zookeeper Distributed Locks

1. Distributed Locks

When we build a single-machine application and need synchronization, we use synchronized or Lock to manage data shared between threads. All of those threads run inside one JVM, so there is no problem.
But when the application runs as a distributed cluster, it works across multiple JVMs, and in-process thread locks can no longer synchronize anything across JVM boundaries.
At that point we need a higher-level locking mechanism to synchronize data between processes running on different machines.

2. How the Distributed Lock Works

Core idea: to acquire the lock, a client creates a znode; to release it, the client deletes that znode.

1) To acquire the lock, the client creates an ephemeral sequential node under the lock node (any agreed-upon path will do; /lock is used below).

2) The client then fetches all children of /lock. If its own node has the smallest sequence number among them, the client considers itself the holder of the lock; when it is done, it deletes its node to release the lock.

3) If its node is not the smallest child of /lock, the client has not yet acquired the lock. It finds the child with the next-lower sequence number and registers a watcher on that node for the deletion event.

4) When the watched node is deleted, the client's Watcher receives the notification. The client then re-checks whether its own node is now the smallest child of /lock (a node smaller than the one it watched may still exist). If it is, the client holds the lock; if not, it repeats step 3, again finding the next-smaller node and registering a watcher on it. A minimal sketch of these four steps against the raw ZooKeeper API follows below.
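For illustration, here is a minimal sketch of the recipe using the raw ZooKeeper client. The class name NaiveZkLock and the node prefix seq- are made up for this example, the parent node /lock is assumed to already exist, and session expiry, connection loss, and retry handling (everything Curator takes care of in the next section) are deliberately left out:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

// Sketch only: assumes /lock already exists; no error recovery.
public class NaiveZkLock {
    private final ZooKeeper zk;
    private String ourPath; // full path of the node this client created

    public NaiveZkLock(ZooKeeper zk) {
        this.zk = zk;
    }

    public void lock() throws Exception {
        // step 1: create an ephemeral sequential node under /lock
        ourPath = zk.create("/lock/seq-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        String ourName = ourPath.substring("/lock/".length());
        while (true) {
            // step 2: list all children; the smallest sequence number wins
            List<String> children = zk.getChildren("/lock", false);
            Collections.sort(children);
            if (ourName.equals(children.get(0))) {
                return; // our node is the smallest: we hold the lock
            }
            // step 3: watch the node just before ours for its deletion
            String prev = children.get(children.indexOf(ourName) - 1);
            CountDownLatch deleted = new CountDownLatch(1);
            boolean stillThere = zk.exists("/lock/" + prev, event -> {
                if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                    deleted.countDown();
                }
            }) != null;
            if (stillThere) {
                deleted.await(); // step 4: woken by NodeDeleted, then re-check
            }
        }
    }

    public void unlock() throws Exception {
        zk.delete(ourPath, -1); // releasing the lock = deleting our node
    }
}

Because each waiter watches only its immediate predecessor (rather than the whole /lock directory), a release wakes up exactly one client, avoiding a thundering herd.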

3. Implementing the Distributed Lock with Curator

Maven dependencies:

<dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>

        <dependency>
            <groupId>org.apache.curator</groupId>
            <artifactId>curator-framework</artifactId>
            <version>2.13.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.curator</groupId>
            <artifactId>curator-recipes</artifactId>
            <version>2.13.0</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.21</version>
            <scope>test</scope>
        </dependency>

        <!-- https://mvnrepository.com/artifact/log4j/log4j -->
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
    </dependencies>
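One compatibility note: the Curator 2.x line used here targets ZooKeeper 3.4.x servers; if your server runs ZooKeeper 3.5 or later, a Curator 4.x/5.x version is generally the appropriate choice.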

Code:

public class TestLock {
    public static void main(String[] args) {
        // two threads share one Runnable, simulating two ticket agencies
        // competing for the same inventory
        Ticket12306 ticket = new Ticket12306();
        new Thread(ticket,"携程").start();
        new Thread(ticket,"飞猪").start();
    }
}
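Note that the demo simulates two competing "processes" with two threads that share a single Ticket12306 instance, and therefore a single CuratorFramework client. In a real deployment each process would create its own client; they would still contend through the same /lock path on the ZooKeeper server.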


import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

import java.util.concurrent.TimeUnit;

public class Ticket12306 implements Runnable {
    private int ticket = 10; // shared inventory
    private InterProcessMutex lock;

    public Ticket12306(){
        // retry policy: initial sleep 1000 ms, at most 10 retries
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000,10);
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "192.168.42.134:2181", // ZooKeeper server address
                60 * 1000,             // session timeout (ms)
                15 * 1000,             // connection timeout (ms)
                retryPolicy
        );
        client.start();
        // all clients contend for the same lock path
        lock = new InterProcessMutex(client,"/lock");
    }

    public void run() {
        while(true){
            boolean acquired = false;
            try {
                // acquire(timeout, unit) returns false if the lock could
                // not be obtained within 3 seconds
                acquired = lock.acquire(3, TimeUnit.SECONDS);
                if(acquired){
                    if(ticket > 0){
                        System.out.println(Thread.currentThread().getName()+": "+ticket--);
                    }
                    Thread.sleep(500);
                }
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                if(acquired){
                    try {
                        // only release a lock we actually hold; otherwise
                        // release() throws IllegalMonitorStateException
                        lock.release();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
}
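Two points about InterProcessMutex are worth noting: acquire(long, TimeUnit) returns false instead of throwing when the lock cannot be obtained in time, and release() throws an exception if the calling thread does not actually hold the lock. The run() method above therefore remembers whether the acquire succeeded and only releases in that case.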

Sample output. The DEBUG log shows the recipe at work: the client creates its ephemeral sequential node under /lock, lists the children, reads and watches the next-lower node, and proceeds once the NodeDeleted event arrives:

飞猪: 3
2021-12-18 16:50:22,788 13657  [68.42.134:2181)] DEBUG rg.apache.zookeeper.ClientCnxn  - Reading reply sessionid:0x400000237160001, packet:: clientPath:null serverPath:null finished:false header:: 38,1  replyHeader:: 38,51539607621,0  request:: '/lock/_c_a4d5bb09-3655-40b3-8f3e-b8d4616c5c7c-lock-,#3139322e3136382e37332e31,v{s{31,s{'world,'anyone}}},3  response:: '/lock/_c_a4d5bb09-3655-40b3-8f3e-b8d4616c5c7c-lock-0000000031 
2021-12-18 16:50:22,793 13662  [68.42.134:2181)] DEBUG rg.apache.zookeeper.ClientCnxn  - Reading reply sessionid:0x400000237160001, packet:: clientPath:null serverPath:null finished:false header:: 39,12  replyHeader:: 39,51539607621,0  request:: '/lock,F  response:: v{'_c_8231f736-89fb-4134-9d53-06bac741496d-lock-0000000030,'_c_a4d5bb09-3655-40b3-8f3e-b8d4616c5c7c-lock-0000000031},s{51539607558,51539607558,1639817196743,1639817196743,0,62,0,0,0,2,51539607621} 
2021-12-18 16:50:22,797 13666  [68.42.134:2181)] DEBUG rg.apache.zookeeper.ClientCnxn  - Reading reply sessionid:0x400000237160001, packet:: clientPath:null serverPath:null finished:false header:: 40,4  replyHeader:: 40,51539607621,0  request:: '/lock/_c_8231f736-89fb-4134-9d53-06bac741496d-lock-0000000030,T  response:: #3139322e3136382e37332e31,s{51539607619,51539607619,1639817422951,1639817422951,0,0,0,288230385665835009,12,0,51539607619} 
2021-12-18 16:50:23,290 14159  [68.42.134:2181)] DEBUG rg.apache.zookeeper.ClientCnxn  - Got notification sessionid:0x400000237160001
2021-12-18 16:50:23,290 14159  [68.42.134:2181)] DEBUG rg.apache.zookeeper.ClientCnxn  - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/lock/_c_8231f736-89fb-4134-9d53-06bac741496d-lock-0000000030 for sessionid 0x400000237160001
2021-12-18 16:50:23,290 14159  [68.42.134:2181)] DEBUG rg.apache.zookeeper.ClientCnxn  - Reading reply sessionid:0x400000237160001, packet:: clientPath:null serverPath:null finished:false header:: 41,2  replyHeader:: 41,51539607622,0  request:: '/lock/_c_8231f736-89fb-4134-9d53-06bac741496d-lock-0000000030,-1  response:: null
2021-12-18 16:50:23,292 14161  [68.42.134:2181)] DEBUG rg.apache.zookeeper.ClientCnxn  - Reading reply sessionid:0x400000237160001, packet:: clientPath:null serverPath:null finished:false header:: 42,12  replyHeader:: 42,51539607622,0  request:: '/lock,F  response:: v{'_c_a4d5bb09-3655-40b3-8f3e-b8d4616c5c7c-lock-0000000031},s{51539607558,51539607558,1639817196743,1639817196743,0,63,0,0,0,1,51539607622} 
携程: 2
....

On the ZooKeeper CLI, the ephemeral sequential lock nodes are visible while the clients are contending, and disappear once the lock is released:

[zk: localhost:2181(CONNECTED) 8] ls /lock
[_c_0d6114b0-f2e3-4dc6-a69c-29083d67cf04-lock-0000000082, _c_6fa35ef0-ecb6-4bcd-bbf2-0ea04994f6a0-lock-0000000081]
[zk: localhost:2181(CONNECTED) 9] ls /lock
[]