Implementing a Distributed Queue with ZooKeeper

I. Background

  Sometimes several teams have to complete a task together: team A hands the results computed on its Hadoop cluster to team B for further processing, and B in turn passes its output on to team C, link by link until the last step finishes, much like a workflow in a business system. In business systems we usually solve this with an SOA architecture: each team deploys its service on an ESB (Enterprise Service Bus) server, and message-oriented middleware coordinates the tasks. Cooperation among multiple distributed Hadoop clusters can use the same architecture; we only need to swap the middleware engine for one that supports distributed deployment.

  In this post I will use ZooKeeper as distributed message middleware to build part of the data-computation model of a large supermarket chain, fulfilling the business requirement of computing each region's profit.

  Because purchasing and sales are developed and maintained by different software vendors, and the business spans different cities and regions, the month-end settlement workload is very heavy. Take the profit statement as an example (my calculation here is deliberately rough): monthly profit = monthly sales − monthly purchases − monthly other expenses. If purchasing is one stand-alone system, sales is another, and there are dozens of other systems large and small besides, how do we get all of them to cooperate on this requirement?
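Once each system has produced its monthly total, the formula above is plain arithmetic. A minimal sketch (the class name and the figures are invented purely for illustration; the real totals come from the three systems):

```java
public class ProfitFormula {
    // monthly profit = monthly sales - monthly purchases - monthly other expenses
    static int profit(int sell, int purchase, int other) {
        return sell - purchase - other;
    }

    public static void main(String[] args) {
        // illustrative totals only
        System.out.println(profit(1_000_000, 800_000, 50_000)); // prints 150000
    }
}
```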

II. System Design

  I will build a distributed-queue application on top of ZooKeeper to meet the requirement above, dropping the ESB part and implementing everything with ZooKeeper alone.

  1.   Purchase data: massive, stored and analyzed on Hadoop (my environment is limited, so only a small data set is used)
  2.   Sales data: massive, stored and analyzed on Hadoop (likewise only a small data set)
  3.   Other expenses: a small amount of data, stored and analyzed in files or a database

  We design a synchronization queue with three condition nodes, corresponding to the three parts: purchase, sell and other. Once all three nodes have been created, the program automatically triggers the profit calculation and creates the profit node. The three nodes may be created in any order, but each node can be created only once.
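The znode queue behaves like a barrier: three parts arrive in any order, each at most once, and the third arrival fires the profit step. A stdlib-only sketch of those semantics (class and method names are my own; the real coordination is done through ZooKeeper later in this post):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class QueueBarrier {
    private final Set<String> done = ConcurrentHashMap.newKeySet();
    private final int size = 3; // purchase, sell, other

    /** Marks one part as finished; returns true once all three are in. */
    public boolean complete(String part) {
        if (!done.add(part)) {
            return false; // each node may be created only once
        }
        return done.size() >= size; // the third arrival triggers the profit step
    }

    public static void main(String[] args) {
        QueueBarrier q = new QueueBarrier();
        System.out.println(q.complete("purchase")); // false
        System.out.println(q.complete("sell"));     // false
        System.out.println(q.complete("other"));    // true -> compute profit
    }
}
```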

  (Figure: architecture of the distributed queue)

  Hadoop mapreduce1 and Hadoop mapreduce2 are two independent Hadoop cluster applications, the Java apps are two independent Java applications, and the ZooKeeper ensemble has three nodes.

  • /queue is the znode directory of the queue; we assume a queue length of 3.
  • /queue/purchase is queue member #1, submitted by Hadoop mapreduce1 to total the purchase amount.
  • /queue/sell is queue member #2, submitted by Hadoop mapreduce2 to total the sales amount.
  • /queue/other is queue member #3, submitted by the Java app to total the other expenses.
  • /queue/profit is created once the queue is full, triggering the profit calculation.

  When /queue/profit is created, the profit Java app starts; every ZooKeeper connection is notified (the red lines in the figure) that the queue is complete, and all programs exit.

III. Environment Setup

  1) A Hadoop cluster. I use a six-node Hadoop 2.7.3 cluster; build whatever fits your situation, but you need at least one pseudo-distributed node (see http://www.cnblogs.com/qq503665965/p/6790580.html).

  2) A ZooKeeper ensemble of at least three nodes. For installation, see my earlier post (http://www.cnblogs.com/qq503665965/p/6790580.html).

  3) A Java development environment.

IV. MapReduce and Java App Programs

  Calculating the purchase total:

package zkqueue;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

/**
 * Purchase amount calculation
 * @author Jon_China
 */
public class Purchase {

    public static final String HDFS = "hdfs://192.168.8.101:9000";
    public static final Pattern DELIMITER = Pattern.compile("[\t,]");

    public static class PurchaseMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private String month = "2017-01";
        private Text k = new Text(month);
        private IntWritable v = new IntWritable();
        private int money = 0;

        public void map(LongWritable key, Text values, Context context) throws IOException, InterruptedException {
            System.out.println(values.toString());
            String[] tokens = DELIMITER.split(values.toString()); // split the source record
            if (tokens[3].startsWith(month)) { // keep only January records
                money = Integer.parseInt(tokens[1]) * Integer.parseInt(tokens[2]); // unit price * quantity
                v.set(money);
                context.write(k, v);
            }
        }
    }

    public static class PurchaseReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable v = new IntWritable();
        private int money = 0;

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            for (IntWritable line : values) {
                money += line.get();
            }
            v.set(money);
            context.write(null, v);
            System.out.println("Output:" + key + "," + money);
        }
    }

    public static void run(Map<String, String> path) throws IOException, InterruptedException, ClassNotFoundException {
        JobConf conf = config();
        String local_data = path.get("purchase");
        String input = path.get("input");
        String output = path.get("output");

        // stage the local source data into HDFS
        HdfsDAO hdfs = new HdfsDAO(HDFS, conf);
        hdfs.rmr(input);
        hdfs.mkdirs(input);
        hdfs.copyFile(local_data, input);

        Job job = Job.getInstance(conf);
        job.setJarByClass(Purchase.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(PurchaseMapper.class);
        job.setReducerClass(PurchaseReducer.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(input));
        FileOutputFormat.setOutputPath(job, new Path(output));
        job.waitForCompletion(true);
    }

    public static JobConf config() {
        JobConf conf = new JobConf(Purchase.class);
        conf.setJobName("purchase");
        conf.addResource("classpath:/hadoop/core-site.xml");
        conf.addResource("classpath:/hadoop/hdfs-site.xml");
        conf.addResource("classpath:/hadoop/mapred-site.xml");
        conf.addResource("classpath:/hadoop/yarn-site.xml");
        return conf;
    }

    public static Map<String, String> path() {
        Map<String, String> path = new HashMap<String, String>();
        path.put("purchase", Purchase.class.getClassLoader().getResource("logfile/biz/purchase.csv").getPath()); // local source data
        path.put("input", HDFS + "/user/hdfs/biz/purchase");         // HDFS input path
        path.put("output", HDFS + "/user/hdfs/biz/purchase/output"); // HDFS output path
        return path;
    }

    public static void main(String[] args) throws Exception {
        run(path());
    }
}

  Calculating the sales total:

package zkqueue;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

/**
 * Sales data calculation
 * @author Jon_China
 */
public class Sell {

    public static final String HDFS = "hdfs://192.168.8.101:9000";
    public static final Pattern DELIMITER = Pattern.compile("[\t,]");

    public static class SellMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private String month = "2017-01";
        private Text k = new Text(month);
        private IntWritable v = new IntWritable();
        private int money = 0;

        public void map(LongWritable key, Text values, Context context) throws IOException, InterruptedException {
            System.out.println(values.toString());
            String[] tokens = DELIMITER.split(values.toString());
            if (tokens[3].startsWith(month)) { // January records only
                money = Integer.parseInt(tokens[1]) * Integer.parseInt(tokens[2]); // unit price * quantity
                v.set(money);
                context.write(k, v);
            }
        }
    }

    public static class SellReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable v = new IntWritable();
        private int money = 0;

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            for (IntWritable line : values) {
                money += line.get();
            }
            v.set(money);
            context.write(null, v);
            System.out.println("Output:" + key + "," + money);
        }
    }

    public static void run(Map<String, String> path) throws IOException, InterruptedException, ClassNotFoundException {
        JobConf conf = config();
        String local_data = path.get("sell");
        String input = path.get("input");
        String output = path.get("output");

        // stage the sell data into HDFS
        HdfsDAO hdfs = new HdfsDAO(HDFS, conf);
        hdfs.rmr(input);
        hdfs.mkdirs(input);
        hdfs.copyFile(local_data, input);

        Job job = Job.getInstance(conf);
        job.setJarByClass(Sell.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(SellMapper.class);
        job.setReducerClass(SellReducer.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(input));
        FileOutputFormat.setOutputPath(job, new Path(output));
        job.waitForCompletion(true);
    }

    public static JobConf config() { // remote configuration for the Hadoop cluster
        JobConf conf = new JobConf(Sell.class);
        conf.setJobName("sell");
        conf.addResource("classpath:/hadoop/core-site.xml");
        conf.addResource("classpath:/hadoop/hdfs-site.xml");
        conf.addResource("classpath:/hadoop/mapred-site.xml");
        conf.addResource("classpath:/hadoop/yarn-site.xml");
        return conf;
    }

    public static Map<String, String> path() {
        Map<String, String> path = new HashMap<String, String>();
        path.put("sell", Sell.class.getClassLoader().getResource("logfile/biz/sell.csv").getPath()); // local data file
        path.put("input", HDFS + "/user/hdfs/biz/sell");         // HDFS input directory
        path.put("output", HDFS + "/user/hdfs/biz/sell/output"); // output directory
        return path;
    }

    public static void main(String[] args) throws Exception {
        run(path());
    }
}

  Calculating other expenses:

package zkqueue;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.regex.Pattern;

public class Other {

    public static String file = "/logfile/biz/other.csv";
    public static final Pattern DELIMITER = Pattern.compile("[\t,]");
    private static String month = "2017-01";

    public static void main(String[] args) throws IOException {
        calcOther(file);
    }

    public static int calcOther(String file) throws IOException {
        int money = 0;
        BufferedReader br = new BufferedReader(new FileReader(new File(file)));
        String s = null;
        while ((s = br.readLine()) != null) {
            String[] tokens = DELIMITER.split(s);
            if (tokens[0].startsWith(month)) { // January records only
                money += Integer.parseInt(tokens[1]);
            }
        }
        br.close();
        System.out.println("Output:" + month + "," + money);
        return money;
    }
}
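The same month filter can be exercised without a file on disk by reading from an in-memory StringReader; a small sketch with invented figures:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.regex.Pattern;

public class OtherDemo {
    static final Pattern DELIMITER = Pattern.compile("[\t,]");

    /** Sums the amount column of rows whose date column starts with the given month. */
    static int calcOther(BufferedReader br, String month) throws IOException {
        int money = 0;
        String s;
        while ((s = br.readLine()) != null) {
            String[] tokens = DELIMITER.split(s);
            if (tokens[0].startsWith(month)) {
                money += Integer.parseInt(tokens[1]);
            }
        }
        return money;
    }

    public static void main(String[] args) throws IOException {
        // hypothetical rows: date, amount
        String csv = "2017-01-05,1200\n2017-01-20,800\n2016-12-31,999\n";
        System.out.println(calcOther(new BufferedReader(new StringReader(csv)), "2017-01")); // 2000
    }
}
```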

  Calculating the profit:

package zkqueue;

import java.io.IOException;

/**
 * Profit calculation
 * @author Jon_China
 */
public class Profit {

    public static void main(String[] args) throws Exception {
        profit();
    }

    public static void profit() throws Exception {
        int sell = getSell();
        int purchase = getPurchase();
        int other = getOther();
        int profit = sell - purchase - other;
        System.out.printf("profit = sell - purchase - other = %d - %d - %d = %d\n", sell, purchase, other, profit);
    }

    public static int getPurchase() throws Exception {
        HdfsDAO hdfs = new HdfsDAO(Purchase.HDFS, Purchase.config());
        return Integer.parseInt(hdfs.cat(Purchase.path().get("output") + "/part-r-00000").trim());
    }

    public static int getSell() throws Exception {
        HdfsDAO hdfs = new HdfsDAO(Sell.HDFS, Sell.config());
        return Integer.parseInt(hdfs.cat(Sell.path().get("output") + "/part-r-00000").trim());
    }

    public static int getOther() throws IOException {
        return Other.calcOther(Other.file);
    }
}

  ZooKeeper task scheduling:

package zkqueue;

import java.io.IOException;
import java.util.List;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

/**
 * Distributed queue scheduling with ZooKeeper
 * @author Jon_China
 */
public class QueueZookeeper {

    // znode paths that make up the queue
    final public static String QUEUE = "/queue";
    final public static String PROFIT = "/queue/profit";
    final public static String PURCHASE = "/queue/purchase";
    final public static String SELL = "/queue/sell";
    final public static String OTHER = "/queue/other";

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println("Please specify a task: 1 (purchase), 2 (sell) or 3 (other).");
        } else {
            doAction(Integer.parseInt(args[0]));
        }
    }

    public static void doAction(int client) throws Exception {
        // ZooKeeper server addresses
        String host1 = "192.168.8.104:2181";
        String host2 = "192.168.8.105:2181";
        String host3 = "192.168.8.106:2181";

        ZooKeeper zk = null;
        switch (client) { // 1, 2 and 3 enqueue different tasks
        case 1:
            zk = connection(host1);
            initQueue(zk);
            doPurchase(zk);
            break;
        case 2:
            zk = connection(host2);
            initQueue(zk);
            doSell(zk);
            break;
        case 3:
            zk = connection(host3);
            initQueue(zk);
            doOther(zk);
            break;
        }
    }

    // open a connection to a ZooKeeper server
    public static ZooKeeper connection(String host) throws IOException {
        ZooKeeper zk = new ZooKeeper(host, 60000, new Watcher() {
            // handle all triggered watch events
            public void process(WatchedEvent event) {
                if (event.getType() == Event.EventType.NodeCreated && event.getPath().equals(PROFIT)) {
                    System.out.println("Queue has Completed!!!");
                }
            }
        });
        return zk;
    }

    /**
     * Initialize the queue.
     * @param zk
     * @throws KeeperException
     * @throws InterruptedException
     */
    public static void initQueue(ZooKeeper zk) throws KeeperException, InterruptedException {
        System.out.println("WATCH => " + PROFIT);
        zk.exists(PROFIT, true); // set a watch on the profit node
        if (zk.exists(QUEUE, false) == null) {
            System.out.println("create " + QUEUE);
            zk.create(QUEUE, QUEUE.getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } else {
            System.out.println(QUEUE + " exists!");
        }
    }

    /**
     * Purchase task.
     * @param zk
     * @throws Exception
     */
    public static void doPurchase(ZooKeeper zk) throws Exception {
        if (zk.exists(PURCHASE, false) == null) {
            Purchase.run(Purchase.path());
            System.out.println("create " + PURCHASE);
            zk.create(PURCHASE, PURCHASE.getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } else {
            System.out.println(PURCHASE + " exists!");
        }
        isCompleted(zk);
    }

    /**
     * Sell task.
     * @param zk
     * @throws Exception
     */
    public static void doSell(ZooKeeper zk) throws Exception {
        if (zk.exists(SELL, false) == null) {
            Sell.run(Sell.path());
            System.out.println("create " + SELL);
            zk.create(SELL, SELL.getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } else {
            System.out.println(SELL + " exists!");
        }
        isCompleted(zk);
    }

    /**
     * Other-expenses task.
     * @param zk
     * @throws Exception
     */
    public static void doOther(ZooKeeper zk) throws Exception {
        if (zk.exists(OTHER, false) == null) {
            Other.calcOther(Other.file);
            System.out.println("create " + OTHER);
            zk.create(OTHER, OTHER.getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } else {
            System.out.println(OTHER + " exists!");
        }
        isCompleted(zk);
    }

    /**
     * Check whether the queue is full and, if so, trigger the profit calculation.
     * @param zk
     * @throws Exception
     */
    public static void isCompleted(ZooKeeper zk) throws Exception {
        int size = 3;
        List<String> children = zk.getChildren(QUEUE, true);
        int length = children.size();
        System.out.println("Queue Complete:" + length + "/" + size);
        if (length >= size) {
            System.out.println("create " + PROFIT);
            Profit.profit();
            zk.create(PROFIT, PROFIT.getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            for (String child : children) { // clear the queue nodes
                zk.delete(QUEUE + "/" + child, -1);
            }
        }
    }
}

V. Results

  After the last step, the other-expenses program, had run, the log showed that all three condition nodes were satisfied. The synchronized distributed queue then automatically launched the profit program, which logged a January 2017 profit of -6693765.


  Sample code: https://github.com/LJunChina/hadoop/tree/master/distributed_mq
