Flink DataStream
- DataStream concepts
5.1.1 The ExecutionEnvironment
- Ways to create an execution environment
Interacting with Flink requires an entry point, and that entry point is the ExecutionEnvironment. In the Stream API the execution environment is created through StreamExecutionEnvironment, which provides static methods for creating the various kinds of environments.
Any of these static methods creates an execution environment. The most commonly used is getExecutionEnvironment, which determines from the actual runtime context whether the job runs locally or on a cluster.
// Create a stream execution environment
StreamExecutionEnvironment.getExecutionEnvironment();
StreamExecutionEnvironment.getExecutionEnvironment(new Configuration());
// Create a local execution environment
StreamExecutionEnvironment.createLocalEnvironment(2, new Configuration());
// Create a local environment with the Web UI; requires the flink-runtime-web dependency
StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());
// Create a remote execution environment: jobs are submitted directly to the cluster at the given address; a jar must be provided
StreamExecutionEnvironment.createRemoteEnvironment("hadoop01", 6123);
- Remote execution (RemoteEnv)
Example:
// AddKeyMapFunction below is a user-defined class that is not on the Flink cluster's classpath by default.
// Package it into a jar first, then pass the jar path as the third argument when creating the environment;
// at runtime the jar is uploaded to Flink, and the job starts once it is loaded.
StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
        "hadoop01",
        8081,
        "com.example.flink.datastream/target/com.example.flink.datastream-0.0.1-SNAPSHOT.jar"
);
DataStreamSource<String> source = env.socketTextStream("hadoop01", 9999);
source.map(new AddKeyMapFunction()).print();
try {
    env.execute();
} catch (Exception e) {
    e.printStackTrace();
}
- Submitting a job asynchronously
Operations on a DataStream are lazily executed; nothing runs until execute() is called. execute() blocks the calling thread, but executeAsync() can be used to submit the job asynchronously, and the JobClient it returns can then be used to monitor the job status.
try {
    // With execute() the program would block here.
    // executeAsync() submits the job without blocking; the returned JobClient can then be used to monitor the job status.
    JobClient jobClient = env.executeAsync();
    ExecutorService executor = Executors.newSingleThreadExecutor();
    executor.execute(() -> {
        while (true) {
            try {
                JobStatus jobStatus = jobClient.getJobStatus().get();
                if (!JobStatus.RUNNING.equals(jobStatus)) {
                    break;
                } else {
                    TimeUnit.SECONDS.sleep(1);
                    System.out.println("Job status: " + jobStatus.name());
                }
            } catch (InterruptedException | ExecutionException e) {
                e.printStackTrace();
            }
        }
    });
} catch (Exception e) {
    e.printStackTrace();
}
5.1.2 What is a DataStream?
The DataStream API gets its name from the special DataStream class that is used to represent collections of data in a Flink program. You can think of them as immutable collections of data that may contain duplicates. The data can be finite or unbounded; the API used to process it is the same either way.
In usage, a DataStream is similar to a regular Java Collection, but it differs in some key ways. It is immutable: once created, you cannot add or remove elements. You also cannot simply inspect the elements inside; you can only process them with DataStream API operations, also called transformations.
In short, a DataStream is a large dataset much like a Collection, except that the data inside cannot be modified, only transformed through the API it provides.
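A minimal sketch of that idea (using the built-in fromElements source; the values are illustrative): a transformation never modifies the stream it is called on, it derives a new one.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// A bounded DataStream built from a handful of elements
DataStream<Integer> numbers = env.fromElements(1, 2, 3);
// map does not change 'numbers'; it returns a new, derived DataStream
DataStream<Integer> doubled = numbers.map(new MapFunction<Integer, Integer>() {
    @Override
    public Integer map(Integer value) {
        return value * 2;
    }
});
doubled.print();
env.execute();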
- DataStream API operations
5.2.1 Connectors
Some fairly basic sources and sinks are built into Flink. The predefined data sources support reading from files, directories, and sockets, as well as from collections and iterators. The predefined data sinks support writing to files, standard output (stdout), standard error (stderr), and sockets.
The following connectors had been developed by the Flink community as of version 1.12; source means input, sink means output.
- Apache Kafka (source/sink)
- Apache Cassandra (sink)
- Amazon Kinesis Streams (source/sink)
- Elasticsearch (sink)
- FileSystem (including Hadoop) - streaming only (sink)
- FileSystem (including Hadoop) - unified streaming and batch (sink)
- RabbitMQ (source/sink)
- Apache NiFi (source/sink)
- Twitter Streaming API (source)
- Google PubSub (source/sink)
- JDBC (sink)
- Source
- Reading from built-in data sources
All of the APIs above read from built-in data sources; the common ones are shown below.
- Reading a text file
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStreamSource<String> source = env.readTextFile("data/streaming/AFINN-111.txt");
source.print();
env.execute();
- Reading from a socket
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStreamSource<String> source = env.socketTextStream("hadoop01", 9999);
source.print();
env.execute();
- Reading from Kafka
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
String topic = "sensor";
Properties consumerConfig = new Properties();
consumerConfig.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop01:9092");
consumerConfig.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
consumerConfig.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
consumerConfig.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
consumerConfig.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "flink_consumer");
DataStreamSource<String> source = env.addSource(new FlinkKafkaConsumer<String>(topic, new SimpleStringSchema(), consumerConfig));
source.print();
env.execute();
- Custom Source
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    String topic = "sensor";
    Properties consumerConfig = new Properties();
    consumerConfig.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop01:9092");
    consumerConfig.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    consumerConfig.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    consumerConfig.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
    consumerConfig.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "flink_consumer");
    DataStreamSource<String> source = env.addSource(new MySource(topic, consumerConfig));
    source.print();
    env.execute();
}

/**
 * A custom Source that consumes data from Kafka.
 */
static class MySource implements SourceFunction<String> {
    private final String topic;
    private final Properties config;
    private volatile boolean run = true;

    public MySource(String topic, Properties config) {
        this.topic = topic;
        this.config = config;
    }

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config);
        consumer.subscribe(Collections.singletonList(this.topic));
        while (run) {
            ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                ctx.collect(consumerRecord.value());
            }
        }
    }

    @Override
    public void cancel() {
        this.run = false;
    }
}
- Sink
- Kafka Sink
// Receive data from a socket and send it to Kafka
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStreamSource<String> source = env.socketTextStream("hadoop01", 9999);
String brokerList = "hadoop01:9092";
String sendTopic = "sensor";
source.addSink(new FlinkKafkaProducer<String>(brokerList, sendTopic, new SimpleStringSchema()));
env.execute();
- JDBC Sink
// Receive data from a socket and write it to MySQL
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStreamSource<String> source = env.socketTextStream("hadoop01", 9999);
String writeSql = "INSERT INTO tbl2(value) VALUES(?)";
source.addSink(JdbcSink.sink(
        writeSql,
        (JdbcStatementBuilder<String>) (preparedStatement, s) -> {
            preparedStatement.setObject(1, s);
        },
        // JDBC writes in batches; lower the batch size here to see the effect immediately
        new JdbcExecutionOptions.Builder().withBatchSize(1).build(),
        new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                .withDriverName("com.mysql.jdbc.Driver")
                .withUsername("root")
                .withPassword("")
                .withUrl("jdbc:mysql:///test")
                .build()
));
env.execute();
Operators
- Transform: data stream transformations
- Map & FlatMap
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
DataStreamSource<String> source = env.socketTextStream("hadoop01", 9999);
// Prefix each received message before printing it
SingleOutputStreamOperator<String> result = source.map(new MapFunction<String, String>() {
    private static final String PREFIX = "input message is : ";

    @Override
    public String map(String value) throws Exception {
        return PREFIX.concat(value);
    }
});
result.print("map result:");
// Split each message into words and emit them one by one
SingleOutputStreamOperator<String> flatMapResult = source.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public void flatMap(String value, Collector<String> out) throws Exception {
        for (String word : value.split(" ")) {
            out.collect(word);
        }
    }
});
flatMapResult.print("flatmap result:");
env.execute();
- Filter
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
DataStreamSource<String> source = env.socketTextStream("hadoop01", 9999);
// Filter the received messages
SingleOutputStreamOperator<String> result = source.filter(new FilterFunction<String>() {
    @Override
    public boolean filter(String value) throws Exception {
        // Drop any message whose length exceeds 10
        return StringUtils.length(value) <= 10;
    }
});
result.print("messages no longer than 10:");
env.execute();
- Union
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
DataStreamSource<String> hadoop01Stream = env.socketTextStream("hadoop01", 9999);
DataStreamSource<String> hadoop02Stream = env.socketTextStream("hadoop02", 9999);
// Union two DataStreams of the same element type so they are processed together
DataStream<String> unionResult = hadoop01Stream.union(hadoop02Stream);
unionResult.print("union stream:");
env.execute();
- Connect
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
DataStreamSource<String> hadoop01Stream = env.socketTextStream("hadoop01", 9999);
DataStream<Integer> hadoop02Stream = env.socketTextStream("hadoop02", 9999).map(Integer::parseInt);
// Connect the two streams; their element types do not need to match
ConnectedStreams<String, Integer> connect = hadoop01Stream.connect(hadoop02Stream);
// Subsequent operations handle the two streams in two separate methods (the Co* functions) and merge them into one stream
SingleOutputStreamOperator<String> result = connect.map(new CoMapFunction<String, Integer, String>() {
    @Override
    public String map1(String value) throws Exception {
        return value;
    }

    @Override
    public String map2(Integer value) throws Exception {
        return value.toString();
    }
});
result.print("connect stream : ");
env.execute();
- KeyBy
The keyBy operator groups the stream and returns a KeyedStream. The Reduce and Aggregation operators described below can only be applied after this kind of grouping.
// Group the source stream for a WordCount that counts word occurrences
KeyedStream<Tuple2<String, Long>, String> wordKeyedStream = wordStream.keyBy(new KeySelector<Tuple2<String, Long>, String>() {
    @Override
    public String getKey(Tuple2<String, Long> tuple2) throws Exception {
        return tuple2.f0;
    }
});
- Reduce
Reduce accumulates values. It takes a KeyedStream and aggregates per key, like the aggregation step after a group by.
SingleOutputStreamOperator<Tuple2<String, Long>> reduceStream = wordKeyedStream.reduce(new ReduceFunction<Tuple2<String, Long>>() {
    @Override
    public Tuple2<String, Long> reduce(Tuple2<String, Long> value1, Tuple2<String, Long> value2) throws Exception {
        return new Tuple2<>(value1.f0, value1.f1 + value2.f1);
    }
});
- Aggregations
Aggregations are the simple aggregate operations applied after grouping, such as sum, count, min, and max.
KeyedStream<Word, String> wordSumKeyedStream = wordStream.keyBy(new KeySelector<Word, String>() {
    @Override
    public String getKey(Word value) throws Exception {
        return "length comparison";
    }
});
// Find the shortest and the longest words
SingleOutputStreamOperator<Word> minByResult = wordSumKeyedStream.minBy("len");
SingleOutputStreamOperator<Word> maxByResult = wordSumKeyedStream.maxBy("len");
minByResult.print("minByResult:");
maxByResult.print("maxByResult:");
env.execute();
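sum follows the same pattern; a minimal sketch against the Tuple2<String, Long> WordCount stream built in the keyBy example above (position 1 is the Long count field):
// Sum the per-word counts; position 1 refers to the count in Tuple2<String, Long>
SingleOutputStreamOperator<Tuple2<String, Long>> sumStream = wordKeyedStream.sum(1);
sumStream.print("sum:");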
- Partitioning
The runtime architecture section earlier described how data is sent from upstream to downstream: either one-to-one or redistributed.
Redistribution itself comes in several forms: user-defined partitioning, random, round-robin rebalancing, group-local round-robin, and sending one copy to every downstream partition.
- RebalancePartitioner
Sends data downstream in rebalance fashion.
@Override
public int selectChannel(SerializationDelegate<StreamRecord<T>> record) {
    nextChannelToSendTo = (nextChannelToSendTo + 1) % numberOfChannels;
    return nextChannelToSendTo;
}
This is how the target partition is computed: it is not random, but a round-robin rotation.
API usage:
source.rebalance().print("rebalance").setParallelism(6);
- RescalePartitioner
This is a restricted version of rebalance and behaves much like it, but rescale groups the upstream and downstream partitions, whereas rebalance round-robins blindly: no matter how many upstream partitions there are, with 4 downstream partitions each record goes out in the fixed order 1, 2, 3, 4, 1, 2, 3, 4, ... Rescale is not that blunt; it pairs upstream and downstream partitions into groups. If the upstream has 2 partitions and the downstream has 4, upstream partition 0 round-robins between downstream 1 and 2, and upstream partition 1 round-robins between downstream 3 and 4. If the upstream has 4 partitions and the downstream has 2, upstream 1 and 2 send to downstream 0, and upstream 3 and 4 send to downstream 1.
@Override
public int selectChannel(SerializationDelegate<StreamRecord<T>> record) {
    if (++nextChannelToSendTo >= numberOfChannels) {
        nextChannelToSendTo = 0;
    }
    return nextChannelToSendTo;
}
API usage:
.rescale().print("rescale").setParallelism(4);
- GlobalPartitioner
Simple and blunt: everything goes straight to partition 0. No matter what arrives, it is sent to the downstream partition with ID = 0.
@Override
public int selectChannel(SerializationDelegate<StreamRecord<T>> record) {
    return 0;
}
API usage:
source.global().print("global").setParallelism(4);
- KeyGroupStreamPartitioner
Decides which downstream partition to send to based on the hash of the key.
@Override
public int selectChannel(SerializationDelegate<StreamRecord<T>> record) {
    K key;
    try {
        key = keySelector.getKey(record.getInstance().getValue());
    } catch (Exception e) {
        throw new RuntimeException("Could not extract key from " + record.getInstance().getValue(), e);
    }
    return KeyGroupRangeAssignment.assignKeyToParallelOperator(key, maxParallelism, numberOfChannels);
}

// Method in KeyGroupRangeAssignment
public static int assignKeyToParallelOperator(Object key, int maxParallelism, int parallelism) {
    return computeOperatorIndexForKeyGroup(maxParallelism, parallelism, assignToKeyGroup(key, maxParallelism));
}

// Method in KeyGroupRangeAssignment
public static int assignToKeyGroup(Object key, int maxParallelism) {
    return computeKeyGroupForKeyHash(key.hashCode(), maxParallelism);
}

// Method in KeyGroupRangeAssignment
public static int computeKeyGroupForKeyHash(int keyHash, int maxParallelism) {
    return MathUtils.murmurHash(keyHash) % maxParallelism;
}
API usage:
.keyBy(new KeySelector<String, String>() {
    @Override
    public String getKey(String value) throws Exception {
        return value;
    }
});
- ForwardPartitioner
A partitioner that forwards elements only to the locally running downstream operation, outputting each record to the downstream operator instance in the same task. ForwardPartitioner requires the upstream and downstream operators to have the same parallelism, so that the upstream and downstream operators belong to the same subtask.
Note that the upstream and downstream parallelism must be equal, otherwise the job fails at runtime with: "Forward partitioning does not allow change of parallelism. Upstream operation: Map-15 parallelism: 2, downstream operation: Sink: Print to Std. Out-17 parallelism: 4 You must use another partitioning strategy, such as broadcast, rebalance, shuffle or global." In other words, forward partitioning requires matching parallelism; otherwise you must use one of the other partitioning strategies.
@Override
public int selectChannel(SerializationDelegate<StreamRecord<T>> record) {
    return 0;
}
API usage:
.forward().print("forward");
- ShufflePartitioner
Not round-robin either: it simply picks a random partition among the downstream channels.
@Override
public int selectChannel(SerializationDelegate<StreamRecord<T>> record) {
    return random.nextInt(numberOfChannels);
}
API usage:
source.shuffle().print("shuffle").setParallelism(4);
- CustomPartitionerWrapper
User-defined partitioning: the user specifies the partition number each record is sent to.
public CustomPartitionerWrapper(Partitioner<K> partitioner, KeySelector<T, K> keySelector) {
    this.partitioner = partitioner;
    this.keySelector = keySelector;
}
API usage:
source.partitionCustom(new Partitioner<String>() {
    @Override
    public int partition(String key, int numPartitions) {
        int mid = numPartitions / 2;
        return key.length() > 5 ? RandomUtils.nextInt(0, mid) : RandomUtils.nextInt(mid, numPartitions);
    }
}, new KeySelector<String, String>() {
    @Override
    public String getKey(String value) throws Exception {
        return value;
    }
}).map(Object::toString).print("custom").setParallelism(4);
- BroadcastPartitioner
Broadcasts to every downstream partition; each one gets a copy.
@Override
public int selectChannel(SerializationDelegate<StreamRecord<T>> record) {
    throw new UnsupportedOperationException("Broadcast partitioner does not support select channels.");
}

@Override
public boolean isBroadcast() {
    return true;
}
API usage:
source.broadcast().print("broadcast").setParallelism(4);
- Slot sharing groups
Disabling operator chaining
An operator chain is what the runtime architecture section introduced as chaining: operators with the same parallelism can be placed into the same slot and executed together, which avoids moving data between multiple tasks and improves performance.
Chaining can be disabled globally with env.disableOperatorChaining(), or disabled from a particular operator onwards for the operators after it, as sketched below.
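A minimal sketch of both options (the map stage is illustrative):
// Global: no operators anywhere in this job will be chained
env.disableOperatorChaining();

// Per operator: keep this map out of any chain with its neighbors
SingleOutputStreamOperator<String> mapped = source
        .map(new MapFunction<String, String>() {
            @Override
            public String map(String value) {
                return value;
            }
        })
        .disableChaining();
mapped.print();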
Source -> map -> sink
- Parallelism fixed at 1, default chaining enabled
- Parallelism fixed at 1, chaining disabled globally
- Parallelism fixed at 1, chaining disabled on the Source
- Map parallelism increased, chaining not disabled at any stage
As this shows, parallelism is not inherited from the upstream operator; anything not set explicitly falls back to the default. The precedence, from highest to lowest, is: parallelism set on the operator > parallelism set on the env > parallelism passed as a job submission argument > parallelism in flink-conf.yaml.
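A minimal sketch of that precedence (the values are illustrative):
// flink-conf.yaml:  parallelism.default: 1     (lowest precedence)
// submission:       bin/flink run -p 2 ...     (overrides the config file)
env.setParallelism(4);                          // env level: overrides the -p argument
source.map(new MapFunction<String, String>() {
    @Override
    public String map(String value) {
        return value;
    }
}).setParallelism(8)                            // operator level: wins over everything above
        .print();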
Setting a slot sharing group
Sets the slot sharing group of an operation. Flink places operations with the same slot sharing group into the same slot, and keeps operations without a slot sharing group in other slots. This can be used to isolate slots. The slot sharing group is inherited from the input operations if all inputs are in the same group. The default slot sharing group is named "default"; an operation can be placed into it explicitly by calling slotSharingGroup("default").
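A minimal sketch (the group name "io" is illustrative):
// map is placed into sharing group "io"; downstream operations inherit that group
// unless they set one explicitly, as print() does here with "default"
env.socketTextStream("hadoop01", 9999)
        .map(new MapFunction<String, String>() {
            @Override
            public String map(String value) {
                return value;
            }
        })
        .slotSharingGroup("io")
        .print()
        .slotSharingGroup("default");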