Overview
Flink is a stream-processing framework for stateful computations over data streams, and is commonly regarded as the third generation of big data analytics solutions.
- First generation - Hadoop MapReduce (static batch) and Storm stream processing (2014.9); two separate compute engines, hard to use.
- Second generation - Spark RDD static batch processing (2014.2) plus DStream | Structured Streaming for streaming; one unified engine, relatively easy to use.
- Third generation - Flink DataStream (2014.12) for stream processing and Flink DataSet for batch; one unified engine with a moderate learning curve.
As you can see, Spark and Flink were born at almost the same time. Flink nevertheless grew more slowly, because in the early days the understanding of big data analytics was not deep enough and most business scenarios were confined to batch processing; it was not until around 2016 that people gradually realized the importance of stream processing.
Typical stream-processing scenarios: system monitoring, public-opinion monitoring, traffic prediction, the state power grid, disease prediction, banking/financial risk control, and so on.
A more detailed analysis: https://blog.csdn.net/weixin_38231448/article/details/100062961
Concepts
Task and Operator Chain
Flink is a distributed stream-processing engine. The engine splits a computation job into several Tasks (comparable to Stages in Spark). Each Task has its own parallelism, and each parallel instance is backed by a thread; because a Task executes in parallel, it corresponds to a set of threads underneath, which Flink calls the subtasks of that Task. Unlike Spark, which derives Stage boundaries from RDD dependencies, Flink splits Tasks via the concept of an Operator Chain: when Flink builds the job graph, it tries to chain multiple operators into a single Task in order to reduce the overhead of thread-to-thread data transfer. The common connection patterns Flink considers when chaining are forward and hash | rebalance.
- Task - comparable to a Stage in Spark; each Task consists of several SubTasks.
- SubTask - corresponds to one thread; it is a sub-task of a Task, comparable to one partition of a Spark stage.
- OperatorChain - the mechanism that merges multiple operators into one Task; the merging rules are similar in spirit to Spark RDD's narrow/wide dependencies.
JobManagers, TaskManagers, Clients
- JobManagers - (also called masters) coordinate the distributed execution: they are responsible for task scheduling, checkpoint coordination, failure recovery, etc., roughly equivalent to the Master + Driver roles in Spark. A cluster has at least one active JobManager; in an HA setup the others are in standby.
(also called masters) coordinate the distributed execution. They schedule tasks, coordinate checkpoints, coordinate recovery on failures, etc. There is always at least one JobManager. A high-availability setup will have multiple JobManagers, one of which is always the leader, and the others are standby.
- TaskManagers - (also called workers) are the compute nodes that actually execute the Tasks, and report their status and workload back to the JobManager. A cluster usually has several TaskManagers.
(also called workers) execute the tasks (or more specifically, the subtasks) of a dataflow, and buffer and exchange the data streams. There must always be at least one TaskManager.
- Client - unlike in Spark, the Flink client is not part of the running cluster. It only prepares and submits the dataflow graph of a job to the JobManager and may exit right after submission; it plays no role in scheduling during job execution.
The client is not part of the runtime and program execution, but is used to prepare and send a dataflow to the JobManager. After that, the client can disconnect, or stay connected to receive progress reports.
Reference: https://ci.apache.org/projects/flink/flink-docs-release-1.10/concepts/runtime.html
Resource & Task Slots
Each worker (TaskManager) is a JVM process that can execute one or more subtasks in separate threads. To control how many tasks a worker accepts, a worker has task slots (at least one). Each task slot represents a fixed subset of the TaskManager's resources. For example, a TaskManager with 3 task slots dedicates 1/3 of its managed memory to each slot. Slicing the resources this way isolates the resources used by a job: once slots have been allocated to a job, they cannot be taken over by tasks from other jobs. If a TaskManager has multiple task slots, more subtasks share the same JVM; tasks in the same JVM share TCP connections (via multiplexing) and heartbeat messages.
By default, Flink allows subtasks to share task slots even if they are subtasks of different tasks, as long as they belong to the same job, so one slot may hold an entire pipeline of the job. Allowing this slot sharing has two main benefits:
- A Flink cluster only needs to know the total number of task slots used by a job; there is no need to calculate how many tasks the program contains in total.
- Better resource utilization. With slot sharing the resources can be used much more effectively: without it, the non-intensive source/map() subtasks would block as many resources as the resource-intensive window subtasks. With slot sharing, increasing the base parallelism in the example from 2 to 6 makes full use of the slot resources while ensuring that the heavy subtasks are fairly distributed among the TaskManagers.
By default, the SubTasks/threads (partitions) of different Tasks (Stages) of the same job may share a slot, so a program only needs to declare its maximum parallelism at startup; that maximum parallelism equals the number of slots the system must allocate to the job. Because Flink isolates slots between different jobs, once all slots are allocated, subsequently submitted jobs that cannot obtain a slot simply stay in a pending state. The benefit of this design is that later submissions do not affect jobs that are already running.
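To make the slot arithmetic above concrete, here is a minimal sketch (not part of the original notes; the parallelism values are made up) showing that, with default slot sharing, a job needs as many slots as the largest parallelism used by any of its operators:
import org.apache.flink.streaming.api.scala._
object SlotSharingSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(2) // default parallelism for most operators
    env.socketTextStream("CentOS", 9999) // the socket source always runs with parallelism 1
      .flatMap(_.split("\\s+"))
      .map((_, 1))
      .keyBy(0)
      .sum(1).setParallelism(6) // the heaviest operator runs with 6 parallel subtasks
      .print()
    // with default slot sharing the job needs max(1, 2, 6) = 6 task slots
    env.execute("slot sharing sketch")
  }
}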
Environment Setup
Prerequisites
- JDK 1.8+, with JAVA_HOME configured
- Hadoop installed and running properly (passwordless SSH, HADOOP_HOME configured)
Flink Installation (Standalone)
- Upload and extract the archive
[root@CentOS ~]# tar -zxf flink-1.10.0-bin-scala_2.11.tgz -C /usr/
[root@CentOS flink-1.10.0]# tree -L 1 ./
./
├── bin # launch/CLI scripts
├── conf # configuration files
├── examples # example jars
├── lib # dependency jars
├── LICENSE
├── licenses
├── log # runtime logs
├── NOTICE
├── opt # optional third-party plugin jars
├── plugins
└── README.txt
8 directories, 3 files
- Configure flink-conf.yaml
[root@CentOS flink-1.10.0]# vi conf/flink-conf.yaml
#==============================================================================
# Common
#==============================================================================
jobmanager.rpc.address: CentOS
taskmanager.numberOfTaskSlots: 4
parallelism.default: 3
#==============================================================================
# HistoryServer
#==============================================================================
jobmanager.archive.fs.dir: hdfs:///completed-jobs/
historyserver.web.address: 0.0.0.0
historyserver.web.port: 8082
historyserver.archive.fs.dir: hdfs:///completed-jobs/
historyserver.archive.fs.refresh-interval: 10000
The HADOOP_CLASSPATH environment variable must be configured.
- Configure slaves
[root@CentOS flink-1.10.0]# vi conf/slaves
CentOS
- Start Flink
[root@CentOS flink-1.10.0]# ./bin/historyserver.sh start historyserver
Starting historyserver daemon on host CentOS.
[root@CentOS flink-1.10.0]# ./bin/start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host CentOS.
Starting taskexecutor daemon on host CentOS.
[root@CentOS flink-1.10.0]# jps
58544 HistoryServer
52867 DataNode
52629 NameNode
59045 StandaloneSessionClusterEntrypoint
53257 SecondaryNameNode
59643 TaskManagerRunner
59676 Jps
- Verify the startup
Users can open Flink's web UI at http://CentOS:8081 (job submission) and the history server page at port 8082.
Quick Start
- Add the dependencies
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.9.2</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-scala_2.11</artifactId>
<version>1.10.0</version>
</dependency>
- Client program
import org.apache.flink.streaming.api.scala._
object FlinkWordCountQiuckStart {
def main(args: Array[String]): Unit = {
//1. create the stream execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2. create the DataStream
val text = env.socketTextStream("CentOS", 9999)
//3. apply the DataStream transformation operators
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4. print the result to the console
counts.print()
//5. execute the streaming job
env.execute("Window Stream WordCount")
}
}
- Add the Maven build plugins
<build>
<plugins>
<!-- Scala compiler plugin -->
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>4.0.1</version>
<executions>
<execution>
<id>scala-compile-first</id>
<phase>process-resources</phase>
<goals>
<goal>add-source</goal>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
<!-- fat-jar (shade) plugin -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.4.3</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
</configuration>
</execution>
</executions>
</plugin>
<!-- Java compiler plugin -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.2</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
<encoding>UTF-8</encoding>
</configuration>
<executions>
<execution>
<phase>compile</phase>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
- Package with mvn package
- Submit the job through the web UI
Program Deployment
Local Execution
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.createLocalEnvironment(3)
//2.创建DataStream - 细化
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Window Stream WordCount")
Remote Deployment
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Window Stream WordCount")
StreamExecutionEnvironment.getExecutionEnvironment automatically detects the runtime environment. When the program runs inside the IDE (IDEA), it switches to local mode and defaults the parallelism to the maximum number of available threads, equivalent to Spark's local[*]. In a production environment, the user needs to specify the parallelism with --parallelism when submitting the job.
Deployment options:
- Web UI submission (omitted)
- Submission via the CLI script
[root@CentOS ~]# cd /usr/flink-1.10.0/
[root@CentOS flink-1.10.0]# ./bin/flink run
--class com.baizhi.quickstart.FlinkWordCountQiuckStart
--detached # submit in the background (detached mode)
--parallelism 4 # default parallelism for the program
--jobmanager CentOS:8081 # target JobManager
/root/flink-datastream-1.0-SNAPSHOT.jar
Job has been submitted with JobID f2019219e33261de88a1678fdc78c696
List existing jobs
[root@CentOS flink-1.10.0]# ./bin/flink list --running --jobmanager CentOS:8081
Waiting for response...
------------------ Running/Restarting Jobs -------------------
01.03.2020 05:38:16 : f2019219e33261de88a1678fdc78c696 : Window Stream WordCount (RUNNING)
--------------------------------------------------------------
No scheduled jobs.
[root@CentOS flink-1.10.0]# ./bin/flink list --all --jobmanager CentOS:8081
Waiting for response...
------------------ Running/Restarting Jobs -------------------
01.03.2020 05:44:29 : ddfc2ddfb6dc05910a887d61a0c01392 : Window Stream WordCount (RUNNING)
--------------------------------------------------------------
No scheduled jobs.
---------------------- Terminated Jobs -----------------------
01.03.2020 05:36:28 : f216d38bfef7745b36e3151855a18ebd : Window Stream WordCount (CANCELED)
01.03.2020 05:38:16 : f2019219e33261de88a1678fdc78c696 : Window Stream WordCount (CANCELED)
--------------------------------------------------------------
Cancel a specific job
[root@CentOS flink-1.10.0]# ./bin/flink cancel --jobmanager CentOS:8081 f2019219e33261de88a1678fdc78c696
Cancelling job f2019219e33261de88a1678fdc78c696.
Cancelled job f2019219e33261de88a1678fdc78c696.
View a program's execution plan
[root@CentOS flink-1.10.0]# ./bin/flink info --class com.baizhi.quickstart.FlinkWordCountQiuckStart --parallelism 4 /root/flink-datastream-1.0-SNAPSHOT.jar
----------------------- Execution Plan -----------------------
{"nodes":[{"id":1,"type":"Source: Socket Stream","pact":"Data Source","contents":"Source: Socket Stream","parallelism":1},{"id":2,"type":"Flat Map","pact":"Operator","contents":"Flat Map","parallelism":4,"predecessors":[{"id":1,"ship_strategy":"REBALANCE","side":"second"}]},{"id":3,"type":"Map","pact":"Operator","contents":"Map","parallelism":4,"predecessors":[{"id":2,"ship_strategy":"FORWARD","side":"second"}]},{"id":5,"type":"aggregation","pact":"Operator","contents":"aggregation","parallelism":4,"predecessors":[{"id":3,"ship_strategy":"HASH","side":"second"}]},{"id":6,"type":"Sink: Print to Std. Out","pact":"Data Sink","contents":"Sink: Print to Std. Out","parallelism":4,"predecessors":[{"id":5,"ship_strategy":"FORWARD","side":"second"}]}]}
--------------------------------------------------------------
No description provided.
Users can open https://flink.apache.org/visualizer/, paste the JSON there, and view a diagram of Flink's execution plan.
Cross-platform Submission
object FlinkWordCountQiuckStartCorssPlatform {
def main(args: Array[String]): Unit = {
//1.创建流计算执行环境
var jars="/Users/admin/IdeaProjects/20200203/flink-datastream/target/flink-datastream-1.0-SNAPSHOT.jar"
val env = StreamExecutionEnvironment.createRemoteEnvironment("CentOS",8081,jars)
//设置默认并行度
env.setParallelism(4)
//2.创建DataStream - 细化
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Window Stream WordCount")
}
}
Before running, repackage the program with mvn package; then simply run the main method.
Streaming (DataStream API)
DataSource
A data source is where the program reads its input from. Users attach a SourceFunction to the program via env.addSource(SourceFunction). Flink ships with many well-known SourceFunction implementations, but users can also write their own by implementing the SourceFunction interface (non-parallel) or the ParallelSourceFunction interface (parallel); if lifecycle/state management is needed, extend RichParallelSourceFunction.
File-based
- readTextFile(path) - Reads (once) text files, i.e. files that respect the TextInputFormat specification, line-by-line and returns them as Strings.
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
val text:DataStream[String] = env.readTextFile("hdfs://CentOS:9000/demo/words")
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Window Stream WordCount")
- readFile(fileInputFormat, path) - Reads (once) files as dictated by the specified file input format.
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
var inputFormat:FileInputFormat[String]=new TextInputFormat(null)
val text:DataStream[String] = env.readFile(inputFormat,"hdfs://CentOS:9000/demo/words")
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Window Stream WordCount")
- readFile(fileInputFormat, path, watchType, interval, pathFilter, typeInfo) - This is the method called internally by the two previous ones. It reads files in the path based on the given fileInputFormat. Depending on the provided watchType, this source may periodically monitor (every interval ms) the path for new data (FileProcessingMode.PROCESS_CONTINUOUSLY), or process once the data currently in the path and exit (FileProcessingMode.PROCESS_ONCE). Using the pathFilter, the user can further exclude files from being processed.
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
var inputFormat:FileInputFormat[String]=new TextInputFormat(null)
val text:DataStream[String] = env.readFile(inputFormat,
"hdfs://CentOS:9000/demo/words",FileProcessingMode.PROCESS_CONTINUOUSLY,1000)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Window Stream WordCount")
This source monitors the files under the given directory; if a file changes, it is re-read in full, which may cause records to be computed more than once. In general, do not modify a file in place; upload a new file instead.
Socket Based
- socketTextStream - Reads from a socket. Elements can be separated by a delimiter.
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2. create the DataStream - host, port, delimiter, max retries
val text = env.socketTextStream("CentOS", 9999,'\n',3)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Window Stream WordCount")
Collection-based
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
val text = env.fromCollection(List("this is a demo","hello word"))
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Window Stream WordCount")
UserDefinedSource
- SourceFunction (non-parallel)
import org.apache.flink.streaming.api.functions.source.SourceFunction
import scala.util.Random
class UserDefinedNonParallelSourceFunction extends SourceFunction[String]{
@volatile //防止线程拷贝变量
var isRunning:Boolean=true
val lines:Array[String]=Array("this is a demo","hello world","ni hao ma")
//在该方法中启动线程,通过sourceContext的collect方法发送数据
override def run(sourceContext: SourceFunction.SourceContext[String]): Unit = {
while(isRunning){
Thread.sleep(100)
//输送数据给下游
sourceContext.collect(lines(new Random().nextInt(lines.size)))
}
}
//释放资源
override def cancel(): Unit = {
isRunning=false
}
}
- ParallelSourceFunction (parallel)
import org.apache.flink.streaming.api.functions.source.{ParallelSourceFunction, SourceFunction}
import scala.util.Random
class UserDefinedParallelSourceFunction extends ParallelSourceFunction[String]{
@volatile //防止线程拷贝变量
var isRunning:Boolean=true
val lines:Array[String]=Array("this is a demo","hello world","ni hao ma")
//在该方法中启动线程,通过sourceContext的collect方法发送数据
override def run(sourceContext: SourceFunction.SourceContext[String]): Unit = {
while(isRunning){
Thread.sleep(100)
//输送数据给下游
sourceContext.collect(lines(new Random().nextInt(lines.size)))
}
}
//释放资源
override def cancel(): Unit = {
isRunning=false
}
}
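The introduction above also mentions RichParallelSourceFunction for sources that need lifecycle hooks (open/close) and access to the runtime context. The notes do not include an example for it, so the following is only a sketch written in the same style as the two sources above:
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.source.{RichParallelSourceFunction, SourceFunction}
import scala.util.Random
class UserDefinedRichParallelSourceFunction extends RichParallelSourceFunction[String] {
  @volatile var isRunning: Boolean = true
  var lines: Array[String] = _
  // open/close lifecycle hooks are what the "Rich" variant adds on top of ParallelSourceFunction
  override def open(parameters: Configuration): Unit = {
    lines = Array("this is a demo", "hello world", "ni hao ma")
  }
  override def run(ctx: SourceFunction.SourceContext[String]): Unit = {
    while (isRunning) {
      Thread.sleep(100)
      // send data downstream
      ctx.collect(lines(new Random().nextInt(lines.length)))
    }
  }
  // stop the emitting loop
  override def cancel(): Unit = {
    isRunning = false
  }
  override def close(): Unit = {
    lines = null
  }
}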
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(4)
//2.创建DataStream - 细化
val text = env.addSource[String](new UserDefinedParallelSourceFunction) // or new UserDefinedNonParallelSourceFunction
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.print()
println(env.getExecutionPlan) //print the execution plan (JSON)
//5.执行流计算任务
env.execute("Window Stream WordCount")
Kafka Integration
- Add the Maven dependency
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka_2.11</artifactId>
<version>1.10.0</version>
</dependency>
- SimpleStringSchema
SimpleStringSchema only deserializes the value of the Kafka record.
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
val props = new Properties()
props.setProperty("bootstrap.servers", "CentOS:9092")
props.setProperty("group.id", "g1")
val text = env.addSource(new FlinkKafkaConsumer[String]("topic01",new SimpleStringSchema(),props))
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Window Stream WordCount")
- KafkaDeserializationSchema
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.flink.api.scala._
class UserDefinedKafkaDeserializationSchema extends KafkaDeserializationSchema[(String,String,Int,Long)]{
override def isEndOfStream(t: (String, String, Int, Long)): Boolean = false
override def deserialize(consumerRecord: ConsumerRecord[Array[Byte], Array[Byte]]): (String, String, Int, Long) = {
if(consumerRecord.key()!=null){
(new String(consumerRecord.key()),new String(consumerRecord.value()),consumerRecord.partition(),consumerRecord.offset())
}else{
(null,new String(consumerRecord.value()),consumerRecord.partition(),consumerRecord.offset())
}
}
override def getProducedType: TypeInformation[(String, String, Int, Long)] = {
createTypeInformation[(String, String, Int, Long)]
}
}
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
val props = new Properties()
props.setProperty("bootstrap.servers", "CentOS:9092")
props.setProperty("group.id", "g1")
val text = env.addSource(new FlinkKafkaConsumer[(String,String,Int,Long)]("topic01",new UserDefinedKafkaDeserializationSchema(),props))
//3.执行DataStream的转换算子
val counts = text.flatMap(t=> t._2.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Window Stream WordCount")
- JSONKeyValueDeserializationSchema
Requires both the key and the value stored in the Kafka topic to be JSON. When constructing it you can also specify whether to expose the metadata (topic, partition, offset, etc.).
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
val props = new Properties()
props.setProperty("bootstrap.servers", "CentOS:9092")
props.setProperty("group.id", "g1")
//{"id":1,"name":"zhangsan"}
val text = env.addSource(new FlinkKafkaConsumer[ObjectNode]("topic01",new JSONKeyValueDeserializationSchema(true),props))
//t:{"value":{"id":1,"name":"zhangsan"},"metadata":{"offset":0,"topic":"topic01","partition":13}}
text.map(t=> (t.get("value").get("id").asInt(),t.get("value").get("name").asText()))
.print()
//5.执行流计算任务
env.execute("Window Stream WordCount")
Reference: https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/connectors/kafka.html
Data Sinks
A data sink consumes DataStreams and forwards them to files, sockets, external systems, or prints them. Flink comes with a variety of built-in output formats that are encapsulated behind operations on DataStreams.
File-based
- writeAsText() / TextOutputFormat - Writes elements line-wise as Strings. The Strings are obtained by calling the toString() method of each element.
- writeAsCsv(...) / CsvOutputFormat - Writes tuples as comma-separated value files. Row and field delimiters are configurable. The value for each field comes from the toString() method of the objects.
- writeUsingOutputFormat() / FileOutputFormat - Method and base class for custom file outputs. Supports custom object-to-bytes conversion.
Note that the write*() methods on DataStream are mainly intended for debugging purposes.
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
//4.将计算的结果在控制打印
counts.writeUsingOutputFormat(new TextOutputFormat[(String, Int)](new Path("file:///Users/admin/Desktop/flink-results")))
//5.执行流计算任务
env.execute("Window Stream WordCount")
Note: when writing to HDFS you need to produce a fair amount of data before anything becomes visible, because the HDFS write buffer is large. The file-system sinks above do not participate in Flink's checkpointing; in production you would normally use flink-connector-filesystem to write to external systems instead.
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-filesystem_2.11</artifactId>
<version>1.10.0</version>
</dependency>
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
val text = env.readTextFile("hdfs://CentOS:9000/words/src")
var bucketingSink=StreamingFileSink.forRowFormat(new Path("hdfs://CentOS:9000/bucket-results"),
new SimpleStringEncoder[(String,Int)]("UTF-8"))
.withBucketAssigner(new DateTimeBucketAssigner[(String, Int)]("yyyy-MM-dd"))//动态产生写入路径
.build()
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
counts.addSink(bucketingSink)
//5.执行流计算任务
env.execute("Window Stream WordCount")
Older-version approach
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(4)
//2.创建DataStream - 细化
val text = env.readTextFile("hdfs://CentOS:9000/demo/words")
var bucketingSink=new BucketingSink[(String,Int)]("hdfs://CentOS:9000/bucket-results")
bucketingSink.setBucketer(new DateTimeBucketer[(String,Int)]("yyyy-MM-dd"))
bucketingSink.setBatchSize(1024)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
counts.addSink(bucketingSink)
//5.执行流计算任务
env.execute("Window Stream WordCount")
print() / printToErr()
Prints the toString() value of each element on the standard out / standard error stream. Optionally, a prefix (msg) can be provided which is prepended to the output. This can help to distinguish between different calls to print. If the parallelism is greater than 1, the output will also be prepended with the identifier of the task which produced the output.
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(4)
//2.创建DataStream - 细化
val text = env.readTextFile("hdfs://CentOS:9000/demo/words")
var bucketingSink=new BucketingSink[(String,Int)]("hdfs://CentOS:9000/bucket-results")
bucketingSink.setBucketer(new DateTimeBucketer[(String,Int)]("yyyy-MM-dd"))
bucketingSink.setBatchSize(1024)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
counts.printToErr("测试").setParallelism(2)
//5.执行流计算任务
env.execute("Window Stream WordCount")
UserDefinedSinkFunction
class UserDefineSinkFunction extends RichSinkFunction[(String,Int)]{
var jedisPool:JedisPool=null
override def open(parameters: Configuration): Unit = {
val jobParameters = getRuntimeContext.getExecutionConfig.getGlobalJobParameters.asInstanceOf[Configuration]
val host = jobParameters.getString("host", "localhost")
val port = jobParameters.getInteger("port", 6379)
println(s"..open...${host}:${port}")
// jedisPool= new JedisPool(host,port)
}
override def close(): Unit = {
println("close")
//jedisPool.close()
}
override def invoke(value: (String, Int), context: SinkFunction.Context[_]): Unit = {
println("写出redis.."+value)
// val jedis = jedisPool.getResource
// jedis.set(value._1,value._2+"")
//jedis.close()
}
}
object FlinkWordCountToplogyUserDefineSinkFunction {
def main(args: Array[String]): Unit = {
//1.创建StreamExecutionEnvironment流环境
val fsEnv = StreamExecutionEnvironment.getExecutionEnvironment
//传递全局参数
val conf = new Configuration()
conf.setString("host", "CentOS")
conf.setString("port", "6379")
fsEnv.getConfig.setGlobalJobParameters(conf)
fsEnv.readTextFile("hdfs://CentOS:9000/words/src")
.flatMap(line=>line.split("\\s+")) //3.编写转换算子
.map(word=>(word,1))
.keyBy(0)
.sum(1)
.addSink(new UserDefineSinkFunction)//4.输出结果
//5.执行流
fsEnv.execute("FlinkWordCountToplogyFile1")
}
}
RedisSink
Reference: https://bahir.apache.org/docs/flink/current/flink-streaming-redis/
<dependency>
<groupId>org.apache.bahir</groupId>
<artifactId>flink-connector-redis_2.11</artifactId>
<version>1.0</version>
</dependency>
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1)
//2.创建DataStream - 细化
val text = env.readTextFile("hdfs://CentOS:9000/demo/words")
var flinkJeidsConf = new FlinkJedisPoolConfig.Builder()
.setHost("CentOS")
.setPort(6379)
.build()
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
counts.addSink(new RedisSink(flinkJeidsConf,new UserDefinedRedisMapper()))
//5.执行流计算任务
env.execute("Window Stream WordCount")
import org.apache.flink.streaming.connectors.redis.common.mapper.{RedisCommand, RedisCommandDescription, RedisMapper}
class UserDefinedRedisMapper extends RedisMapper[(String,Int)]{
override def getCommandDescription: RedisCommandDescription = {
new RedisCommandDescription(RedisCommand.HSET,"wordcounts")
}
override def getKeyFromData(data: (String, Int)): String = data._1
override def getValueFromData(data: (String, Int)): String = data._2+""
}
Kafka Integration
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka_2.11</artifactId>
<version>1.10.0</version>
</dependency>
class UserDefinedKeyedSerializationSchema extends KeyedSerializationSchema[(String,Int)]{
override def serializeKey(element: (String, Int)): Array[Byte] = {
element._1.getBytes()
}
override def serializeValue(element: (String, Int)): Array[Byte] = {
element._2.toString.getBytes()
}
//optionally override the target topic; returning null writes the record to the default topic
override def getTargetTopic(element: (String, Int)): String = {
null
}
}
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1)
val props = new Properties()
props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "centos:9092")
props.setProperty(ProducerConfig.BATCH_SIZE_CONFIG,"100")
props.setProperty(ProducerConfig.LINGER_MS_CONFIG,"500")
props.setProperty(ProducerConfig.ACKS_CONFIG,"all")
props.setProperty(ProducerConfig.RETRIES_CONFIG,"2")
//2.创建DataStream - 细化
val text = env.readTextFile("hdfs://CentOS:9000/demo/words")
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.sum(1)
counts.addSink(new FlinkKafkaProducer[(String, Int)]("topic01",new UserDefinedKeyedSerializationSchema,props))
//5.执行流计算任务
env.execute("Window Stream WordCount")
Operators
Reference: https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/stream/operators/
DataStream Transformations
DataStream → DataStream
Map
Takes one element and produces one element. A map function that doubles the values of the input stream:
dataStream.map { x => x * 2 }
FlatMap
Takes one element and produces zero, one, or more elements. A flatmap function that splits sentences to words:
dataStream.flatMap { str => str.split(" ") }
Filter
Evaluates a boolean function for each element and retains those for which the function returns true. A filter that filters out zero values:
dataStream.filter { _ != 0 }
DataStream* → DataStream
Union
Union of two or more data streams creating a new stream containing all the elements from all the streams. Note: If you union a data stream with itself you will get each element twice in the resulting stream.
dataStream.union(otherStream1, otherStream2, ...)
DataStream,DataStream → ConnectedStreams
connect
“Connects” two data streams retaining their types, allowing for shared state between the two streams.
someStream : DataStream[Int] = ...
otherStream : DataStream[String] = ...
val connectedStreams = someStream.connect(otherStream)
ConnectedStreams → DataStream
CoMap, CoFlatMap
Similar to map and flatMap on a connected data stream
connectedStreams.map(
(_ : Int) => true,
(_ : String) => false
)
connectedStreams.flatMap(
(_ : Int) => true,
(_ : String) => false
)
Case study
object FlinkWordCountToplogy01 {
def main(args: Array[String]): Unit = {
val fsEnv = StreamExecutionEnvironment.getExecutionEnvironment
val input01 = fsEnv.socketTextStream("CentOS", 8888)
val input02 = fsEnv.socketTextStream("CentOS", 7777)
input01.connect(input02)
.flatMap(_.split("\\s+"),_.split("\\s+"))
.print()
fsEnv.execute("FlinkWordCountToplogy")
}
}
DataStream → SplitStream
Split
Split the stream into two or more streams according to some criterion.
val split = someDataStream.split(
(num: Int) =>
(num % 2) match {
case 0 => List("even")
case 1 => List("odd")
}
)
SplitStream → DataStream
Select
Select one or more streams from a split stream.
val even = split.select("even")
val odd = split.select("odd")
val all = split.select("even","odd")
ProcessFunction
In general, ProcessFunction (with side outputs) is the more common way to split a stream into multiple branches.
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
val errorTag = new OutputTag[String]("error")
val allTag = new OutputTag[String]("all")
val infoStream = text.process(new ProcessFunction[String, String] {
override def processElement(value: String,
ctx: ProcessFunction[String, String]#Context,
out: Collector[String]): Unit = {
if (value.contains("error")) {
ctx.output(errorTag, value) //side output
} else {
out.collect(value) //normal output
}
ctx.output(allTag, value) //side output
}
})
infoStream.getSideOutput(errorTag).printToErr("error")
infoStream.getSideOutput(allTag).printToErr("all")
infoStream.print("normal")
env.execute("Stream WordCount")
DataStream → KeyedStream
KeyBy
Logically partitions a stream into disjoint partitions, each partition containing elements of the same key. Internally, this is implemented with hash partitioning. See keys on how to specify keys. This transformation returns a KeyedStream.
dataStream.keyBy("someKey") // Key by field "someKey"
dataStream.keyBy(0) // Key by the first element of a Tuple
KeyedStream → DataStream
Reduce
A “rolling” reduce on a keyed data stream. Combines the current element with the last reduced value and emits the new value.
A reduce function that creates a stream of partial sums:
keyedStream.reduce(_ + _)
val env = StreamExecutionEnvironment.getExecutionEnvironment
val lines = env.socketTextStream("CentOS", 9999)
lines.flatMap(_.split("\\s+"))
.map((_,1))
.keyBy("_1")
.reduce((v1,v2)=>(v1._1,v1._2+v2._2))
.print()
env.execute("Stream WordCount")
Fold
A “rolling” fold on a keyed data stream with an initial value. Combines the current element with the last folded value and emits the new value.
A fold function that, when applied on the sequence (1,2,3,4,5), emits the sequence “start-1”, “start-1-2”, “start-1-2-3”, …
val result: DataStream[String] =
keyedStream.fold("start")((str, i) => { str + "-" + i })
val env = StreamExecutionEnvironment.getExecutionEnvironment
val lines = env.socketTextStream("CentOS", 9999)
lines.flatMap(_.split("\\s+"))
.map((_,1))
.keyBy("_1")
.fold((null:String,0:Int))((z,v)=>(v._1,v._2+z._2))
.print()
env.execute("Stream WordCount")
Aggregations
Rolling aggregations on a keyed data stream. The difference between min and minBy is that min returns the minimum value, whereas minBy returns the element that has the minimum value in this field (same for max and maxBy).
keyedStream.sum(0)
keyedStream.sum("key")
keyedStream.min(0)
keyedStream.min("key")
keyedStream.max(0)
keyedStream.max("key")
keyedStream.minBy(0)
keyedStream.minBy("key")
keyedStream.maxBy(0)
keyedStream.maxBy("key")
Physical partitioning
If needed, Flink also provides the following functions to explicitly partition a transformed DataStream.
Rebalancing (Round-robin partitioning):
Partitions elements round-robin, creating an equal load per partition. Useful for performance optimization in the presence of data skew.
dataStream.rebalance()
Random partitioning
Partitions elements randomly according to a uniform distribution.
dataStream.shuffle()
Rescaling
Like round-robin partitioning, rescaling is a partitioning strategy that rebalances data in a cyclic fashion. The difference is that with round-robin partitioning the data is shipped globally over the network to all other nodes, whereas rescaling only rebalances data between directly connected upstream and downstream operators; how elements are routed depends on the parallelism of the upstream and downstream operators. For example, if the upstream operator has parallelism 2 and the downstream operator has parallelism 4, one upstream partition routes its data in equal proportion to a fixed pair of downstream partitions, and the other upstream partition does the same with the remaining two.
dataStream.rescale()
Broadcasting
Broadcasts elements to every partition.
dataStream.broadcast
Custom partitioning
Uses a user-defined Partitioner to select the target task for each element.
dataStream.partitionCustom(partitioner, "someKey")
dataStream.partitionCustom(partitioner, 0)
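Below is a brief sketch of the Partitioner side of partitionCustom (the class name and routing rule are illustrative, not from the original notes); the user-defined Partitioner decides, for each key, which downstream subtask receives the element:
import org.apache.flink.api.common.functions.Partitioner
import org.apache.flink.streaming.api.scala._
// route words starting with "a" to partition 0, everything else by hash
class FirstLetterPartitioner extends Partitioner[String] {
  override def partition(key: String, numPartitions: Int): Int = {
    if (key.startsWith("a")) 0 else (key.hashCode & Integer.MAX_VALUE) % numPartitions
  }
}
object CustomPartitioningSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.socketTextStream("CentOS", 9999)
      .flatMap(_.split("\\s+"))
      .map((_, 1))
      .partitionCustom(new FirstLetterPartitioner, 0) // partition by tuple field 0
      .print()
    env.execute("custom partitioning sketch")
  }
}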
Task chaining and resource groups
Chaining two operators means placing them in the same thread, which avoids unnecessary thread overhead and improves performance. By default Flink chains operators whenever possible. Users can call
StreamExecutionEnvironment.disableOperatorChaining()
to disable chaining altogether, but this is not recommended.
startNewChain
someStream.filter(...).map(...).startNewChain().map(...)
The first map operator is isolated from the filter operator: a new chain starts at the first map (the two maps can still be chained together).
disableChaining
someStream.map(...).disableChaining()
No operator will be chained to this map operator.
slotSharingGroup
Sets the slot sharing group of an operation. Flink places operators with the same slot sharing group into the same task slot, while keeping operators without a slot sharing group in other slots. This can be used to isolate slots. The slot sharing group is inherited from upstream operators. The default group is named "default";
consequently, when the user does not partition resources at all, the number of slots a job needs equals the maximum parallelism among its Tasks.
someStream.filter(...).slotSharingGroup("name")
State & Fault Tolerance
Flink is a stream-processing service built on stateful computation. Flink divides all state into two broad categories: keyed state and operator state. Keyed state is state that Flink binds to each individual key; it refers specifically to state used in operations on a KeyedStream. Operator state is state used on non-keyed streams; it is bound to a particular operator. Whether keyed state or operator state, Flink manages the underlying storage in one of two forms: Managed State and Raw State.
Managed State - state whose storage structure (data structures, data types, etc.) is controlled by Flink. Because Flink manages the state itself, it can better optimize memory usage and perform failure recovery for it.
Raw State - state whose content and structure Flink knows nothing about; Flink only sees an array of bytes, and the user has to serialize and deserialize the state. Consequently Flink cannot apply targeted memory optimizations to raw state, nor redistribute it when the parallelism changes, so raw state is almost never used in real-world Flink projects.
All datastream functions can use managed state, but the raw state interfaces can only be used when implementing operators. Using managed state (rather than raw state) is recommended, since with managed state Flink is able to automatically redistribute state when the parallelism is changed, and also do better memory management.
Reference: https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/stream/state/state.html
Managed Keyed State
Flink's managed keyed state interface provides access to state of different data types, all of which are scoped to the key of the current input element; this kind of state can therefore only be used on a KeyedStream. Flink has the following six built-in kinds of state:
Type | Use case | Methods |
---|---|---|
ValueState&lt;T&gt; | Stores a single value. | T value() update(T) clear() |
ListState&lt;T&gt; | Stores a list of values. | add(T) addAll(List&lt;T&gt;) update(List&lt;T&gt;) Iterable&lt;T&gt; get() clear() |
MapState&lt;UK, UV&gt; | Stores a map of key/value pairs. | put(UK, UV) putAll(Map&lt;UK, UV&gt;) get(UK) entries() keys() values() clear() |
ReducingState&lt;T&gt; | Stores a single value that is the aggregate of all added elements, combined with the previous state using the user-provided ReduceFunction. | add(T) T get() clear() |
AggregatingState&lt;IN, OUT&gt; | Stores a single value that aggregates the added elements using the user-provided AggregateFunction; unlike ReducingState, the input and output types may differ. | add(IN) OUT get() clear() |
FoldingState&lt;T, ACC&gt; | Stores a single value that aggregates the added elements using the user-provided FoldFunction; unlike ReducingState, the input and the intermediate (accumulator) types may differ. | add(T) T get() clear() |
It is important to keep in mind that these state objects are only used for interfacing with state. The state is not necessarily stored inside but might reside on disk or somewhere else. The second thing to keep in mind is that the value you get from the state depends on the key of the input element. So the value you get in one invocation of your user function can differ from the value in another invocation if the keys involved are different. To get a state handle, you have to create a StateDescriptor. This holds the name of the state (as we will see later, you can create several states, and they have to have unique names so that you can reference them), the type of the values that the state holds, and possibly a user-specified function, such as a ReduceFunction. Depending on what type of state you want to retrieve, you create either a ValueStateDescriptor, a ListStateDescriptor, a ReducingStateDescriptor, a FoldingStateDescriptor or a MapStateDescriptor. State is accessed using the RuntimeContext, so it is only possible in rich functions. Please see here for information about that, but we will also see an example shortly. The RuntimeContext that is available in a RichFunction has these methods for accessing state:
- ValueState&lt;T&gt; getState(ValueStateDescriptor&lt;T&gt;)
- ReducingState&lt;T&gt; getReducingState(ReducingStateDescriptor&lt;T&gt;)
- ListState&lt;T&gt; getListState(ListStateDescriptor&lt;T&gt;)
- AggregatingState&lt;IN, OUT&gt; getAggregatingState(AggregatingStateDescriptor&lt;IN, ACC, OUT&gt;)
- FoldingState&lt;T, ACC&gt; getFoldingState(FoldingStateDescriptor&lt;T, ACC&gt;)
- MapState&lt;UK, UV&gt; getMapState(MapStateDescriptor&lt;UK, UV&gt;)
ValueState
class WordCountMapFunction extends RichMapFunction[(String,Int),(String,Int)]{
var vs:ValueState[Int]=_
override def open(parameters: Configuration): Unit = {
//1.创建对应状态描述符
val vsd = new ValueStateDescriptor[Int]("wordcount", createTypeInformation[Int])
//2.获取RuntimeContext
var context: RuntimeContext = getRuntimeContext
//3.获取指定类型状态
vs=context.getState(vsd)
}
override def map(value: (String, Int)): (String, Int) = {
//获取历史值
val historyData = vs.value()
//更新状态
vs.update(historyData+value._2)
//返回最新值
(value._1,vs.value())
}
}
object FlinkWordCountValueState {
def main(args: Array[String]): Unit = {
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.map(new WordCountMapFunction)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Stream WordCount")
}
}
ListState
class UserVisitedMapFunction extends RichMapFunction[(String,String),(String,String)]{
var userVisited:ListState[String]=_
override def open(parameters: Configuration): Unit = {
//1.创建对应状态描述符
val lsd = new ListStateDescriptor[String]("userVisited", createTypeInformation[String])
//2.获取RuntimeContext
var context: RuntimeContext = getRuntimeContext
//3.获取指定类型状态
userVisited=context.getListState(lsd)
}
override def map(value: (String, String)): (String, String) = {
//获取历史值
var historyData = userVisited.get().asScala.toList
//更新状态
historyData = historyData.::(value._2).distinct
userVisited.update(historyData.asJava)
//返回最新值
(value._1,historyData.mkString(" | "))
}
}
object FlinkUserVisitedListState {
def main(args: Array[String]): Unit = {
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化 001 zhangsan 电子类 xxxx 001 zhangsan 手机类 xxxx 001 zhangsan 母婴类 xxxx
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.map(line=>line.split("\\s+"))
.map(ts=>(ts(0)+":"+ts(1),ts(2)))
.keyBy(0)
.map(new UserVisitedMapFunction)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Stream WordCount")
}
}
MapState
class UserVisitedMapMapFunction extends RichMapFunction[(String,String),(String,String)]{
var userVisitedMap:MapState[String,Int]=_
override def open(parameters: Configuration): Unit = {
//1.创建对应状态描述符
val msd = new MapStateDescriptor[String,Int]("UserVisitedMap", createTypeInformation[String],createTypeInformation[Int])
//2.获取RuntimeContext
var context: RuntimeContext = getRuntimeContext
//3.获取指定类型状态
userVisitedMap=context.getMapState(msd)
}
override def map(value: (String, String)): (String, String) = {
var count=0
if(userVisitedMap.contains(value._2)){
count=userVisitedMap.get(value._2)
}
userVisitedMap.put(value._2,count+1)
var historyList= userVisitedMap.entries()
.asScala
.map(entry=> entry.getKey+":"+entry.getValue)
.toList
//返回最新值
(value._1,historyList.mkString(" | "))
}
}
object FlinkUserVisitedMapState {
def main(args: Array[String]): Unit = {
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化 001 zhangsan 电子类 xxxx 001 zhangsan 手机类 xxxx 001 zhangsan 母婴类 xxxx
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.map(line=>line.split("\\s+"))
.map(ts=>(ts(0)+":"+ts(1),ts(2)))
.keyBy(0)
.map(new UserVisitedMapMapFunction)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Stream WordCount")
}
}
ReducingState
class WordCountReduceStateMapFunction extends RichMapFunction[(String,Int),(String,Int)]{
var rs:ReducingState[Int]=_
override def open(parameters: Configuration): Unit = {
//1.创建对应状态描述符
val rsd = new ReducingStateDescriptor[Int]("wordcountReducingStateDescriptor",
new ReduceFunction[Int](){
override def reduce(v1: Int, v2: Int): Int = v1+v2
},createTypeInformation[Int])
//2.获取RuntimeContext
var context: RuntimeContext = getRuntimeContext
//3.获取指定类型状态
rs=context.getReducingState(rsd)
}
override def map(value: (String, Int)): (String, Int) = {
rs.add(value._2)
//返回最新值
(value._1,rs.get())
}
}
object FlinkWordCountReduceState {
def main(args: Array[String]): Unit = {
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.map(new WordCountReduceStateMapFunction)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Stream WordCount")
}
}
AggregatingState
class UserOrderAggregatingStateMapFunction extends RichMapFunction[(String,Double),(String,Double)]{
var as:AggregatingState[Double,Double]=_
override def open(parameters: Configuration): Unit = {
//1.创建对应状态描述符
val asd = new AggregatingStateDescriptor[Double,(Int,Double),Double]("userOrderAggregatingStateMapFunction",
new AggregateFunction[Double,(Int,Double),Double](){
override def createAccumulator(): (Int, Double) = (0,0.0)
override def add(value: Double, accumulator: (Int, Double)): (Int, Double) = {
(accumulator._1+1,accumulator._2+value)
}
override def getResult(accumulator: (Int, Double)): Double = {
accumulator._2/accumulator._1
}
override def merge(a: (Int, Double), b: (Int, Double)): (Int, Double) = {
(a._1+b._1,a._2+b._2)
}
},createTypeInformation[(Int,Double)])
//2.获取RuntimeContext
var context: RuntimeContext = getRuntimeContext
//3.获取指定类型状态
as=context.getAggregatingState(asd)
}
override def map(value: (String, Double)): (String, Double) = {
as.add(value._2)
//返回最新值
(value._1,as.get())
}
}
object FlinkUserOrderAggregatingState {
def main(args: Array[String]): Unit = {
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化 001 zhangsan 1000
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.map(line=>line.split("\\s+"))
.map(ts=>(ts(0)+":"+ts(1),ts(2).toDouble))
.keyBy(0)
.map(new UserOrderAggregatingStateMapFunction)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Stream WordCount")
}
}
FoldingState
class UserOrderAvgMapFunction extends RichMapFunction[(String,Double),(String,Double)]{
var rs:ReducingState[Int]=_
var fs:FoldingState[Double,Double]=_
override def open(parameters: Configuration): Unit = {
//1.创建对应状态描述符
val rsd = new ReducingStateDescriptor[Int]("wordcountReducingStateDescriptor",
new ReduceFunction[Int](){
override def reduce(v1: Int, v2: Int): Int = v1+v2
},createTypeInformation[Int])
val fsd=new FoldingStateDescriptor[Double,Double]("foldstate",0,new FoldFunction[Double,Double](){
override def fold(accumulator: Double, value: Double): Double = {
accumulator+value
}
},createTypeInformation[Double])
//2.获取RuntimeContext
var context: RuntimeContext = getRuntimeContext
//3.获取指定类型状态
rs=context.getReducingState(rsd)
fs=context.getFoldingState(fsd)
}
override def map(value: (String, Double)): (String, Double) = {
rs.add(1)
fs.add(value._2)
//返回最新值
(value._1,fs.get()/rs.get())
}
}
object FlinkUserOrderFoldState {
def main(args: Array[String]): Unit = {
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化 001 zhangsan 1000
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.map(line=>line.split("\\s+"))
.map(ts=>(ts(0)+":"+ts(1),ts(2).toDouble))
.keyBy(0)
.map(new UserOrderAvgMapFunction)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Stream WordCount")
}
}
State Time-To-Live (TTL)
Flink can assign a time-to-live (TTL) to any keyed state, so that state entries expire after a configured time. The feature is disabled by default; once enabled, Flink removes expired state on a best-effort basis. TTL works for single values as well as for collection-type state: each element of a MapState or ListState has its own independent expiration time.
Basic usage
//1. create the state descriptor
val xsd = new XxxxStateDescriptor[Int]("wordcount", createTypeInformation[Int])
//configure the TTL behaviour
val stateTtlConfig = StateTtlConfig.newBuilder(Time.seconds(5)) //time-to-live of 5s ①
.setUpdateType(UpdateType.OnCreateAndWrite) //refresh the timestamp on create and write ②
.setStateVisibility(StateVisibility.NeverReturnExpired) //never return expired data ③
.build()
//enable TTL on the descriptor
xsd.enableTimeToLive(stateTtlConfig)
①: the time the state is allowed to live; this parameter is mandatory.
②: when the expiration timestamp is refreshed; the default is OnCreateAndWrite.
- OnCreateAndWrite: the timestamp is refreshed only on creation and write access.
- OnReadAndWrite: the timestamp is refreshed on read access as well as on write access.
③: the visibility of expired state; the default is NeverReturnExpired.
- NeverReturnExpired: expired state is never returned.
- ReturnExpiredIfNotCleanedUp: if Flink has not yet removed the expired state, it is still returned.
Note: once TTL is enabled, every stored state entry needs an extra 8 bytes (a Long) to hold its timestamp; currently only processing time (the clock of the computing node) is supported for expiration; and if a job was first started without TTL and TTL is enabled after a restart (or vice versa), restoring the job from an earlier checkpoint will fail with a state compatibility error.
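As a concrete illustration (a sketch that is not part of the original notes), the TTL configuration above can be attached to the ValueStateDescriptor of the earlier WordCountMapFunction; a word whose count has not been written for 5 seconds is then dropped from state:
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.api.common.state.{StateTtlConfig, ValueState, ValueStateDescriptor}
import org.apache.flink.api.common.state.StateTtlConfig.{StateVisibility, UpdateType}
import org.apache.flink.api.common.time.Time
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.scala._
class TtlWordCountMapFunction extends RichMapFunction[(String, Int), (String, Int)] {
  var vs: ValueState[Int] = _
  override def open(parameters: Configuration): Unit = {
    //1. create the state descriptor
    val vsd = new ValueStateDescriptor[Int]("wordcount", createTypeInformation[Int])
    //2. configure and enable TTL on the descriptor
    val stateTtlConfig = StateTtlConfig.newBuilder(Time.seconds(5)) // live for 5s
      .setUpdateType(UpdateType.OnCreateAndWrite)                   // refresh on create and write
      .setStateVisibility(StateVisibility.NeverReturnExpired)       // never return expired data
      .build()
    vsd.enableTimeToLive(stateTtlConfig)
    //3. obtain the state handle
    vs = getRuntimeContext.getState(vsd)
  }
  override def map(value: (String, Int)): (String, Int) = {
    val historyData = vs.value()
    vs.update(historyData + value._2)
    (value._1, vs.value())
  }
}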
Cleanup Of Expired State
By default, Flink checks whether a state entry has expired only when that state is read, and deletes it at that moment. For a long-running job this means that state which has expired but is never read again stays resident in memory.
This means that by default if expired state is not read, it won’t be removed, possibly leading to ever growing state. This might change in future releases.
(This was the behaviour described before 1.9.x.)
Since Flink 1.10, depending on the configured state backend, expired state can also be collected and removed periodically in the background. The automatic background cleanup can be disabled with the following API:
val stateTtlConfig = StateTtlConfig.newBuilder(Time.seconds(5)) //设置存活时间5s
.setUpdateType(UpdateType.OnCreateAndWrite) //创建、修改重新更新时间
.setStateVisibility(StateVisibility.NeverReturnExpired) //永不返回过期数据
.disableCleanupInBackground()
.build()
In earlier versions, the user had to call cleanupInBackground explicitly to enable background cleanup; since Flink 1.10 it is enabled by default.
Cleanup in full snapshot
With the cleanup-in-full-snapshot strategy, expired entries are dropped when a full snapshot of the state is taken, so they are gone when the job is started or recovered from that snapshot. In other words, the state snapshot is only reloaded, and therefore only shrunk, when the job is restarted or recovered.
val stateTtlConfig = StateTtlConfig.newBuilder(Time.seconds(5)) //设置存活时间5s
.setUpdateType(UpdateType.OnCreateAndWrite) //创建、修改重新更新时间
.setStateVisibility(StateVisibility.NeverReturnExpired) //永不返回过期数据
.cleanupFullSnapshot()
.build()
Drawback: the job has to be stopped and restarted periodically for the memory to actually be released.
Incremental cleanup
Another option is to trigger cleanup of some state entries incrementally. The trigger can be a callback from each state access or/and each record processing. If this cleanup strategy is active for certain state, The storage backend keeps a lazy global iterator for this state over all its entries. Every time incremental cleanup is triggered, the iterator is advanced. The traversed state entries are checked and expired ones are cleaned up.
Alternatively, the incremental cleanup strategy can be used. The cleanup runs whenever state is read or written: the state backend keeps a lazy global iterator over all entries of the state, and each time incremental cleanup is triggered, the iterator advances over a batch of entries, checking them and removing the expired ones.
//设置TTL实效性
val stateTtlConfig = StateTtlConfig.newBuilder(Time.seconds(5)) //设置存活时间5s
.setUpdateType(UpdateType.OnCreateAndWrite) //创建、修改重新更新时间
.setStateVisibility(StateVisibility.NeverReturnExpired) //永不返回过期数据
.cleanupIncrementally(100,true)//默认 5 false
.build()
- cleanupSize: the number of state entries checked per cleanup trigger.
- runCleanupForEveryRecord: whether to additionally trigger the check for every processed record; if false, the check only runs when state is accessed or modified.
The first one is number of checked state entries per each cleanup triggering. It is always triggered per each state access. The second parameter defines whether to trigger cleanup additionally per each record processing. The default background cleanup for heap backend checks 5 entries without cleanup per record processing.
Notes:
- If there is no state access or record processing, expired state still is not removed and remains persisted.
- Incremental cleanup adds latency to record processing.
- Incremental cleanup is currently only supported by the heap state backend; the setting has no effect for RocksDB.
Cleanup during RocksDB compaction
If RocksDB is used as the state backend, a compaction filter can be added so that the expiration timestamps of state entries are checked during RocksDB compaction and expired entries are dropped.
RocksDB periodically runs asynchronous compactions to merge state updates and reduce storage. Flink compaction filter checks expiration timestamp of state entries with TTL and excludes expired values.
A more detailed introduction to RocksDB: https://rocksdb.org.cn/doc.html
val stateTtlConfig = StateTtlConfig.newBuilder(Time.seconds(5)) //设置存活时间5s
.setUpdateType(UpdateType.OnCreateAndWrite) //创建、修改重新更新时间
.setStateVisibility(StateVisibility.NeverReturnExpired) //永不返回过期数据
.cleanupInRocksdbCompactFilter(1000)//默认值1000
.build()
- queryTimeAfterNumEntries: the number of state entries processed during compaction after which the current timestamp is queried again in order to check for expiration.
Updating the timestamp more often can improve the cleanup speed, but it decreases compaction performance because of the JNI call from native code; by default the RocksDB background cleanup queries the current timestamp every 1000 processed entries.
Updating the timestamp more often can improve cleanup speed but it decreases compaction performance because it uses JNI call from native code. The default background cleanup for RocksDB backend queries the current timestamp each time 1000 entries have been processed.
Before Flink 1.10, the RocksDB compaction filter feature was disabled by default and had to be enabled explicitly by adding the following to flink-conf.yaml:
state.backend.rocksdb.ttl.compaction.filter.enabled: true
This feature is disabled by default. It has to be firstly activated for the RocksDB backend by setting the Flink configuration option state.backend.rocksdb.ttl.compaction.filter.enabled, or by calling RocksDBStateBackend::enableTtlCompactionFilter if a custom RocksDB state backend is created for a job.
Checkpoint & Savepoint
Since Flink is a stateful stream-processing service, state management and fault tolerance are essential. To keep programs robust, Flink provides the checkpointing mechanism, which persists the state of the computing nodes so that Flink can recover from failures. Checkpointing means that Flink periodically persists state data to a remote file system (depending on the state backend), e.g. HDFS. Checkpoints are coordinated and triggered by the JobManager: the JobManager periodically injects barriers that flow to the downstream computing nodes; when a node receives a barrier, it pre-commits its own state, acknowledges the JobManager, and forwards the barrier to its downstream tasks, which do the same. Only after all downstream nodes have pre-committed their state and the JobManager has collected all of their acknowledgements does the JobManager consider the checkpoint successful, at which point it automatically discards the previous checkpoint.
A savepoint is a manually triggered checkpoint: it takes a snapshot of the program and writes it to the state backend, relying on the regular checkpointing mechanism. During execution, programs are periodically snapshotted on the worker nodes to produce checkpoints. For recovery, only the latest completed checkpoint is needed, and older checkpoints can be safely discarded as soon as a new one is completed.
Savepoints are similar to these periodic checkpoints, except that they are triggered by the user and do not automatically expire when a newer checkpoint completes. Users can create a savepoint from the command line or when cancelling a job via the REST API.
Checkpointing is disabled by default in Flink; the user enables it by calling:
env.enableCheckpointing(1000);
To control the details of checkpoint execution, Flink lets users customize the checkpointing behaviour:
//take a checkpoint every 5s, exactly-once mode
env.enableCheckpointing(5000,CheckpointingMode.EXACTLY_ONCE)
//checkpoint timeout: 4s
env.getCheckpointConfig.setCheckpointTimeout(4000)
//at least 2s must pass between the end of one checkpoint and the start of the next (takes precedence over the checkpoint interval)
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(2000)
//fail the job if a checkpoint fails (equivalent to setFailOnCheckpointingErrors(true))
env.getCheckpointConfig.setTolerableCheckpointFailureNumber(0)
//what happens to checkpoint data when the job is cancelled
//RETAIN_ON_CANCELLATION: keep the checkpoint data if the job is cancelled without --savepoint
//DELETE_ON_CANCELLATION: delete the checkpoint data on cancellation (not recommended)
env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
State Backend
Flink offers several state backend implementations; the state backend determines where state data (checkpoint data) is stored. There are two ways to configure the state backend:
- Per-job state backend
val env = StreamExecutionEnvironment.getExecutionEnvironment()
env.setStateBackend(...)
- Cluster-wide default state backend, configured in flink-conf.yaml
#==============================================================================
# Fault tolerance and checkpointing
#==============================================================================
# The backend that will be used to store operator state checkpoints if
# checkpointing is enabled.
#
# Supported backends are 'jobmanager', 'filesystem', 'rocksdb', or the
# <class-name-of-factory>.
#
state.backend: rocksdb
# Directory for checkpoints filesystem, when using any of the default bundled
# state backends.
#
state.checkpoints.dir: hdfs:///flink-checkpoints
# Default target directory for savepoints, optional.
#
state.savepoints.dir: hdfs:///flink-savepoints
# Flag to enable/disable incremental checkpoints for backends that
# support incremental checkpoints (like the RocksDB state backend).
#
state.backend.incremental: false
Note
Because the state backend needs to sync data to HDFS, Flink must be able to connect to HDFS, so HADOOP_CLASSPATH has to be configured in ~/.bashrc:
JAVA_HOME=/usr/java/latest
HADOOP_HOME=/usr/hadoop-2.9.2
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
CLASSPATH=.
export JAVA_HOME
export CLASSPATH
export PATH
export HADOOP_HOME
export HADOOP_CLASSPATH=`hadoop classpath`
MemoryStateBackend(jobmanager)
The MemoryStateBackend holds state internally as objects on the Java heap. On checkpoints, this backend snapshots the state and sends it as part of the checkpoint acknowledgement message to the JobManager (the master), which stores it on its heap as well.
val env = StreamExecutionEnvironment.getExecutionEnvironment()
env.setStateBackend(new MemoryStateBackend(MAX_MEM_STATE_SIZE, true))
- The size of each individual state is by default limited to 5 MB. This value can be increased in the constructor of the MemoryStateBackend.
- Irrespective of the configured maximal state size, the state cannot be larger than the akka frame size (see Configuration).
- The aggregate state must fit into the JobManager memory.
Use cases: 1) local deployment and debugging; 2) jobs that do not involve much state.
FsStateBackend(filesystem)
This state backend keeps in-flight state in the TaskManager's (the compute node's) memory; when a checkpoint is taken, the TaskManager's state is written to a remote file system, and only a small amount of metadata is stored in the JobManager's memory.
val env = StreamExecutionEnvironment.getExecutionEnvironment()
env.setStateBackend(new FsStateBackend("hdfs:///flink-checkpoints",true))
Use cases: 1) jobs with very large state; 2) all production setups.
RocksDBStateBackend(rocksdb)
This state backend keeps in-flight state in a local RocksDB database on the TaskManager (the compute node); when a checkpoint is taken, the local RocksDB files are written to a remote file system, and only a small amount of metadata is stored in the JobManager's memory.
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-statebackend-rocksdb_2.11</artifactId>
<version>1.10.0</version>
</dependency>
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStateBackend(new RocksDBStateBackend("hdfs:///flink-rocksdb-checkpoints",true))
Limitations:
- As RocksDB’s JNI bridge API is based on byte[], the maximum supported size per key and per value is 2^31 bytes each. IMPORTANT: states that use merge operations in RocksDB (e.g. ListState) can silently accumulate value sizes > 2^31 bytes and will then fail on their next retrieval. This is currently a limitation of RocksDB JNI.
Use cases: 1) jobs with extremely large state; 2) all production setups.
Note that the amount of state that you can keep is only limited by the amount of disk space available. This allows keeping very large state, compared to the FsStateBackend that keeps state in memory. This also means, however, that the maximum throughput that can be achieved will be lower with this state backend. All reads/writes from/to this backend have to go through de/serialization to retrieve/store the state objects, which is also more expensive than always working with the on-heap representation as the heap-based backends are doing.
Testing failure recovery
[root@CentOS flink-1.10.0]# ./bin/flink list -m CentOS:8081
Waiting for response...
------------------ Running/Restarting Jobs -------------------
30.11.2020 02:45:48 : 4249111754347d2a026f36e10103ca84 : FlinkWordCountToplogy_checkpoint (RUNNING)
--------------------------------------------------------------
No scheduled jobs.
[root@CentOS flink-1.10.0]# ./bin/flink cancel -m CentOS:8081 4249111754347d2a026f36e10103ca84
Cancelling job 4249111754347d2a026f36e10103ca84.
Cancelled job 4249111754347d2a026f36e10103ca84.
[root@CentOS flink-1.10.0]# ./bin/flink run -c com.baizhi.checkpoints.FlinkWordCountToplogy -d -m CentOS:8081 -p 4 -s hdfs://CentOS:9000/flink-rocksdb-checkpoints/4249111754347d2a026f36e10103ca84/chk-94 /root/original-flink-datastream-1.0-SNAPSHOT.jar
Job has been submitted with JobID 1c7ae7333180f4b922e1eab73f7bee0c
[root@CentOS flink-1.10.0]# ./bin/flink cancel -m CentOS:8081 -s d8a42e6d8434788e8cf75aa84eb5ca49
DEPRECATION WARNING: Cancelling a job with savepoint is deprecated. Use "stop" instead.
Cancelling job d8a42e6d8434788e8cf75aa84eb5ca49 with savepoint to default savepoint directory.
Cancelled job d8a42e6d8434788e8cf75aa84eb5ca49. Savepoint stored in hdfs://CentOS:9000/flink-savepoints/savepoint-d8a42e-190d7295e06f.
Managed Operator State
State used on keyed streams is called keyed state; state used in operators on non-keyed streams is collectively called operator state. To use operator state, implement the generic CheckpointedFunction interface or the ListCheckpointed interface.
CheckpointedFunction
The CheckpointedFunction interface provides access to non-keyed state with different redistribution schemes. Implementing it requires the following two methods:
public interface CheckpointedFunction {
void snapshotState(FunctionSnapshotContext context) throws Exception;
void initializeState(FunctionInitializationContext context) throws Exception;
}
- snapshotState: called whenever a checkpoint is taken; this is where the data that needs to be persisted is typically written into the state.
- initializeState: called when the function is first initialized, to set up the state, or when the function is recovering its state after a failure.
Whenever a checkpoint has to be performed, snapshotState() is called. The counterpart, initializeState(), is called every time the user-defined function is initialized, be that when the function is first initialized or be that when the function is actually recovering from an earlier checkpoint. Given this, initializeState() is not only the place where different types of state are initialized, but also where state recovery logic is included.
Currently, list-style managed operator state is supported. The state is expected to be a list of serializable objects, independent of each other, so that it can be redistributed after a failure or rescaling. Flink currently offers two redistribution schemes for operator state:
- Even-split redistribution - Each operator instance keeps a list of state elements; logically, the operator state of the whole operator is the concatenation of the lists of all its parallel instances. On restore/redistribution, the overall list is split evenly across the current number of parallel instances. For example, if the checkpointed state of an operator with parallelism 1 contains element1 and element2, then after increasing the parallelism to 2, element1 may end up in operator instance 0 while element2 goes to operator instance 1.
- Union redistribution - Each operator instance keeps a list of state elements; logically, the operator state of the whole operator is the concatenation of the lists of all its parallel instances. On restore/redistribution, every operator instance receives the complete list of state elements.
class UserDefineBufferSinkEvenSplit(threshold: Int = 0) extends SinkFunction[(String, Int)] with CheckpointedFunction{
@transient
private var checkpointedState: ListState[(String, Int)] = _
private val bufferedElements = ListBuffer[(String, Int)]()
//the sink logic: buffer elements and flush once the threshold is reached
override def invoke(value: (String, Int), context: SinkFunction.Context[_]): Unit = {
bufferedElements += value
if(bufferedElements.size >= threshold){
for(e <- bufferedElements){
println("元素:"+e)
}
bufferedElements.clear()
}
}
//persist the buffered data into the operator state when a checkpoint is taken
override def snapshotState(context: FunctionSnapshotContext): Unit = {
checkpointedState.clear()
checkpointedState.update(bufferedElements.asJava)//store the buffered data as the new state
}
//state initialization and state recovery logic
override def initializeState(context: FunctionInitializationContext): Unit = {
//initialize the state; this may also be a recovery after a failure
val lsd=new ListStateDescriptor[(String, Int)]("list-state",createTypeInformation[(String,Int)])
checkpointedState = context.getOperatorStateStore.getListState(lsd) //even-split redistribution (default)
//context.getOperatorStateStore.getUnionListState(lsd) //union (broadcast-style) redistribution
if(context.isRestored){ //failure-recovery path
bufferedElements.appendAll(checkpointedState.get().asScala.toList)
}
}
}
object FlinkWordCountValueStateCheckpoint {
def main(args: Array[String]): Unit = {
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStateBackend(new RocksDBStateBackend("hdfs:///flink-rocksdb-checkpoints",true))
//间隔5s执行一次checkpoint 精准一次
env.enableCheckpointing(5000,CheckpointingMode.EXACTLY_ONCE)
//设置检查点超时 4s
env.getCheckpointConfig.setCheckpointTimeout(4000)
//开启本次检查点 与上一次完成的检查点时间间隔不得小于 2s 优先级高于 checkpoint interval
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(2000)
//如果检查点失败,任务宣告退出 setFailOnCheckpointingErrors(true)
env.getCheckpointConfig.setTolerableCheckpointFailureNumber(0)
//设置如果任务取消,系统该如何处理检查点数据
//RETAIN_ON_CANCELLATION:如果取消任务的时候,没有加--savepoint,系统会保留检查点数据
//DELETE_ON_CANCELLATION:取消任务,自动是删除检查点(不建议使用)
env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
//2.创建DataStream - 细化
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.map(new WordCountMapFunction)
.uid("wc-map")
//4.将计算的结果在控制打印
counts.addSink(new UserDefineBufferSinkEvenSplit(3))
.uid("buffer-sink")
//5.执行流计算任务
env.execute("Stream WordCount")
}
}
class WordCountMapFunction extends RichMapFunction[(String,Int),(String,Int)]{
var vs:ValueState[Int]=_
override def open(parameters: Configuration): Unit = {
//1.创建对应状态描述符
val vsd = new ValueStateDescriptor[Int]("wordcount", createTypeInformation[Int])
//2.获取RuntimeContext
var context: RuntimeContext = getRuntimeContext
//3.获取指定类型状态
vs=context.getState(vsd)
}
override def map(value: (String, Int)): (String, Int) = {
//获取历史值
val historyData = vs.value()
//更新状态
vs.update(historyData+value._2)
//返回最新值
(value._1,vs.value())
}
}
ListCheckpointed
ListCheckpointed接口是CheckpointedFunction功能更受限的变体,该接口仅支持list-style state的Even-Split分发策略。
public interface ListCheckpointed<T extends Serializable> {
List<T> snapshotState(long checkpointId, long timestamp) throws Exception;
void restoreState(List<T> state) throws Exception;
}
-
snapshotState:在做系统检查点的时候,用户只需要将需要存储的数据返回即可。
-
restoreState:系统在故障恢复时回调该方法,并将需要恢复的状态列表直接提供给用户。
On snapshotState() the operator should return a list of objects to checkpoint and restoreState has to handle such a list upon recovery. If the state is not re-partitionable, you can always return a Collections.singletonList(MY_STATE) in the snapshotState().
object FlinkCounterSource {
def main(args: Array[String]): Unit = {
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStateBackend(new RocksDBStateBackend("hdfs:///flink-rocksdb-checkpoints",true))
//间隔5s执行一次checkpoint 精准一次
env.enableCheckpointing(5000,CheckpointingMode.EXACTLY_ONCE)
//设置检查点超时 4s
env.getCheckpointConfig.setCheckpointTimeout(4000)
//开启本次检查点 与上一次完成的检查点时间间隔不得小于 2s 优先级高于 checkpoint interval
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(2000)
//如果检查点失败,任务宣告退出 setFailOnCheckpointingErrors(true)
env.getCheckpointConfig.setTolerableCheckpointFailureNumber(0)
//设置如果任务取消,系统该如何处理检查点数据
//RETAIN_ON_CANCELLATION:如果取消任务的时候,没有加--savepoint,系统会保留检查点数据
//DELETE_ON_CANCELLATION:取消任务,自动是删除检查点(不建议使用)
env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
val text = env.addSource(new UserDefineCounterSource)
.uid("UserDefineCounterSource")
text.print("offset")
//5.执行流计算任务
env.execute("Stream WordCount")
}
}
class UserDefineCounterSource extends RichParallelSourceFunction[Long] with ListCheckpointed[JLong]{
@volatile
private var isRunning = true
private var offset = 0L
//存储状态值
override def snapshotState(checkpointId: Long, timestamp: Long): util.List[JLong] = {
println("snapshotState:"+offset)
Collections.singletonList(offset)//返回一个不可拆分集合
}
override def restoreState(state: util.List[JLong]): Unit = {
println("restoreState:"+state.asScala)
offset=state.asScala.head //取第一个元素
}
override def run(ctx: SourceFunction.SourceContext[Long]): Unit = {
val lock = ctx.getCheckpointLock
while (isRunning) {
Thread.sleep(1000)
lock.synchronized({
ctx.collect(offset) //往下游输出当前offset
offset += 1
})
}
}
override def cancel(): Unit = isRunning=false
}
Broadcast State Pattern
广播状态是Flink提供的第三种Operator State,用于状态共享的场景:通常需要将一个吞吐量较小的流中的数据广播给下游的所有任务并保存为状态,另一个流则以只读的形式读取该广播状态。
A third type of supported operator state is the Broadcast State. Broadcast state was introduced to support use cases where some data coming from one stream is required to be broadcasted to all downstream tasks, where it is stored locally and is used to process all incoming elements on the other stream. As an example where broadcast state can emerge as a natural fit, one can imagine a low-throughput stream containing a set of rules which we want to evaluate against all elements coming from another stream. Having the above type of use cases in mind, broadcast state differs from the rest of operator states in that: 1. it has a map format, 2. it is only available to specific operators that have as inputs a broadcasted stream and a non-broadcasted one, and 3. such an operator can have multiple broadcast states with different names.
案例剖析
DataStream链接 BroadcastStream
//第一个流类型 第二个流类型 输出类型
class UserDefineBroadcastProcessFunction(tag:OutputTag[String],msd:MapStateDescriptor[String,String]) extends BroadcastProcessFunction[String,String,String]{
//处理正常流(高吞吐),通常在该方法中读取广播状态
override def processElement(value: String,
ctx: BroadcastProcessFunction[String, String, String]#ReadOnlyContext,
out: Collector[String]): Unit = {
//获取状态 只读
val readOnlyMapstate = ctx.getBroadcastState(msd)
if(readOnlyMapstate.contains("rule")){
val rule=readOnlyMapstate.get("rule")
if(value.contains(rule)){//将数据写出去
out.collect(rule+"\t"+value)
}else{
ctx.output(tag,rule+"\t"+value)
}
}else{//使用Side out将数据输出
ctx.output(tag,value)
}
}
//处理广播流,通常在这里修改需要广播的状态 低吞吐
override def processBroadcastElement(value: String,
ctx: BroadcastProcessFunction[String, String, String]#Context,
out: Collector[String]): Unit = {
val mapstate = ctx.getBroadcastState(msd)
mapstate.put("rule",value)
}
}
//仅仅输出满足规则的数据
object FlinkBroadcastNonKeyedStream {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
//吞吐量高
val inputs = env.socketTextStream("CentOS", 9999)
//定义需要广播流 吞吐量低
val bcsd=new MapStateDescriptor[String,String]("bcsd",createTypeInformation[String],createTypeInformation[String])
val broadcaststream = env.socketTextStream("CentOS", 8888)
.broadcast(bcsd)
val tag = new OutputTag[String]("notmatch")
val datastream = inputs.connect(broadcaststream)
.process(new UserDefineBroadcastProcessFunction(tag, bcsd))
datastream.print("满足条件")
datastream.getSideOutput(tag).print("不满足")
env.execute("Window Stream WordCount")
}
}
KeyedStream链接 BroadcastStream
//key类型 第一个流类型 第二个流类型 输出类型
class UserDefineKeyedBroadcastProcessFunction(msd:MapStateDescriptor[String,Double])
extends KeyedBroadcastProcessFunction[String,OrderItem,Rule,User]{
var userTotalCost:ReducingState[Double]=_
override def open(parameters: Configuration): Unit = {
val rsd = new ReducingStateDescriptor[Double]("userTotalCost", new ReduceFunction[Double] {
override def reduce(value1: Double, value2: Double): Double = value1 + value2
}, createTypeInformation[Double])
userTotalCost=getRuntimeContext.getReducingState(rsd)
}
override def processElement(value: OrderItem,
ctx: KeyedBroadcastProcessFunction[String, OrderItem, Rule, User]#ReadOnlyContext,
out: Collector[User]): Unit = {
//计算出当前品类别下用户的总消费
userTotalCost.add(value.count*value.price)
val ruleState = ctx.getBroadcastState(msd)
var u=User(value.id,value.name)
//该品类设定了奖励规则
if(ruleState!=null && ruleState.contains(value.category)){
if(userTotalCost.get() >= ruleState.get(value.category)){//达到了奖励阈值
out.collect(u)
userTotalCost.clear()
}else{
println("不满足条件:"+u+" 当前总消费:"+userTotalCost.get()+" threshold:"+ruleState.get(value.category))
}
}
}
override def processBroadcastElement(value: Rule, ctx: KeyedBroadcastProcessFunction[String, OrderItem, Rule, User]#Context, out: Collector[User]): Unit = {
val broadcastState = ctx.getBroadcastState(msd)
broadcastState.put(value.category,value.threshold)
}
}
case class OrderItem(id:String,name:String,category:String,count:Int,price:Double)
case class Rule(category:String,threshold:Double)
case class User(id:String,name:String)
object FlinkBroadcastKeyedStream {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
//id name 品类 数量 单价 -- 订单项
//1 zhangsan 水果 2 4.5
//吞吐量高
val inputs = env.socketTextStream("CentOS", 9999)
.map(line=>line.split("\\s+"))
.map(ts=>OrderItem(ts(0),ts(1),ts(2),ts(3).toInt,ts(4).toDouble))
.keyBy(orderItem=> orderItem.category+":"+orderItem.id)
//品类 阈值 水果 8.0 -- 奖 励
val bcsd=new MapStateDescriptor[String,Double]("bcsd",createTypeInformation[String],createTypeInformation[Double])
val broadcaststream = env.socketTextStream("CentOS", 8888)
.map(line=>line.split("\\s+"))
.map(ts=>Rule(ts(0),ts(1).toDouble))
.broadcast(bcsd)
inputs.connect(broadcaststream)
.process(new UserDefineKeyedBroadcastProcessFunction(bcsd))
.print("奖励:")
env.execute("Window Stream WordCount")
}
}
Queryable State
Architecture
Client连接其中的一个代理服务器,然后发送查询请求给Proxy服务器,查询指定key所对应的状态数据。底层Flink按照KeyGroup的方式管理Keyed State,这些KeyGroup被分配给所有的TaskManager服务,每个TaskManager负责多个KeyGroup状态的存储。为了找到查询key所在的KeyGroup所属的TaskManager服务,Proxy服务会去询问JobManager查询TaskManager的信息,然后直接访问该TaskManager上的QueryableStateServer服务器获取状态数据,最后将获取的状态数据返回给Client端。
-
QueryableStateClient- 运行在Flink集群以外,负责提交用户的查询给Flink集群
-
QueryableStateClientProxy - 运行在Flink集群每个TaskManager中的代理服务,负责接收客户端的查询,向相应的TaskManager获取请求的state,并将该state返回给客户端。
-
QueryableStateServer - 运行在Flink集群每个TaskManager中的服务,仅负责读取当前TaskManager主机上存储的状态数据。
The client connects to one of the proxies and sends a request for the state associated with a specific key, k. As stated in Working with State, keyed state is organized in Key Groups, and each TaskManager is assigned a number of these key groups. To discover which TaskManager is responsible for the key group holding k, the proxy will ask the JobManager. Based on the answer, the proxy will then query the QueryableStateServer running on that TaskManager for the state associated with k, and forward the response back to the client.
激活 Queryable State
- 将Flink的opt/目录下的flink-queryable-state-runtime_2.11-1.10.0.jar拷贝到Flink的lib/目录。
[root@CentOS flink-1.10.0]# cp opt/flink-queryable-state-runtime_2.11-1.10.0.jar lib/
在flink-conf.yaml文件中添加如下配置
queryable-state.enable: true
[root@CentOS flink-1.10.0]# ./bin/start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host CentOS.
Starting taskexecutor daemon on host CentOS.
查看TaskManager启动日志
Making State Queryable
为了使State对外界可见,需要通过以下两种方式之一显式地使其可查询:
-
创建QueryableStateStream,该QueryableStateStream充当一个Sink的输出,仅仅是将数据存储到state中。
-
或者通过stateDescriptor.setQueryable(String queryableStateName)方法使状态可查询。
Queryable State Stream
用户可以调用KeyedStream的.asQueryableState(stateName, stateDescriptor)方法,得到一个可查询状态的流。
// ValueState
QueryableStateStream asQueryableState(
String queryableStateName,
ValueStateDescriptor stateDescriptor)
// Shortcut for explicit ValueStateDescriptor variant
QueryableStateStream asQueryableState(String queryableStateName)
// FoldingState
QueryableStateStream asQueryableState(
String queryableStateName,
FoldingStateDescriptor stateDescriptor)
// ReducingState
QueryableStateStream asQueryableState(
String queryableStateName,
ReducingStateDescriptor stateDescriptor)
Note: There is no queryable ListState sink as it would result in an ever-growing list which may not be cleaned up and thus will eventually consume too much memory.
返回的QueryableStateStream可以看作是一个Sink,因为无法对QueryableStateStream进一步转换。在内部,QueryableStateStream被转换为运算符,该运算符使用所有传入记录来更新可查询状态实例。更新逻辑由asQueryableState调用中提供的StateDescriptor的类型隐含。在类似以下的程序中,keyedstream的所有记录将通过ValueState.update(value)用于更新状态实例:
stream.keyBy(0).asQueryableState("query-name")
object FlinkWordCountQueryableStream {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
//间隔5s执行一次checkpoint 精准一次
env.enableCheckpointing(5000,CheckpointingMode.EXACTLY_ONCE)
//设置检查点超时 4s
env.getCheckpointConfig.setCheckpointTimeout(4000)
//开启本次检查点 与上一次完成的检查点时间间隔不得小于 2s 优先级高于 checkpoint interval
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(2000)
//如果检查点失败,任务宣告退出 setFailOnCheckpointingErrors(true)
env.getCheckpointConfig.setTolerableCheckpointFailureNumber(0)
//设置如果任务取消,系统该如何处理检查点数据
//RETAIN_ON_CANCELLATION:如果取消任务的时候,没有加--savepoint,系统会保留检查点数据
//DELETE_ON_CANCELLATION:取消任务,自动是删除检查点(不建议使用)
env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
var rsd=new ReducingStateDescriptor[(String,Int)]("reducestate",new ReduceFunction[(String, Int)] {
override def reduce(v1: (String, Int), v2: (String, Int)): (String, Int) = {
(v1._1,(v1._2+v2._2))
}
},createTypeInformation[(String,Int)])
env.socketTextStream("CentOS", 9999)
.flatMap(line => line.split("\\s+"))
.map(word => (word, 1))
.keyBy(0)
.asQueryableState("wordcount", rsd)//状态名字,后期查询需要
//5.执行流计算任务
env.execute("Stream WordCount")
}
}
Managed Keyed State
class WordCountMapFunction extends RichMapFunction[(String,Int),(String,Int)]{
var vs:ValueState[Int]=_
override def open(parameters: Configuration): Unit = {
//1.创建对应状态描述符
val vsd = new ValueStateDescriptor[Int]("wordcount", createTypeInformation[Int])
vsd.setQueryable("query-wc")
//2.获取RuntimeContext
var context: RuntimeContext = getRuntimeContext
//3.获取指定类型状态
vs=context.getState(vsd)
}
override def map(value: (String, Int)): (String, Int) = {
//获取历史值
val historyData = vs.value()
//更新状态
vs.update(historyData+value._2)
//返回最新值
(value._1,vs.value())
}
}
object FlinkWordCountQueryable {
def main(args: Array[String]): Unit = {
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
//2.创建DataStream - 细化
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.map(new WordCountMapFunction)
//4.将计算的结果在控制打印
counts.print()
//5.执行流计算任务
env.execute("Stream WordCount")
}
}
Querying State
- 引入依赖
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-core</artifactId>
<version>1.10.0</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-queryable-state-client-java</artifactId>
<version>1.10.0</version>
</dependency>
- 查询代码如下
//连接proxy服务器
val client = new QueryableStateClient("CentOS", 9069)
var jobID=JobID.fromHexString("dc60cd61dc2d591014c062397e3bd6b9")
var queryName="wordcount" //状态名字
var queryKey="this" //用户需要查询的 key
var rsd=new ReducingStateDescriptor[(String,Int)]("reducestate",new ReduceFunction[(String, Int)] {
override def reduce(v1: (String, Int), v2: (String, Int)): (String, Int) = {
(v1._1,(v1._2+v2._2))
}
},createTypeInformation[(String,Int)])
val resultFuture = client.getKvState(jobID, queryName, queryKey, createTypeInformation[String], rsd)
//同步获取结果
val state: ReducingState[(String, Int)] = resultFuture.get()
println("结果:"+state.get())
client.shutdownAndWait()
异步获取结果
resultFuture.thenAccept(new Consumer[ReducingState[(String, Int)]] {
override def accept(t: ReducingState[(String, Int)]): Unit = {
println("结果:"+t.get())
}
})
Thread.sleep(10000)
client.shutdownAndWait()
Windows
窗口计算是流计算的核心,窗口将流数据切分成有限大小的“buckets”,我们可以对这个“buckets”中的有限数据做运算。
Windows are at the heart of processing infinite streams. Windows split the stream into “buckets” of finite size, over which we can apply computations.
Keyed Windows
stream
.keyBy(...) <- keyed versus non-keyed windows
.window(...) <- 必须指定: "window assigner"
[.trigger(...)] <- 可选: "trigger" (else default trigger) 决定了窗口何时触发计算
[.evictor(...)] <- 可选: "evictor" (else no evictor) 剔除器,剔除窗口内的元素
[.allowedLateness(...)] <- 可选: "lateness" (else zero) 是否允许有迟到
[.sideOutputLateData(...)] <- 可选: "output tag" (else no side output for late data)
.reduce/aggregate/fold/apply() <- 必须: "Window Function" 对窗口的数据做运算
[.getSideOutput(...)] <- 可选: "output tag" 获取迟到的数据
Non-Keyed Windows
stream
.windowAll(...) <- 必须指定: "window assigner"
[.trigger(...)] <- 可选: "trigger" (else default trigger) 决定了窗口何时触发计算
[.evictor(...)] <- 可选: "evictor" (else no evictor) 剔除器,剔除窗口内的元素
[.allowedLateness(...)] <- 可选: "lateness" (else zero) 是否允许有迟到
[.sideOutputLateData(...)] <- 可选: "output tag" (else no side output for late data)
.reduce/aggregate/fold/apply() <- 必须: "Window Function" 对窗口的数据做运算
[.getSideOutput(...)] <- 可选: "output tag" 获取迟到的数据
Window Lifecycle
当第一个元素落入窗口中的时候,窗口就被创建;当时间(水位线)越过窗口的EndTime时,该窗口被认定为就绪状态,可以应用WindowFunction对窗口中的元素进行运算;当时间(水位线)越过窗口的EndTime+allowed lateness时,该窗口会被删除。只有time-based windows才有生命周期的概念,因为Flink还有一种不基于时间的窗口(global window),它没有生命周期的概念。
In a nutshell, a window is created as soon as the first element that should belong to this window arrives, and the window is completely removed when the time (event or processing time) passes its end timestamp plus the user-specified allowed lateness (see Allowed Lateness). Flink guarantees removal only for time-based windows and not for other types, e.g. global windows. In addition, each window will have a Trigger (see Triggers) and a function (ProcessWindowFunction, ReduceFunction, AggregateFunction or FoldFunction) (see Window Functions) attached to it. The function will contain the computation to be applied to the contents of the window, while the Trigger specifies the conditions under which the window is considered ready for the function to be applied. Apart from the above, you can specify an Evictor (see Evictors) which will be able to remove elements from the window after the trigger fires and before and/or after the function is applied.
Keyed vs Non-Keyed Windows
Keyed Windows:在某一个时刻,会触发多个window任务,取决于Key的种类。
Non-Keyed Windows:因为没有key概念,所以任意时刻只有一个window任务执行。
In the case of keyed streams, any attribute of your incoming events can be used as a key (more details here). Having a keyed stream will allow your windowed computation to be performed in parallel by multiple tasks, as each logical keyed stream can be processed independently from the rest. All elements referring to the same key will be sent to the same parallel task.In case of non-keyed streams, your original stream will not be split into multiple logical streams and all the windowing logic will be performed by a single task, i.e. with parallelism of 1.
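为了直观对比两种写法,下面给出一个最小示意(基于Flink 1.10 Scala API,socket源与主机名沿用前文约定,对象名KeyedVsNonKeyedWindowDemo仅为演示用假设):
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time
object KeyedVsNonKeyedWindowDemo {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val words = env.socketTextStream("CentOS", 9999)
      .flatMap(line => line.split("\\s+"))
      .map(word => (word, 1))
    //Keyed Window:先keyBy再window,同一时刻可能有多个窗口任务按key并行触发
    words.keyBy(t => t._1)
      .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
      .reduce((v1, v2) => (v1._1, v1._2 + v2._2))
      .print("keyed")
    //Non-Keyed Window:直接windowAll,所有窗口逻辑由一个并行度为1的任务执行
    words.windowAll(TumblingProcessingTimeWindows.of(Time.seconds(5)))
      .reduce((v1, v2) => ("total", v1._2 + v2._2))
      .print("non-keyed")
    env.execute("Keyed vs Non-Keyed Window Demo")
  }
}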
Window Assigners
Window Assigner定义了如何将元素分配给窗口,这是通过在window(...)/windowAll()中指定一个Window Assigner实现的。
The window assigner defines how elements are assigned to windows. This is done by specifying the WindowAssigner of your choice in the window(...) (for keyed streams) or the windowAll() (for non-keyed streams) call. Time-based windows have a start timestamp (inclusive) and an end timestamp (exclusive) that together describe the size of the window. In code, Flink uses TimeWindow when working with time-based windows which has methods for querying the start- and end-timestamp and also an additional method maxTimestamp() that returns the largest allowed timestamp for a given windows.
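针对引文中提到的maxTimestamp(),下面用一小段示意代码说明TimeWindow的起止时间与maxTimestamp()的关系(窗口区间左闭右开,示例中的时间戳为任意假设值):
import org.apache.flink.streaming.api.windowing.windows.TimeWindow
//假设一个[10000, 15000)的时间窗口
val window = new TimeWindow(10000L, 15000L)
println(window.getStart)     //10000,窗口起始时间(包含)
println(window.getEnd)       //15000,窗口截止时间(不包含)
println(window.maxTimestamp) //14999,即getEnd-1,窗口允许的最大时间戳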
Tumbling Windows
滚动窗口分配器将每个元素分配给指定窗口大小的窗口。滚动窗口具有固定的大小,并且互不重叠。例如,如果您指定大小为5分钟的滚动窗口,则每五分钟会对当前窗口求值并启动一个新的窗口,如下图所示。
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
.reduce((v1,v2)=>(v1._1,v1._2+v2._2))
.print()
env.execute("Tumbling Window Stream WordCount")
Sliding Windows
滑动窗口分配器将元素分配给固定长度的窗口。类似于滚动窗口分配器,窗口的大小由窗口大小参数配置,额外的窗口滑动(slide)参数控制滑动窗口启动的频率。因此,如果滑动步长小于窗口大小,则滑动窗口可能会重叠,在这种情况下,元素会被分配给多个窗口。
object FlinkProcessingTimeSlidingWindow {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.window(SlidingProcessingTimeWindows.of(Time.seconds(4),Time.seconds(2)))
.aggregate(new UserDefineAggregateFunction)
.print()
//5.执行流计算任务
env.execute("Sliding Window Stream WordCount")
}
}
class UserDefineAggregateFunction extends AggregateFunction[(String,Int),(String,Int),(String,Int)]{
override def createAccumulator(): (String, Int) = ("",0)
override def add(value: (String, Int), accumulator: (String, Int)): (String, Int) = {
(value._1,value._2+accumulator._2)
}
override def getResult(accumulator: (String, Int)): (String, Int) = accumulator
override def merge(a: (String, Int), b: (String, Int)): (String, Int) = {
(a._1,a._2+b._2)
}
}
Session Windows
会话窗口分配器按活动会话对元素进行分组。与滚动窗口和滑动窗口相比,会话窗口不重叠且没有固定的开始和结束时间。相反,当会话窗口在一定时间段内未接收到元素时(即,发生不活动间隙时),它将关闭。
object FlinkProcessingTimeSessionWindow {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(t=>t._1)
.window(ProcessingTimeSessionWindows.withGap(Time.seconds(5)))
.apply(new UserDefineWindowFunction)
.print()
//5.执行流计算任务
env.execute("Session Window Stream WordCount")
}
}
class UserDefineWindowFunction extends WindowFunction[(String,Int),(String,Int),String,TimeWindow]{
override def apply(key: String,
window: TimeWindow,
input: Iterable[(String, Int)],
out: Collector[(String, Int)]): Unit = {
val sdf = new SimpleDateFormat("HH:mm:ss")
var start=sdf.format(window.getStart)
var end=sdf.format(window.getEnd)
var sum = input.map(_._2).sum
out.collect((s"${key}\t${start}~${end}",sum))
}
}
Global Windows
全局窗口分配器将具有相同键的所有元素分配给同一单个全局窗口。仅当您还指定自定义触发器时,此窗口方案才有用。否则,将不会执行任何计算,因为全局窗口没有可以处理聚合元素的自然终点。
object FlinkGlobalWindow {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(t=>t._1)
.window(GlobalWindows.create())
.trigger(CountTrigger.of(4))
.apply(new UserDefineGlobalWindowFunction)
.print()
//5.执行流计算任务
env.execute("Global Window Stream WordCount")
}
}
class UserDefineGlobalWindowFunction extends WindowFunction[(String,Int),(String,Int),String,GlobalWindow]{
override def apply(key: String,
window: GlobalWindow,
input: Iterable[(String, Int)],
out: Collector[(String, Int)]): Unit = {
var sum = input.map(_._2).sum
out.collect((s"${key}",sum))
}
}
Window Functions
定义窗口分配器后,我们需要指定要在每个窗口上执行的计算。这是Window Function的职责:一旦系统确定窗口已准备好进行处理,就用它来处理每个窗口的元素。窗口函数可以是ReduceFunction、AggregateFunction、FoldFunction、ProcessWindowFunction或WindowFunction(遗留)之一。其中ReduceFunction和AggregateFunction在运行效率上比ProcessWindowFunction高,因为前两者执行的是增量计算,只要有数据抵达窗口,系统就会调用ReduceFunction或AggregateFunction实现增量计算;而ProcessWindowFunction在窗口触发之前会一直缓存接收到的数据,只有当窗口就绪时才会对窗口中的元素做批量计算,不过该方法可以获取窗口的元数据信息。可以通过将ProcessWindowFunction与ReduceFunction、AggregateFunction或FoldFunction结合使用,既获得窗口元素的增量聚合,又能在ProcessWindowFunction中拿到窗口元数据,从而缓解这一问题。
ReduceFunction
class UserDefineReduceFunction extends ReduceFunction[(String,Int)]{
override def reduce(v1: (String, Int), v2: (String, Int)): (String, Int) = {
println("reduce:"+v1+"\t"+v2)
(v1._1,v2._2+v1._2)
}
}
object FlinkProcessingTimeTumblingWindowWithReduceFunction {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
.reduce(new UserDefineReduceFunction)
.print()
//5.执行流计算任务
env.execute("Tumbling Window Stream WordCount")
}
}
AggregateFunction
class UserDefineAggregateFunction extends AggregateFunction[(String,Int),(String,Int),(String,Int)]{
override def createAccumulator(): (String, Int) = ("",0)
override def add(value: (String, Int), accumulator: (String, Int)): (String, Int) = {
println("add:"+value+"\t"+accumulator)
(value._1,value._2+accumulator._2)
}
override def getResult(accumulator: (String, Int)): (String, Int) = accumulator
override def merge(a: (String, Int), b: (String, Int)): (String, Int) = {
println("merge:"+a+"\t"+b)
(a._1,a._2+b._2)
}
}
object FlinkProcessingTimeTumblingWindowWithAggregateFunction {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
.aggregate(new UserDefineAggregateFunction)
.print()
//5.执行流计算任务
env.execute("Tumbling Window Stream WordCount")
}
}
FoldFunction
class UserDefineFoldFunction extends FoldFunction[(String,Int),(String,Int)]{
override def fold(accumulator: (String, Int), value: (String, Int)): (String, Int) = {
println("fold:"+accumulator+"\t"+value)
(value._1,accumulator._2+value._2)
}
}
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(0)
.window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
.fold(("",0),new UserDefineFoldFunction)
.print()
//5.执行流计算任务
env.execute("Tumbling Window Stream WordCount")
注意 :FoldFunction不可以用在Session Window中
ProcessWindowFunction
class UserDefineProcessWindowFunction extends ProcessWindowFunction[(String,Int),(String,Int),String,TimeWindow]{
val sdf=new SimpleDateFormat("HH:mm:ss")
override def process(key: String,
context: Context,
elements: Iterable[(String, Int)],
out: Collector[(String, Int)]): Unit = {
val w = context.window//获取窗口元数据
val start =sdf.format(w.getStart)
val end = sdf.format(w.getEnd)
val total=elements.map(_._2).sum
out.collect((key+"\t["+start+"~"+end+"]",total))
}
}
object FlinkProcessingTimeTumblingWindowWithProcessWindowFunction {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(t=>t._1)
.window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
.process(new UserDefineProcessWindowFunction)
.print()
//5.执行流计算任务
env.execute("Tumbling Window Stream WordCount")
}
}
ProcessWindowFunction & Reduce/Aggregate/Fold
class UserDefineProcessWindowFunction2 extends ProcessWindowFunction[(String,Int),(String,Int),String,TimeWindow]{
val sdf=new SimpleDateFormat("HH:mm:ss")
override def process(key: String,
context: Context,
elements: Iterable[(String, Int)],
out: Collector[(String, Int)]): Unit = {
val w = context.window//获取窗口元数据
val start =sdf.format(w.getStart)
val end = sdf.format(w.getEnd)
val list = elements.toList
println("list:"+list)
val total=list.map(_._2).sum
out.collect((key+"\t["+start+"~"+end+"]",total))
}
}
class UserDefineReduceFunction2 extends ReduceFunction[(String,Int)]{
override def reduce(v1: (String, Int), v2: (String, Int)): (String, Int) = {
println("reduce:"+v1+"\t"+v2)
(v1._1,v2._2+v1._2)
}
}
object FlinkProcessingTimeTumblingWindowWithReduceFucntionAndProcessWindowFunction {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(t=>t._1)
.window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
.reduce(new UserDefineReduceFunction2,new UserDefineProcessWindowFunction2)
.print()
//5.执行流计算任务
env.execute("Tumbling Window Stream WordCount")
}
}
Per-window state In ProcessWindowFunction
class UserDefineProcessWindowFunction3 extends ProcessWindowFunction[(String,Int),(String,Int),String,TimeWindow]{
val sdf=new SimpleDateFormat("HH:mm:ss")
var wvsd:ValueStateDescriptor[Int]=_
var gvsd:ValueStateDescriptor[Int]=_
override def open(parameters: Configuration): Unit = {
wvsd=new ValueStateDescriptor[Int]("ws",createTypeInformation[Int])
gvsd=new ValueStateDescriptor[Int]("gs",createTypeInformation[Int])
}
override def process(key: String,
context: Context,
elements: Iterable[(String, Int)],
out: Collector[(String, Int)]): Unit = {
val w = context.window//获取窗口元数据
val start =sdf.format(w.getStart)
val end = sdf.format(w.getEnd)
val list = elements.toList
//println("list:"+list)
val total=list.map(_._2).sum
var wvs:ValueState[Int]=context.windowState.getState(wvsd)
var gvs:ValueState[Int]=context.globalState.getState(gvsd)
wvs.update(wvs.value()+total)
gvs.update(gvs.value()+total)
println("Window Count:"+wvs.value()+"\t"+"Global Count:"+gvs.value())
out.collect((key+"\t["+start+"~"+end+"]",total))
}
}
object FlinkProcessingTimeTumblingWindowWithProcessWindowFunctionState {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(t=>t._1)
.window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
.process(new UserDefineProcessWindowFunction3)
.print()
//5.执行流计算任务
env.execute("Tumbling Window Stream WordCount")
}
}
WindowFunction (Legacy)
在某些可以使用ProcessWindowFunction的地方,您也可以使用WindowFunction。这是ProcessWindowFunction的较旧版本,提供的上下文信息较少,并且没有某些高级功能,例如,每个窗口的keyed State。
class UserDefineWindowFunction extends WindowFunction[(String,Int),(String,Int),String,TimeWindow]{
override def apply(key: String,
window: TimeWindow,
input: Iterable[(String, Int)],
out: Collector[(String, Int)]): Unit = {
val sdf = new SimpleDateFormat("HH:mm:ss")
var start=sdf.format(window.getStart)
var end=sdf.format(window.getEnd)
var sum = input.map(_._2).sum
out.collect((s"${key}\t${start}~${end}",sum))
}
}
object FlinkProcessingTimeSessionWindowWithWindowFunction {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
val text = env.socketTextStream("CentOS", 9999)
//3.执行DataStream的转换算子
val counts = text.flatMap(line=>line.split("\\s+"))
.map(word=>(word,1))
.keyBy(t=>t._1)
.window(ProcessingTimeSessionWindows.withGap(Time.seconds(5)))
.apply(new UserDefineWindowFunction)
.print()
//5.执行流计算任务
env.execute("Session Window Stream WordCount")
}
}
Triggers
触发器确定窗口(由窗口分配器形成)何时准备好由Window Function处理。每个WindowAssigner都带有一个默认触发器。如果默认触发器不符合您的需求,则可以使用trigger(…)指定自定义触发器。
A Trigger determines when a window (as formed by the window assigner) is ready to be processed by the window function. Each WindowAssigner comes with a default Trigger. If the default trigger does not fit your needs, you can specify a custom trigger using trigger(...).
Trigger抽象类具有以下五种方法,这些方法允许触发器对不同事件作出响应,继而根据响应的返回值决定窗口的行为:
public abstract class Trigger<T, W extends Window> implements Serializable {
//一旦有元素落入窗口中,系统回调 onElement,根据返回值决定窗口的动作
/**
* @param element:落入到窗口的元素
* @param timestamp:元素抵达窗口时间
* @param window:窗口对象
* @param ctx:上下文对象,使用该对象主要用于设置定时器Timer(EventTime/ProcessingTime)
*/
public abstract TriggerResult onElement(T element, long timestamp, W window, TriggerContext ctx) throws Exception;
// ProcessingTime Timer定时回调
/**
* @param time:Timer触发的时间
* @param window:窗口对象
* @param ctx:上下文对象,使用该对象主要用于设置定时器Timer(EventTime/ProcessingTime)
*/
public abstract TriggerResult onProcessingTime(long time, W window, TriggerContext ctx) throws Exception;
// EventTime Timer定时回调
/**
* @param time:Timer触发的时间
* @param window:窗口对象
* @param ctx:上下文对象,使用该对象主要用于设置定时器Timer(EventTime/ProcessingTime)
*/
public abstract TriggerResult onEventTime(long time, W window, TriggerContext ctx) throws Exception;
//表示窗口是否支持状态合并,主要用于兼容会话窗口
public boolean canMerge() {
return false;
}
/**
* 系统完成窗口的合并以后,回调该方法
*
* @param window 合并以后的新窗口
* @param ctx 用于设置Timer以及获取state信息
*/
public void onMerge(W window, OnMergeContext ctx) throws Exception {
throw new UnsupportedOperationException("This trigger does not support merging.");
}
//当窗口被删除的时候回调,可以在该方法中销毁窗口的相关的状态信息
public abstract void clear(W window, TriggerContext ctx) throws Exception;
//...
}
其中前三个方法的返回值决定窗口的行为。这三个方法的返回值类型都是TriggerResult,它是一个枚举类型,定义了常见的窗口行为。
public enum TriggerResult {
CONTINUE(false, false),//不触发,也不删除窗口元素
FIRE_AND_PURGE(true, true),//触发,删除窗口元素
FIRE(true, false),//触发,不删除窗口元素 -默认行为
PURGE(false, true);//不触发窗口,要删除窗口中的元素
...
}
Built-in and Custom Triggers
Flink带有一些内置触发器,如:
- The (already mentioned) EventTimeTrigger fires based on the progress of event-time as measured by watermarks.
public class EventTimeTrigger extends Trigger<Object, TimeWindow> {
private static final long serialVersionUID = 1L;
private EventTimeTrigger() {}
@Override
public TriggerResult onElement(Object element, long timestamp, TimeWindow window, TriggerContext ctx) throws Exception {
//比对当前水位线时间>=窗口截止时间
if (window.maxTimestamp() <= ctx.getCurrentWatermark()) {
// if the watermark is already past the window fire immediately
return TriggerResult.FIRE;
} else {//注册事件时间定时器
ctx.registerEventTimeTimer(window.maxTimestamp());
return TriggerResult.CONTINUE;
}
}
//事件时间定时器回调!
@Override
public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
return time == window.maxTimestamp() ?
TriggerResult.FIRE :
TriggerResult.CONTINUE;
}
//忽略,由于onElement,没有注册处理时间定时器
@Override
public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) throws Exception {
return TriggerResult.CONTINUE;
}
//清除窗口定时器!
@Override
public void clear(TimeWindow window, TriggerContext ctx) throws Exception {
ctx.deleteEventTimeTimer(window.maxTimestamp());
}
@Override
public boolean canMerge() {
return true;
}
//兼容会话窗口,合并后产生新的窗口,所以需要重新注册事件时间定时器
@Override
public void onMerge(TimeWindow window,
OnMergeContext ctx) {
// only register a timer if the watermark is not yet past the end of the merged window
// this is in line with the logic in onElement(). If the watermark is past the end of
// the window onElement() will fire and setting a timer here would fire the window twice.
long windowMaxTimestamp = window.maxTimestamp();
if (windowMaxTimestamp > ctx.getCurrentWatermark()) {
ctx.registerEventTimeTimer(windowMaxTimestamp);
}
}
@Override
public String toString() {
return "EventTimeTrigger()";
}
public static EventTimeTrigger create() {
return new EventTimeTrigger();
}
}
- The ProcessingTimeTrigger fires based on processing time.
public class ProcessingTimeTrigger extends Trigger<Object, TimeWindow> {
private static final long serialVersionUID = 1L;
private ProcessingTimeTrigger() {}
@Override
public TriggerResult onElement(Object element, long timestamp, TimeWindow window, TriggerContext ctx) {
//注册处理时间定时器
ctx.registerProcessingTimeTimer(window.maxTimestamp());
return TriggerResult.CONTINUE;
}
//忽略,由于onElement,没有注册事件时间定时器
@Override
public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) throws Exception {
return TriggerResult.CONTINUE;
}
//处理时间定时器到期,直接触发即可
@Override
public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
return TriggerResult.FIRE;
}
//清除处理时间定时器
@Override
public void clear(TimeWindow window, TriggerContext ctx) throws Exception {
ctx.deleteProcessingTimeTimer(window.maxTimestamp());
}
@Override
public boolean canMerge() {
return true;
}
//兼容会话窗口,合并后产生新的窗口,所以需要重新注册处理时间定时器
@Override
public void onMerge(TimeWindow window,
OnMergeContext ctx) {
// only register a timer if the time is not yet past the end of the merged window
// this is in line with the logic in onElement(). If the time is past the end of
// the window onElement() will fire and setting a timer here would fire the window twice.
long windowMaxTimestamp = window.maxTimestamp();
if (windowMaxTimestamp > ctx.getCurrentProcessingTime()) {
ctx.registerProcessingTimeTimer(windowMaxTimestamp);
}
}
@Override
public String toString() {
return "ProcessingTimeTrigger()";
}
public static ProcessingTimeTrigger create() {
return new ProcessingTimeTrigger();
}
}
- The CountTrigger fires once the number of elements in a window exceeds the given limit.
public class CountTrigger<W extends Window> extends Trigger<Object, W> {
private static final long serialVersionUID = 1L;
//设置最大值
private final long maxCount;
//累计总计数
private final ReducingStateDescriptor<Long> stateDesc =
new ReducingStateDescriptor<>("count", new Sum(), LongSerializer.INSTANCE);
private CountTrigger(long maxCount) {
this.maxCount = maxCount;
}
@Override
public TriggerResult onElement(Object element, long timestamp, W window, TriggerContext ctx) throws Exception {
ReducingState<Long> count = ctx.getPartitionedState(stateDesc);
count.add(1L);//每接收一个元素,累加器加1
if (count.get() >= maxCount) {//如果达到最大值,则触发执行
count.clear();
return TriggerResult.FIRE;
}
return TriggerResult.CONTINUE;
}
//忽略
@Override
public TriggerResult onEventTime(long time, W window, TriggerContext ctx) {
return TriggerResult.CONTINUE;
}
//忽略
@Override
public TriggerResult onProcessingTime(long time, W window, TriggerContext ctx) throws Exception {
return TriggerResult.CONTINUE;
}
//清除计数
@Override
public void clear(W window, TriggerContext ctx) throws Exception {
ctx.getPartitionedState(stateDesc).clear();
}
@Override
public boolean canMerge() {
return true;
}
//如果是会话窗口,在合并的时候,计数也需要合并
@Override
public void onMerge(W window, OnMergeContext ctx) throws Exception {
ctx.mergePartitionedState(stateDesc);
}
@Override
public String toString() {
return "CountTrigger(" + maxCount + ")";
}
public static <W extends Window> CountTrigger<W> of(long maxCount) {
return new CountTrigger<>(maxCount);
}
private static class Sum implements ReduceFunction<Long> {
private static final long serialVersionUID = 1L;
@Override
public Long reduce(Long value1, Long value2) throws Exception {
return value1 + value2;
}
}
}
- The PurgingTrigger takes as argument another trigger and transforms it into a purging one.
If you need to implement a custom trigger, you should check out the abstract Trigger class. Please note that the API is still evolving and might change in future versions of Flink.
public class PurgingTrigger<T, W extends Window> extends Trigger<T, W> {
private static final long serialVersionUID = 1L;
//目标trigger(被代理的Trigger)
private Trigger<T, W> nestedTrigger;
private PurgingTrigger(Trigger<T, W> nestedTrigger) {
this.nestedTrigger = nestedTrigger;
}
//将FIRE包装为FIRE_AND_PURGE
@Override
public TriggerResult onElement(T element, long timestamp, W window, TriggerContext ctx) throws Exception {
TriggerResult triggerResult = nestedTrigger.onElement(element, timestamp, window, ctx);
return triggerResult.isFire() ? TriggerResult.FIRE_AND_PURGE : triggerResult;
}
//将FIRE包装为FIRE_AND_PURGE
@Override
public TriggerResult onEventTime(long time, W window, TriggerContext ctx) throws Exception {
TriggerResult triggerResult = nestedTrigger.onEventTime(time, window, ctx);
return triggerResult.isFire() ? TriggerResult.FIRE_AND_PURGE : triggerResult;
}
//将FIRE包装为FIRE_AND_PURGE
@Override
public TriggerResult onProcessingTime(long time, W window, TriggerContext ctx) throws Exception {
TriggerResult triggerResult = nestedTrigger.onProcessingTime(time, window, ctx);
return triggerResult.isFire() ? TriggerResult.FIRE_AND_PURGE : triggerResult;
}
@Override
public void clear(W window, TriggerContext ctx) throws Exception {
nestedTrigger.clear(window, ctx);
}
@Override
public boolean canMerge() {
return nestedTrigger.canMerge();
}
@Override
public void onMerge(W window, OnMergeContext ctx) throws Exception {
nestedTrigger.onMerge(window, ctx);
}
@Override
public String toString() {
return "PurgingTrigger(" + nestedTrigger.toString() + ")";
}
public static <T, W extends Window> PurgingTrigger<T, W> of(Trigger<T, W> nestedTrigger) {
return new PurgingTrigger<>(nestedTrigger);
}
@VisibleForTesting
public Trigger<T, W> getNestedTrigger() {
return nestedTrigger;
}
}
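下面给出一个使用PurgingTrigger包装内置CountTrigger的示意用法(每累计4条记录触发一次计算,socket源沿用前文约定,对象名FlinkPurgingTriggerDemo仅为演示用假设):
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows
import org.apache.flink.streaming.api.windowing.triggers.{CountTrigger, PurgingTrigger}
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow
object FlinkPurgingTriggerDemo {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.socketTextStream("CentOS", 9999)
      .flatMap(line => line.split("\\s+"))
      .map(word => (word, 1))
      .keyBy(t => t._1)
      .window(GlobalWindows.create())
      .trigger(PurgingTrigger.of(CountTrigger.of[GlobalWindow](4))) //FIRE被包装为FIRE_AND_PURGE
      .reduce((v1, v2) => (v1._1, v1._2 + v2._2))
      .print()
    env.execute("PurgingTrigger Demo")
  }
}
与前文Global Windows案例中直接使用CountTrigger相比,包装后的触发器会在每次触发后清空窗口中的元素,避免GlobalWindow中缓存的数据无限增长。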
Evictors(剔除器)
Flink的窗口模型允许除了WindowAssigner和Trigger之外还指定一个可选的Evictor。可以使用evictor(…)方法来完成此操作。Evictor可以在Trigger触发后,应用Window Function之前或之后从窗口中删除元素。
public interface Evictor<T, W extends Window> extends Serializable {
/**
* 可选,用于在调用 windowing function之前驱除元素.
*
* @param elements 当前窗口中的所有元素.
* @param size 当前窗口元素的大小.
* @param window 窗口对象
* @param evictorContext The context for the Evictor
*/
void evictBefore(Iterable<TimestampedValue<T>> elements, int size, W window, EvictorContext evictorContext);
/**
* 可选,用于在调用 windowing function之后驱除元素.
*
* @param elements 当前窗口中的所有元素.
* @param size 当前窗口元素的大小.
* @param window 窗口对象
* @param evictorContext The context for the Evictor
*/
void evictAfter(Iterable<TimestampedValue<T>> elements, int size, W window, EvictorContext evictorContext);
}
pre-implemented evictors
-
CountEvictor: keeps up to a user-specified number of elements from the window and discards the remaining ones from the beginning of the window buffer.
public class CountEvictor<W extends Window> implements Evictor<Object, W> {
private static final long serialVersionUID = 1L;
private final long maxCount;//最大数量
private final boolean doEvictAfter;//触发的位置
private CountEvictor(long count, boolean doEvictAfter) {
this.maxCount = count;
this.doEvictAfter = doEvictAfter;
}
private CountEvictor(long count) {
this.maxCount = count;
this.doEvictAfter = false;
}
public static <W extends Window> CountEvictor<W> of(long maxCount) {
return new CountEvictor<>(maxCount);
}
public static <W extends Window> CountEvictor<W> of(long maxCount, boolean doEvictAfter) {
return new CountEvictor<>(maxCount, doEvictAfter);
}
@Override
public void evictBefore(Iterable<TimestampedValue<Object>> elements, int size, W window, EvictorContext ctx) {
if (!doEvictAfter) {
evict(elements, size, ctx);
}
}
@Override
public void evictAfter(Iterable<TimestampedValue<Object>> elements, int size, W window, EvictorContext ctx) {
if (doEvictAfter) {
evict(elements, size, ctx);
}
}
private void evict(Iterable<TimestampedValue<Object>> elements, int size, EvictorContext ctx) {
//window中的元素总数是否大于 maxCount
if (size <= maxCount) {
return;
} else {
int evictedCount = 0;//需要剔除的元素个数计数器
for (Iterator<TimestampedValue<Object>> iterator = elements.iterator();iterator.hasNext();){
iterator.next();
evictedCount++;
if (evictedCount > size - maxCount) {//剔除的元素数量够了,结束
break;
} else {//从迭代器中删除 元素
iterator.remove();
}
}
}
}
}
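CountEvictor的示意用法如下(在调用窗口函数之前只保留窗口缓冲区中最新的3条记录,socket源沿用前文约定,对象名仅为演示用假设):
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
import org.apache.flink.streaming.api.windowing.evictors.CountEvictor
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.api.windowing.windows.TimeWindow
object FlinkCountEvictorDemo {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.socketTextStream("CentOS", 9999)
      .windowAll(TumblingProcessingTimeWindows.of(Time.seconds(10)))
      .evictor(CountEvictor.of[TimeWindow](3)) //调用窗口函数前,只保留窗口中最新的3条记录
      .reduce((v1, v2) => v1 + "," + v2)       //把窗口内剩余的记录拼接输出
      .print()
    env.execute("CountEvictor Demo")
  }
}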
-
DeltaEvictor: takes a DeltaFunction and a threshold, computes the delta between the last element in the window buffer and each of the remaining ones, and removes the ones with a delta greater or equal to the threshold.
public class DeltaEvictor<T, W extends Window> implements Evictor<T, W> {
private static final long serialVersionUID = 1L;
DeltaFunction<T> deltaFunction;//计算差值
private double threshold;//阈值
private final boolean doEvictAfter;//剔除位置
private DeltaEvictor(double threshold, DeltaFunction<T> deltaFunction) {
this.deltaFunction = deltaFunction;
this.threshold = threshold;
this.doEvictAfter = false;
}
private DeltaEvictor(double threshold, DeltaFunction<T> deltaFunction, boolean doEvictAfter) {
this.deltaFunction = deltaFunction;
this.threshold = threshold;
this.doEvictAfter = doEvictAfter;
}
public static <T, W extends Window> DeltaEvictor<T, W> of(double threshold, DeltaFunction<T> deltaFunction) {
return new DeltaEvictor<>(threshold, deltaFunction);
}
public static <T, W extends Window> DeltaEvictor<T, W> of(double threshold, DeltaFunction<T> deltaFunction, boolean doEvictAfter) {
return new DeltaEvictor<>(threshold, deltaFunction, doEvictAfter);
}
@Override
public void evictBefore(Iterable<TimestampedValue<T>> elements, int size, W window, EvictorContext ctx) {
if (!doEvictAfter) {
evict(elements, size, ctx);
}
}
@Override
public void evictAfter(Iterable<TimestampedValue<T>> elements, int size, W window, EvictorContext ctx) {
if (doEvictAfter) {
evict(elements, size, ctx);
}
}
private void evict(Iterable<TimestampedValue<T>> elements, int size, EvictorContext ctx) {
//获取窗口中最后一个元素
TimestampedValue<T> lastElement = Iterables.getLast(elements);
//迭代遍历窗口中的其余元素和最后一个元素计算差值
for (Iterator<TimestampedValue<T>> iterator = elements.iterator(); iterator.hasNext();){
TimestampedValue<T> element = iterator.next();//获取窗口的元素
if (deltaFunction.getDelta(element.getValue(), lastElement.getValue()) >= this.threshold) {
iterator.remove();//剔除差值大于等于阈值的元素
}
}
}
@Override
public String toString() {
return "DeltaEvictor(" + deltaFunction + ", " + threshold + ")";
}
}
public interface DeltaFunction<DATA> extends Serializable {
double getDelta(DATA oldDataPoint, DATA newDataPoint);
}
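DeltaEvictor的示意用法如下(假设流中元素为(word, 数值)二元组,以数值字段差的绝对值作为delta,剔除与窗口最后一个元素差值大于等于阈值5的记录,对象名仅为演示用假设):
import org.apache.flink.streaming.api.functions.windowing.delta.DeltaFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
import org.apache.flink.streaming.api.windowing.evictors.DeltaEvictor
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.api.windowing.windows.TimeWindow
object FlinkDeltaEvictorDemo {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    //输入格式: word 数值,例如: apple 10
    env.socketTextStream("CentOS", 9999)
      .map(line => line.split("\\s+"))
      .map(ts => (ts(0), ts(1).toInt))
      .keyBy(t => t._1)
      .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
      .evictor(DeltaEvictor.of[(String, Int), TimeWindow](5.0, new DeltaFunction[(String, Int)] {
        //以数值字段差的绝对值作为delta
        override def getDelta(oldDataPoint: (String, Int), newDataPoint: (String, Int)): Double =
          Math.abs(oldDataPoint._2 - newDataPoint._2).toDouble
      }))
      .reduce((v1, v2) => (v1._1, v1._2 + v2._2))
      .print()
    env.execute("DeltaEvictor Demo")
  }
}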
-
TimeEvictor: takes as argument an interval in milliseconds and for a given window, it finds the maximum timestamp max_ts among its elements and removes all the elements with timestamps smaller than max_ts - interval.
public class TimeEvictor<W extends Window> implements Evictor<Object, W> {
private static final long serialVersionUID = 1L;
private final long windowSize;//时间间隔 毫秒
private final boolean doEvictAfter;
public TimeEvictor(long windowSize) {
this.windowSize = windowSize;
this.doEvictAfter = false;
}
public TimeEvictor(long windowSize, boolean doEvictAfter) {
this.windowSize = windowSize;
this.doEvictAfter = doEvictAfter;
}
public static <W extends Window> TimeEvictor<W> of(Time windowSize) {
return new TimeEvictor<>(windowSize.toMilliseconds());
}
public static <W extends Window> TimeEvictor<W> of(Time windowSize, boolean doEvictAfter) {
return new TimeEvictor<>(windowSize.toMilliseconds(), doEvictAfter);
}
@Override
public void evictBefore(Iterable<TimestampedValue<Object>> elements, int size, W window, EvictorContext ctx) {
if (!doEvictAfter) {
evict(elements, size, ctx);
}
}
@Override
public void evictAfter(Iterable<TimestampedValue<Object>> elements, int size, W window, EvictorContext ctx) {
if (doEvictAfter) {
evict(elements, size, ctx);
}
}
private void evict(Iterable<TimestampedValue<Object>> elements, int size, EvictorContext ctx) {
//如果不是基于时间的窗口,直接返回
if (!hasTimestamp(elements)) {
return;
}
//获取所有元素中的 最大时间
long currentTime = getMaxTimestamp(elements);
//拿最大时间减去 windowSize获取evictCutoff ,所有时间 小于或等于该值的元素剔除掉
long evictCutoff = currentTime - windowSize;
for (Iterator<TimestampedValue<Object>> iterator = elements.iterator(); iterator.hasNext(); ) {
TimestampedValue<Object> record = iterator.next();
if (record.getTimestamp() <= evictCutoff) {
iterator.remove();
}
}
}
//判断元素是否包含时间
private boolean hasTimestamp(Iterable<TimestampedValue<Object>> elements) {
Iterator<TimestampedValue<Object>> it = elements.iterator();
if (it.hasNext()) {
return it.next().hasTimestamp();
}
return false;
}
//计算最大时间
private long getMaxTimestamp(Iterable<TimestampedValue<Object>> elements) {
long currentTime = Long.MIN_VALUE;
for (Iterator<TimestampedValue<Object>> iterator = elements.iterator(); iterator.hasNext();){
TimestampedValue<Object> record = iterator.next();
currentTime = Math.max(currentTime, record.getTimestamp());
}
return currentTime;
}
@Override
public String toString() {
return "TimeEvictor(" + windowSize + ")";
}
}
案例1
class KeyWordEvictor(keyWord:String,doEvictorAfter:Boolean=false) extends Evictor[String,TimeWindow]{
override def evictBefore(elements: lang.Iterable[TimestampedValue[String]], size: Int, window: TimeWindow, evictorContext: Evictor.EvictorContext): Unit = {
if(!doEvictorAfter){
evict(elements,size,window,evictorContext)
}
}
override def evictAfter(elements: lang.Iterable[TimestampedValue[String]], size: Int, window: TimeWindow, evictorContext: Evictor.EvictorContext): Unit = {
if(doEvictorAfter){
evict(elements,size,window,evictorContext)
}
}
private def evict(elements: lang.Iterable[TimestampedValue[String]], size: Int, window: TimeWindow, evictorContext: Evictor.EvictorContext): Unit={
val iterator = elements.iterator()
while(iterator.hasNext){
val element = iterator.next()
if(element.getValue.contains(keyWord)){
iterator.remove()
}
}
}
}
class KeyWordTrigger(keyWord:String) extends Trigger[String,TimeWindow]{
override def onElement(element: String, timestamp: Long, window: TimeWindow, ctx: Trigger.TriggerContext): TriggerResult = {
println("onElement:"+element)
if(element.contains(keyWord)){
TriggerResult.FIRE//触发窗口计算,但不清除窗口中的元素,也不删除窗口
}else{
TriggerResult.CONTINUE
}
}
override def onProcessingTime(time: Long, window: TimeWindow, ctx: Trigger.TriggerContext): TriggerResult = {
TriggerResult.CONTINUE
}
override def onEventTime(time: Long, window: TimeWindow, ctx: Trigger.TriggerContext): TriggerResult = {
TriggerResult.CONTINUE
}
override def canMerge: Boolean = true
override def clear(window: TimeWindow, ctx: Trigger.TriggerContext): Unit = {
println("窗口被清除了")
}
}
class UserDefineAllWindowFunction extends AllWindowFunction[String,String,TimeWindow]{
override def apply(window: TimeWindow, input: Iterable[String], out: Collector[String]): Unit = {
out.collect(input.mkString(","))
}
}
object FlinkWindowEvictor {
def main(args: Array[String]): Unit = {
//1.创建流计算执行环境
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.socketTextStream("CentOS", 9999)
.windowAll(TumblingProcessingTimeWindows.of(Time.seconds(10)))
.trigger(new KeyWordTrigger("end"))
.evictor(new KeyWordEvictor("end", doEvictorAfter = false))
.apply(new UserDefineAllWindowFunction)
.print()
env.execute("FlinkWindowTrigger")
}
}
EventTime(面试重要)
概述
Flink的时间窗口计算支持多种时间概念:ProcessingTime、IngestionTime、EventTime。
如果用户不做任何设置,Flink默认使用ProcessingTime。其中ProcessingTime和IngestionTime都由计算节点产生:IngestionTime是DataSource组件在产生记录时指定的时间,而ProcessingTime是记录抵达计算算子时的时间。由于这两种时间都由系统自动产生,使用起来难度较低,用户无需关心时钟问题,也不会出现迟到的数据;但它们都不能准确表达数据产生的实际时间,因此如果系统对时间语义要求比较苛刻,推荐使用EventTime。所谓EventTime指的是数据产生的实际时间,系统不再参考计算节点的系统时钟。
虽然EventTime可以准确表达事件所在的窗口,但由于事件时间是嵌入在数据记录内部的时间,可能会因为网络延迟或者故障等因素导致数据不能按照其产生的时间顺序抵达计算节点,这就给窗口的关闭带来了问题:计算节点并不知道何时该关闭窗口。
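下面是一个切换三种时间语义的最小示意(Flink 1.10 API,默认即为ProcessingTime,实际使用时三选一即可;使用EventTime时还需要像后文那样提供水位线):
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
val env = StreamExecutionEnvironment.getExecutionEnvironment
//处理时间(默认):以记录抵达算子时计算节点的时钟为准
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime)
//摄入时间:由Source组件在产生记录时打上时间戳
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime)
//事件时间:使用记录内部携带的时间,需要配合水位线使用
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)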
Watermarker
因此基于EventTime语义的窗口计算提出了Watermarker(水位线)的概念,用于告知计算节点目前事件时间的进度:一旦水位线越过窗口的EndTime,系统就会认定该窗口就绪,可以对窗口实施WindowFunction计算;当水位线越过窗口的EndTime+允许迟到的时间,该窗口就会被删除。
Watermarker(T) = 计算节点所获取的最大事件时间 - 最大乱序时间。例如,当前所见的最大事件时间为12:00:10、最大乱序时间为2s时,水位线即为12:00:08。
Watermarker实现有两种方式:固定频次(推荐)、Per Event计算
- 固定频次(推荐)
class UserDefineAssignerWithPeriodicWatermarks(maxOrdernessTimestamp:Long = 2000L) extends AssignerWithPeriodicWatermarks[(String,Long)]{
//maxOrdernessTimestamp:最大乱序时间(毫秒),水位线始终比所见最大EventTime晚该值,与测试案例1中 new UserDefineAssignerWithPeriodicWatermarks(2000) 的用法保持一致
private var maxSeenTimestamp:Long = -1L //计算节点所看到的最大EventTime值
/**
* 系统定期的调用 env.getConfig.setAutoWatermarkInterval(1000)设置系统计算水位线的频次
* @return
*/
override def getCurrentWatermark: Watermark = {
val currentWatermaker=maxSeenTimestamp-maxOrdernessTimestamp
println(s"newly compute watermarker:${new SimpleDateFormat("HH:mm:ss").format(currentWatermaker)}")
new Watermark(currentWatermaker)
}
/**
* 每接收一个record,系统调用该方法抽取EventTime
* @param element
* @param previousElementTimestamp
* @return
*/
override def extractTimestamp(element: (String, Long), previousElementTimestamp: Long): Long = {
maxSeenTimestamp=Math.max(maxSeenTimestamp,element._2)
element._2
}
}
- Per Event计算
class UserDefineAssignerWithPunctuatedWatermarks(maxOrdernessTimestamp:Long = 2000L) extends AssignerWithPunctuatedWatermarks[(String,Long)]{
//maxOrdernessTimestamp:最大乱序时间(毫秒),水位线始终比所见最大EventTime晚该值,与测试案例2中 new UserDefineAssignerWithPunctuatedWatermarks(2000) 的用法保持一致
private var maxSeenTimestamp:Long = -1L //计算节点所看到的最大EventTime值
/**
* 每接收一个record,系统调用该方法抽取最新的水位线值
* @param lastElement
* @param extractedTimestamp
* @return
*/
override def checkAndGetNextWatermark(lastElement: (String, Long), extractedTimestamp: Long): Watermark = {
maxSeenTimestamp=Math.max(maxSeenTimestamp,extractedTimestamp)
val currentWatermaker=maxSeenTimestamp-maxOrdernessTimestamp
println(s"newly compute watermarker:${new SimpleDateFormat("HH:mm:ss").format(currentWatermaker)}")
new Watermark(currentWatermaker)
}
/**
* 每接收一个record,系统调用该方法抽取EventTime
* @param element
* @param previousElementTimestamp
* @return
*/
override def extractTimestamp(element: (String, Long), previousElementTimestamp: Long): Long = {
element._2
}
}
测试案例1
object FlinkTumblingWindowsEventTime1{
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
//默认Flink用的是处理时间,必须设置EventTime
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.setParallelism(1)//并行度设置为1,方便测试和观察
env.getConfig.setAutoWatermarkInterval(1000)//水位线1s执行一次
//key 时间
env.socketTextStream("CentOS", 9999)
.map(_.split("\\s+"))
.map(ts=>(ts(0),ts(1).toLong))
.assignTimestampsAndWatermarks(new UserDefineAssignerWithPeriodicWatermarks(2000))//设置水位线计算策略
.keyBy(t =>t._1)
.window(TumblingEventTimeWindows.of(Time.seconds(5)))
.apply(new UserDefineEventWindowFunction)
.print()
env.execute("FlinkTumblingWindowsEventTime")
}
}
class UserDefineEventWindowFunction extends WindowFunction[(String,Long),String,String,TimeWindow]{
private var sdf:SimpleDateFormat=new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
override def apply(key: String, window: TimeWindow,
input: Iterable[(String, Long)],
out: Collector[String]): Unit = {
println(s"【${sdf.format(window.getStart)} - ${sdf.format(window.getEnd)}】")
out.collect( input.map(t=> sdf.format(t._2)).mkString(" | ") )
}
}
测试案例2
class UserDefineEventWindowFunction2 extends WindowFunction[(String,Long),String,String,TimeWindow]{
private var sdf:SimpleDateFormat=new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
override def apply(key: String, window: TimeWindow,
input: Iterable[(String, Long)],
out: Collector[String]): Unit = {
println(s"【${sdf.format(window.getStart)} - ${sdf.format(window.getEnd)}】")
out.collect( input.map(t=> sdf.format(t._2)).mkString(" | ") )
}
}
object FlinkTumblingWindowsEventTime2 {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
//默认Flink用的是处理时间,必须设置EventTime
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.setParallelism(1)//并行度设置为1,方便测试和观察
//key 时间
env.socketTextStream("CentOS", 9999)
.map(_.split("\\s+"))
.map(ts=>(ts(0),ts(1).toLong))
.assignTimestampsAndWatermarks(
new UserDefineAssignerWithPunctuatedWatermarks(2000))//设置水位线计算策略
.keyBy(t =>t._1)
.window(TumblingEventTimeWindows.of(Time.seconds(5)))
.apply(new UserDefineEventWindowFunction2)
.print()
env.execute("FlinkTumblingWindowsEventTime")
}
}
Allowed Lateness
在Flink中,默认情况下水位线一旦越过窗口的EndTime,窗口就被视为就绪状态,系统会调用WindowFunction对窗口元素做聚合运算,然后丢弃该窗口。原因是用户不设置允许迟到时间(Allowed Lateness)时,其默认值为0。
窗口删除条件:Watermarker >= Window EndTime + Allow Late Time
窗口触发条件:Watermarker >= Window EndTime
因此Flink可以通过设置Allow Late Time来处理迟到的数据:只要窗口还没有被删除,迟到的数据就可以再次加入窗口计算。
class UserDefineAssignerWithPunctuatedWatermarks(maxOrdernessTime:Long) extends AssignerWithPunctuatedWatermarks[(String,Long)]{
private var maxSeenEventTime=0L
private var sdf:SimpleDateFormat=new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
//每接收一条记录就会计算一次
override def checkAndGetNextWatermark(lastElement: (String, Long), extractedTimestamp: Long): Watermark = {
return new Watermark(maxSeenEventTime-maxOrdernessTime)
}
override def extractTimestamp(element: (String, Long), previousElementTimestamp: Long): Long = {
//将事件的最大值,赋值给maxSeenEventTime
maxSeenEventTime=Math.max(maxSeenEventTime,element._2)
println(s"事件时间:${sdf.format(element._2)} 水位线:${sdf.format(maxSeenEventTime-maxOrdernessTime)}")
return element._2
}
}
class UserDefineEventWindowFunction extends WindowFunction[(String,Long),String,String,TimeWindow]{
private var sdf:SimpleDateFormat=new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
override def apply(key: String, window: TimeWindow,
input: Iterable[(String, Long)],
out: Collector[String]): Unit = {
println(s"【${sdf.format(window.getStart)} - ${sdf.format(window.getEnd)}】")
out.collect( input.map(t=> sdf.format(t._2)).mkString(" | ") )
}
}
object FlinkTumblingWindowsLateData {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
//默认Flink用的是处理时间,必须设置EventTime
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.setParallelism(1)//并行度设置为1,方便测试和观察
//key 时间
env.socketTextStream("CentOS", 9999)
.map(_.split("\\s+"))
.map(ts=>(ts(0),ts(1).toLong))
.assignTimestampsAndWatermarks(new UserDefineAssignerWithPunctuatedWatermarks(2000))//设置水位线计算策略
.keyBy(t =>t._1)
.window(TumblingEventTimeWindows.of(Time.seconds(5)))
.allowedLateness(Time.seconds(2)) //允许迟到2s
.apply(new UserDefineEventWindowFunction)
.print()
env.execute("FlinkTumblingWindowsLateData")
}
}
Getting late data as a side output
使用Flink的Side Output功能,您可以获取被窗口丢弃的迟到数据流。首先,您需要在窗口流上使用sideOutputLateData(OutputTag)指定要收集迟到数据;然后,您可以在窗口化操作的结果上获取Side Output流:
class UserDefineEventWindowFunction extends WindowFunction[(String,Long),String,String,TimeWindow]{
private var sdf:SimpleDateFormat=new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
override def apply(key: String, window: TimeWindow,
input: Iterable[(String, Long)],
out: Collector[String]): Unit = {
println(s"【${sdf.format(window.getStart)} - ${sdf.format(window.getEnd)}】")
out.collect( input.map(t=> sdf.format(t._2)).mkString(" | ") )
}
}
class UserDefineAssignerWithPunctuatedWatermarks(maxOrdernessTime:Long) extends AssignerWithPunctuatedWatermarks[(String,Long)]{
private var maxSeenEventTime=0L
private var sdf:SimpleDateFormat=new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
//invoked once for every record received
override def checkAndGetNextWatermark(lastElement: (String, Long), extractedTimestamp: Long): Watermark = {
return new Watermark(maxSeenEventTime-maxOrdernessTime)
}
override def extractTimestamp(element: (String, Long), previousElementTimestamp: Long): Long = {
//record the largest event time seen so far
maxSeenEventTime=Math.max(maxSeenEventTime,element._2)
println(s"event time:${sdf.format(element._2)} watermark:${sdf.format(maxSeenEventTime-maxOrdernessTime)}")
return element._2
}
}
object FlinkTumblingWindowsLateData {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
//Flink defaults to processing time; EventTime must be set explicitly
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.setParallelism(1)//parallelism set to 1 for easier testing and observation
val lateTag = new OutputTag[(String,Long)]("latedata")
//input format: key timestamp
var stream= env.socketTextStream("CentOS", 9999)
.map(_.split("\\s+"))
.map(ts=>(ts(0),ts(1).toLong))
.assignTimestampsAndWatermarks(new UserDefineAssignerWithPunctuatedWatermarks(2000))//set the watermark generation strategy
.keyBy(t =>t._1)
.window(TumblingEventTimeWindows.of(Time.seconds(5)))
.allowedLateness(Time.seconds(2)) //allow records up to 2s late
.sideOutputLateData(lateTag)
.apply(new UserDefineEventWindowFunction)
stream.print("窗口输出")
stream.getSideOutput(lateTag).printToErr("迟到数据")
env.execute("FlinkTumblingWindowsLateData")
}
}
Join Operations
Window Join
A window join joins the elements of two streams that share a common key and lie in the same window. The windows can be defined with a WindowAssigner and are evaluated on elements from both streams. The elements from both sides are then passed to a user-defined JoinFunction or FlatJoinFunction, where the user can emit results that meet the join criteria. The general code structure looks like this:
streamA.join(streamB)
.where(<KeySelector>)   //key field of streamA
.equalTo(<KeySelector>) //key field of streamB
.window(<WindowAssigner>) //specify the window assigner
.apply(<JoinFunction>)  //apply the join function
Note
- The creation of pairwise combinations of elements behaves like an inner-join, meaning that an element from one stream is not emitted if it has no corresponding element from the other stream to be joined with.
- Those elements that do get joined will have as their timestamp the largest timestamp of the window they lie in. For example, a window with [5,10) as its boundaries would result in the joined elements having 9 as their timestamp.
Tumbling Window Join
When performing a tumbling window join, all elements with a common key and a common tumbling window are joined as pairwise combinations and passed to a JoinFunction or FlatJoinFunction. Because this behaves like an inner join, elements of one stream that have no matching elements from the other stream in their tumbling window are not emitted!
object FlinkTumblingWindowJoin {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
//Flink defaults to processing time; EventTime must be set explicitly
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.setParallelism(1)//parallelism set to 1 for easier testing and observation
//input format: 001 zhangsan <timestamp>
var user= env.socketTextStream("CentOS", 9999)
.map(line=>line.split("\\s+"))
.map(ts=>(ts(0),ts(1),ts(2).toLong))
.assignTimestampsAndWatermarks(new UserAssignerWithPunctuatedWatermarks(2000))
//input format: apple 001 <timestamp>
var order= env.socketTextStream("CentOS", 8888)
.map(line=>line.split("\\s+"))
.map(ts=>(ts(0),ts(1),ts(2).toLong))
.assignTimestampsAndWatermarks(new OrderAssignerWithPunctuatedWatermarks(2000))
user.join(order)
.where(t=>t._1)
.equalTo(t=>t._2)
.window(TumblingEventTimeWindows.of(Time.seconds(5)))
.allowedLateness(Time.seconds(2))
.apply((v1,v2)=>(v1._1,v1._2,v2._1))
.print("连接结果")
env.execute("FlinkTumblingWindowJoin")
}
}
Sliding Window Join
When performing a sliding window join, all elements with a common key and a common sliding window are joined as pairwise combinations and passed to a JoinFunction or FlatJoinFunction. Elements of one stream that have no matching elements from the other stream in the current sliding window are not emitted! Note that some elements may be joined in one sliding window but not in another!
object FlinkSlidingWindowJoin {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
//Flink defaults to processing time; EventTime must be set explicitly
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.setParallelism(1)//parallelism set to 1 for easier testing and observation
//input format: 001 zhangsan <timestamp>
var user= env.socketTextStream("CentOS", 9999)
.map(line=>line.split("\\s+"))
.map(ts=>(ts(0),ts(1),ts(2).toLong))
.assignTimestampsAndWatermarks(new UserAssignerWithPunctuatedWatermarks(2000))
//input format: apple 001 <timestamp>
var order= env.socketTextStream("CentOS", 8888)
.map(line=>line.split("\\s+"))
.map(ts=>(ts(0),ts(1),ts(2).toLong))
.assignTimestampsAndWatermarks(new OrderAssignerWithPunctuatedWatermarks(2000))
user.join(order)
.where(t=>t._1)
.equalTo(t=>t._2)
.window(SlidingEventTimeWindows.of(Time.seconds(4),Time.seconds(2)))
.allowedLateness(Time.seconds(2))
.apply((v1,v2)=>(v1._1,v1._2,v2._1))
.print("连接结果")
env.execute("FlinkSlidingWindowJoin")
}
}
Session Window Join
When performing a session window join, all elements with the same key that, when combined, fulfill the session criteria are joined in pairwise combinations and passed to a JoinFunction or FlatJoinFunction. Again this performs an inner join, so if there is a session window that contains elements from only one stream, no output is emitted!
object FlinkSessionWindowJoin {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
//Flink defaults to processing time; EventTime must be set explicitly
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.setParallelism(1)//parallelism set to 1 for easier testing and observation
//input format: 001 zhangsan <timestamp>
var user= env.socketTextStream("CentOS", 9999)
.map(line=>line.split("\\s+"))
.map(ts=>(ts(0),ts(1),ts(2).toLong))
.assignTimestampsAndWatermarks(new UserAssignerWithPunctuatedWatermarks(2000))
//input format: apple 001 <timestamp>
var order= env.socketTextStream("CentOS", 8888)
.map(line=>line.split("\\s+"))
.map(ts=>(ts(0),ts(1),ts(2).toLong))
.assignTimestampsAndWatermarks(new OrderAssignerWithPunctuatedWatermarks(2000))
user.join(order)
.where(t=>t._1)
.equalTo(t=>t._2)
.window(EventTimeSessionWindows.withGap(Time.seconds(2)))
.allowedLateness(Time.seconds(2))
.apply((v1,v2)=>(v1._1,v1._2,v2._1))
.print("连接结果")
env.execute("FlinkSessionWindowJoin")
}
}
Interval Join
The Interval Join joins elements of two streams (call them A and B) with a common key, where elements of stream B have timestamps that lie in a relative time interval around the timestamps of elements of stream A:
b.timestamp ∈ [a.timestamp + lowerBound; a.timestamp + upperBound]
or, equivalently,
a.timestamp + lowerBound <= b.timestamp <= a.timestamp + upperBound
where a and b are elements of A and B that share a common key. Both lowerBound and upperBound can be negative or positive, as long as lowerBound is always less than or equal to upperBound. The Interval Join currently only performs inner joins. When a pair of elements is passed to the ProcessJoinFunction, it is assigned the larger timestamp of the two elements.
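The bound condition can be read as a simple predicate. A minimal sketch below illustrates it with the same bounds used in the example that follows (the 0s and 4s bounds and the sample timestamps are assumed values for illustration):
object IntervalJoinBound {
  def main(args: Array[String]): Unit = {
    val lowerBound = 0L    //inclusive lower bound in ms
    val upperBound = 4000L //inclusive upper bound in ms
    //b joins a iff b.timestamp lies in [a.timestamp + lowerBound, a.timestamp + upperBound]
    def joinable(aTs: Long, bTs: Long): Boolean =
      bTs >= aTs + lowerBound && bTs <= aTs + upperBound
    println(joinable(1000L, 4500L)) //true:  4500 is within [1000, 5000]
    println(joinable(1000L, 6000L)) //false: 6000 is beyond 5000
  }
}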
class UserDefineProcessJoinFunction extends ProcessJoinFunction[(String,String,Long),(String,String,Long),String]{
val sdf:SimpleDateFormat=new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
override def processElement(left: (String, String, Long),
right: (String, String, Long),
ctx: ProcessJoinFunction[(String, String, Long), (String, String, Long), String]#Context,
out: Collector[String]): Unit = {
val leftTimestamp = ctx.getLeftTimestamp
val rightTimestamp = ctx.getRightTimestamp
val timestamp=ctx.getTimestamp
println(s"left:${sdf.format(leftTimestamp)},right:${sdf.format(rightTimestamp)} time:${sdf.format(timestamp)}")
out.collect(s"${left._1} ${left._2} ${right._1}")
}
}
object FlinkIntervalJoin {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
//Flink defaults to processing time; EventTime must be set explicitly
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.setParallelism(1)//parallelism set to 1 for easier testing and observation
//input format: 001 zhangsan <timestamp>
var user= env.socketTextStream("localhost", 8888)
.map(line=>line.split("\\s+"))
.map(ts=>(ts(0),ts(1),ts(2).toLong))
.assignTimestampsAndWatermarks(new UserAssignerWithPunctuatedWatermarks(2000))
.keyBy(t=>t._1)
//input format: apple 001 <timestamp>
var order= env.socketTextStream("localhost", 7777)
.map(line=>line.split("\\s+"))
.map(ts=>(ts(0),ts(1),ts(2).toLong))
.assignTimestampsAndWatermarks(new OrderAssignerWithPunctuatedWatermarks(2000))
.keyBy(t=>t._2)
user.intervalJoin(order)
.between(Time.seconds(0),Time.seconds(4))//a user record expires once the watermark exceeds its timestamp + 4s
.process(new ProcessJoinFunction[(String,String,Long),(String,String,Long),String] {
override def processElement(left: (String, String, Long),
right: (String, String, Long),
ctx: ProcessJoinFunction[(String, String, Long), (String, String, Long), String]#Context,
out: Collector[String]): Unit = {
out.collect(left._1+" "+left._2+" "+right._1)
}
})
.print("连接结果")
env.execute("FlinkSlidingJoin")
}
}
Flink HA Setup
Overview
The JobManager coordinates every Flink deployment. It is responsible for scheduling and resource management. By default, each Flink cluster has a single JobManager instance. This creates a single point of failure (SPOF): if the JobManager crashes, no new programs can be submitted and running programs fail. With JobManager high availability you can recover from JobManager failures and thereby eliminate the SPOF. High availability can be configured for both standalone and YARN clusters.
Standalone Cluster High Availability
The general idea of JobManager high availability for standalone clusters is that there is a single leading JobManager at any time, along with multiple standby JobManagers that take over leadership in case the leader fails. This guarantees that there is no single point of failure and that programs can make progress as long as some JobManager holds leadership. There is no explicit distinction between standby and leading JobManager instances; each JobManager can take on the role of leader or standby. As an example, consider the three-node setup below:
Setup Steps
- Clock synchronization
[root@CentOSX ~]# yum install -y ntp
[root@CentOSX ~]# ntpdate time.apple.com
13 Mar 17:09:10 ntpdate[6581]: step time server 17.253.84.253 offset 2169739.408920 sec
[root@CentOSX ~]# clock -w
- IP and hostname mapping
[root@CentOSX ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.52.130 CentOSA
192.168.52.131 CentOSB
192.168.52.132 CentOSC
- Passwordless SSH authentication
[root@CentOSX ~]# ssh-keygen -t rsa
[root@CentOSX ~]# ssh-copy-id CentOSA
[root@CentOSX ~]# ssh-copy-id CentOSB
[root@CentOSX ~]# ssh-copy-id CentOSC
- Disable the firewall
[root@CentOSX ~]# systemctl stop firewalld
[root@CentOSX ~]# systemctl disable firewalld
- Install the JDK and configure JAVA_HOME
[root@CentOSX ~]# rpm -ivh jdk-8u171-linux-x64.rpm
[root@CentOSX ~]# vi .bashrc
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.
export JAVA_HOME
export CLASSPATH
export PATH
[root@CentOSX ~]# source .bashrc
- Install and start the Zookeeper cluster
[root@CentOSX ~]# tar -zxf zookeeper-3.4.6.tar.gz -C /usr/
[root@CentOSX ~]# mkdir /root/zkdata
[root@CentOSA ~]# echo 1 >> /root/zkdata/myid
[root@CentOSB ~]# echo 2 >> /root/zkdata/myid
[root@CentOSC ~]# echo 3 >> /root/zkdata/myid
[root@CentOSX ~]# touch /usr/zookeeper-3.4.6/conf/zoo.cfg
[root@CentOSX ~]# vi /usr/zookeeper-3.4.6/conf/zoo.cfg
tickTime=2000
dataDir=/root/zkdata
clientPort=2181
initLimit=5
syncLimit=2
server.1=CentOSA:2887:3887
server.2=CentOSB:2887:3887
server.3=CentOSC:2887:3887
[root@CentOSX ~]# /usr/zookeeper-3.4.6/bin/zkServer.sh start zoo.cfg
[root@CentOSX ~]# /usr/zookeeper-3.4.6/bin/zkServer.sh status zoo.cfg
JMX enabled by default
Using config: /usr/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: `follower|leader`
[root@CentOSX ~]# jps
5879 `QuorumPeerMain`
7423 Jps
- Install HDFS HA
[root@CentOSX ~]# tar -zxf hadoop-2.9.2.tar.gz -C /usr/
[root@CentOSX ~]# vi .bashrc
HADOOP_HOME=/usr/hadoop-2.9.2
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
export HADOOP_HOME
export HADOOP_CLASSPATH=`hadoop classpath`
[root@CentOSX ~]# source .bashrc
[root@CentOSX ~]# vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml
<!-- Configure the NameNode nameservice ID -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>
<property>
<name>fs.trash.interval</name>
<value>30</value>
</property>
<!-- ZooKeeper quorum used for automatic failover -->
<property>
<name>ha.zookeeper.quorum</name>
<value>CentOSA:2181,CentOSB:2181,CentOSC:2181</value>
</property>
<!-- Fencing method and SSH private key location -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
[root@CentOSX ~]# vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<!-- Enable automatic failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Resolve the nameservice referenced in core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>CentOSA:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>CentOSB:9000</value>
</property>
<!-- JournalNode (shared edits) configuration -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://CentOSA:8485;CentOSB:8485;CentOSC:8485/mycluster</value>
</property>
<!-- Client-side failover proxy provider class -->
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
[root@CentOSX ~]# vi /usr/hadoop-2.9.2/etc/hadoop/slaves
CentOSA
CentOSB
CentOSC
Start HDFS (first-time cluster initialization)
[root@CentOSX ~]# hadoop-daemon.sh start journalnode (wait about 10 seconds)
[root@CentOSA ~]# hdfs namenode -format
[root@CentOSA ~]# hadoop-daemon.sh start namenode
[root@CentOSB ~]# hdfs namenode -bootstrapStandby
[root@CentOSB ~]# hadoop-daemon.sh start namenode
#Register the NameNode information in ZooKeeper; run this command on either CentOSA or CentOSB only
[root@CentOSA|B ~]# hdfs zkfc -formatZK
[root@CentOSA ~]# hadoop-daemon.sh start zkfc
[root@CentOSB ~]# hadoop-daemon.sh start zkfc
[root@CentOSX ~]# hadoop-daemon.sh start datanode
Install and configure Flink
[root@CentOSX ~]# tar -zxf flink-1.10.0-bin-scala_2.11.tgz -C /usr/
[root@CentOSX ~]# vi /usr/flink-1.10.0/conf/flink-conf.yaml
taskmanager.numberOfTaskSlots: 4
parallelism.default: 3
queryable-state.enable: true
#==============================================================================
# High Availability
#==============================================================================
high-availability: zookeeper
high-availability.storageDir: hdfs://mycluster/flink/ha/
high-availability.zookeeper.quorum: CentOSA:2181,CentOSB:2181,CentOSC:2181
high-availability.zookeeper.path.root: /flink
high-availability.cluster-id: /default_ns
#==============================================================================
# Fault tolerance and checkpointing
#==============================================================================
state.backend: rocksdb
state.checkpoints.dir: hdfs://mycluster/flink-checkpoints
state.savepoints.dir: hdfs://mycluster/flink-savepoints
state.backend.incremental: false
state.backend.rocksdb.ttl.compaction.filter.enabled: true
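The fault-tolerance settings above can also be applied per job in code rather than in flink-conf.yaml. A minimal Scala sketch, assuming the flink-statebackend-rocksdb dependency is on the classpath and reusing the HDFS checkpoint path from the configuration above:
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
import org.apache.flink.streaming.api.CheckpointingMode
import org.apache.flink.streaming.api.scala._

object CheckpointConfigSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    //checkpoint every 5 seconds with exactly-once semantics
    env.enableCheckpointing(5000, CheckpointingMode.EXACTLY_ONCE)
    //RocksDB state backend writing checkpoints to the HA HDFS cluster;
    //the second argument toggles incremental checkpoints (false, matching the yaml above)
    env.setStateBackend(new RocksDBStateBackend("hdfs://mycluster/flink-checkpoints", false))
    //...build the job as in the earlier examples, then call env.execute(...)
  }
}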
[root@CentOSX ~]# vi /usr/flink-1.10.0/conf/masters
CentOSA:8081
CentOSB:8081
CentOSC:8081
[root@CentOSX ~]# vi /usr/flink-1.10.0/conf/slaves
CentOSA
CentOSB
CentOSC
[root@CentOSA|B|C flink-1.10.0]# ./bin/start|stop-cluster.sh
Starting HA cluster with 3 masters.
Starting standalonesession daemon on host CentOSA.
Starting standalonesession daemon on host CentOSB.
Starting standalonesession daemon on host CentOSC.
Starting taskexecutor daemon on host CentOSA.
Starting taskexecutor daemon on host CentOSB.
Starting taskexecutor daemon on host CentOSC.
How do you find out which machine is the JobManager leader?
- Check the log files on each JobManager and search for leadership messages (e.g. grep for "leadership").
- Only the leader's log shows output from the TaskManagers!
To test failover, run the following on the leader host:
[root@CentOSX flink-1.10.0]# ./bin/jobmanager.sh start|stop
Recommended Project Reading
Electronic statement reconciliation: https://www.jianshu.com/p/e1cca7ca63d6
E-commerce user behavior analysis: https://cloud.tencent.com/developer/article/1647322
User shopping path tracking: http://shiyanjun.cn/archives/1857.html