Exercise: Processing Flume Data with Spark Streaming

Set up a Flume source (netcat type) and keep sending it messages from a terminal. Flume gathers the messages into a sink (avro type), and the sink pushes them to Spark Streaming, which processes them and prints the result.
Versions: Spark 2.4.0, Flume 1.7.0

(based on PySpark)

1. Flume Installation

① Extracting the files

# Extract apache-flume-1.7.0-bin.tar.gz into /usr/local
sudo tar -zxvf apache-flume-1.7.0-bin.tar.gz -C /usr/local
cd /usr/local
# Rename the extracted directory to flume to simplify later commands
sudo mv ./apache-flume-1.7.0-bin ./flume
# Give ownership of /usr/local/flume to the current Linux user, assumed here to be the hadoop user
sudo chown -R hadoop:hadoop ./flume

② Environment variables

# Configure the environment variables
vim ~/.bashrc

# Add the following lines
export FLUME_HOME=/usr/local/flume
export FLUME_CONF_DIR=$FLUME_HOME/conf
export PATH=$PATH:$FLUME_HOME/bin
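
After saving the file, reload it so the new variables take effect in the current shell:

source ~/.bashrc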

③ Editing flume-env.sh

cd /usr/local/flume/conf 
sudo cp ./flume-env.sh.template ./flume-env.sh
sudo vim ./flume-env.sh

# Add the Java path; adjust it to match your own installation
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
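
If you are not sure where the JDK is installed, one way to find it (a quick sketch assuming a Debian/Ubuntu-style OpenJDK package) is to resolve the java binary on the PATH; JAVA_HOME is then the directory above bin (or above jre/bin):

# prints something like /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
readlink -f $(which java)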

④ Checking the Flume version

cd /usr/local/flume
./bin/flume-ng version

2. Creating the Flume Agent Configuration File (avro sink)

cd /usr/local/flume/conf
sudo vim ./flume-to-spark.conf

This creates the new file flume-to-spark.conf with the following contents:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Receive messages on port 33333
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 33333

# Send messages out through port 44444
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = localhost
a1.sinks.k1.port = 44444

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000000
a1.channels.c1.transactionCapacity = 1000000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

1. The Flume source type is netcat, bound to port 33333 on localhost. Messages can be sent to the Flume source with telnet localhost 33333 (an nc-based alternative is sketched after this list).

2. The Flume sink type is avro, bound to port 44444. The sink sends the messages out through localhost port 44444, while the Spark Streaming program listens on port 44444 the whole time.
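
If telnet is not installed, the same test message can be sent with nc instead (a minimal sketch, assuming the netcat utility is available):

# connects to the Flume netcat source, just like telnet
nc localhost 33333
# then type a line such as hello world and press Enter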

3. Spark Configuration

① Download spark-streaming-flume_2.11-2.4.1.jar

Here 2.11 is the Scala version and 2.4.x is the Spark version (download the jar that matches your own setup).

Download URL:

https://repo1.maven.org/maven2/org/apache/spark/spark-streaming-flume_2.11/2.4.1/spark-streaming-flume_2.11-2.4.1.jar

Put this jar file into the /usr/local/spark/jars/flume directory.
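
For example, the jar can be fetched from the command line into the current directory (a sketch assuming wget is installed):

wget https://repo1.maven.org/maven2/org/apache/spark/spark-streaming-flume_2.11/2.4.1/spark-streaming-flume_2.11-2.4.1.jar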

② Copy the jar into place (creating the target directory first if it does not exist)

sudo mkdir -p /usr/local/spark/jars/flume
sudo cp ./spark-streaming-flume_2.11-2.4.1.jar /usr/local/spark/jars/flume/

③ Modify the SPARK_DIST_CLASSPATH variable in conf/spark-env.sh under the Spark directory, adding the Flume-related jars to it.

export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath):$(/usr/local/hbase/bin/hbase classpath):/usr/local/spark/jars/flume/*:/usr/local/flume/lib/*

4. Writing the Spark Program That Uses the Flume Data Source
Create the Python file:

cd /usr/local/spark/mycode
mkdir flume
cd flume
sudo vim FlumeEventCount.py

The code is as follows:

from __future__ import print_function
import sys
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.flume import FlumeUtils
import pyspark

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: FlumeEventCount.py <hostname> <port>", file=sys.stderr)
        sys.exit(-1)
    # Create a StreamingContext with a 10-second batch interval
    sc = SparkContext(appName="FlumeEventCount")
    ssc = StreamingContext(sc, 10)
    hostname = sys.argv[1]
    port = int(sys.argv[2])
    # Receive the events pushed by the Flume avro sink on <hostname>:<port>
    stream = FlumeUtils.createStream(ssc, hostname, port, pyspark.StorageLevel.MEMORY_AND_DISK_SER_2)
    stream.pprint()
    # Count the events in each batch and print the count
    stream.count().map(lambda cnt: "Receive " + str(cnt) + " Flume events!!!!").pprint()
    ssc.start()
    ssc.awaitTermination()

5. Testing

First, start the Spark Streaming program (PySpark-based) (Terminal 1).

The arguments are localhost and port 44444 (this port matches the sink port in flume-to-spark.conf).

/usr/local/spark/bin/spark-submit --driver-class-path /usr/local/spark/jars/*:/usr/local/spark/jars/flume/* ./FlumeEventCount.py localhost 44444

Then open a new terminal and start the Flume agent (Terminal 2).

cd /usr/local/flume
bin/flume-ng agent --conf ./conf --conf-file ./conf/flume-to-spark.conf --name a1 -Dflume.root.logger=INFO,console

Finally, open another new terminal and connect to port 33333 (Terminal 3).

telnet localhost 33333
# then type: hello world

Terminal 1 then shows the result as two separate pieces of output: the received events printed by stream.pprint(), followed by the count message "Receive 1 Flume events!!!!".


This post is meant for learning and exchange; if you run into any problems, feel free to point them out in the comments.
