Example reference: http://www.aboutyun.com/thread-8917-1-1.html
Custom sink implementation and property injection: http://www.coderli.com/flume-ng-sink-properties/
Custom interceptor: http://blog.csdn.net/xiao_jun_0820/article/details/38333171
Custom Kafka sink: www.itnose.net/detail/6187977.html
1. Sending a specified file with avro
(1) Create an avro.conf file in the conf directory and add the following configuration:
vim /usr/local/hadoop/apache-flume-1.6.0-bin/conf/avro.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
(2) Start Flume agent a1
Go into the bin directory and run:
./flume-ng agent -c . -f /usr/local/hadoop/apache-flume-1.6.0-bin/conf/avro.conf -n a1 -Dflume.root.logger=INFO,console
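Before sending anything, you can optionally check that the avro source is actually listening on port 4141 (assuming netstat is available on the machine):
> netstat -an | grep 4141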
(3) Create the log file to be sent and write some text into it
In the /usr/local/hadoop/apache-flume-1.6.0-bin directory, create a file named log.00 and write "hahahahah" into it.
(4) Send the file with avro-client
Open a second terminal, go into the bin directory, and run:
./flume-ng avro-client -c . -H localhost -p 4141 -F /usr/local/hadoop/apache-flume-1.6.0-bin/log.00
Terminal 1 (the agent console) then shows log output like the following, which means the file was delivered successfully:
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1: => /127.0.0.1:] OPEN
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1: => /127.0.0.1:] BOUND: /127.0.0.1:
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1: => /127.0.0.1:] CONNECTED: /127.0.0.1:
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1: :> /127.0.0.1:] DISCONNECTED
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1: :> /127.0.0.1:] UNBOUND
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1: :> /127.0.0.1:] CLOSED
INFO ipc.NettyServer: Connection to /127.0.0.1: disconnected.
INFO sink.LoggerSink: Event: { headers:{} body: 2E 2F 6C 6D hahahahah ./flum }
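Besides the avro-client command-line tool, an application can push events to this avro source directly through Flume's RPC client API (the flume-ng-sdk artifact). A minimal sketch, assuming the agent above is running and listening on localhost:4141:

import java.nio.charset.StandardCharsets;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class AvroSendExample {
    public static void main(String[] args) throws EventDeliveryException {
        // Connect to the avro source configured in avro.conf (bind 0.0.0.0, port 4141)
        RpcClient client = RpcClientFactory.getDefaultInstance("localhost", 4141);
        try {
            // Build a single event; the logger sink on the agent side will print its body
            Event event = EventBuilder.withBody("hahahahah", StandardCharsets.UTF_8);
            client.append(event);  // blocks until the agent acknowledges the event
        } finally {
            client.close();
        }
    }
}

Compile this against flume-ng-sdk 1.6.0 and run it while the agent from step (2) is up; the event shows up in the agent console just like the avro-client output above.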
2. Using EXEC (monitoring a single log file)
The exec source runs a given command and uses its output as the event stream. If the command is a tail of a file, the file needs to contain enough data before any output shows up.
(1) Create the agent configuration file: add a new file exec_tail.conf under /conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.command = tail -F /usr/local/hadoop/apache-flume-1.6.0-bin/log_exec_tail
# Note: the line above points at the log file to be monitored

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
(2) Start Flume agent a1
Go into the bin directory and run:
./flume-ng agent -c . -f /usr/local/hadoop/apache-flume-1.6.0-bin/conf/exec_tail.conf -n a1 -Dflume.root.logger=INFO,console
(3) Create the monitored log file and write data into it
In the /usr/local/hadoop/apache-flume-1.6.0-bin directory, create the file log_exec_tail and generate a good number of log lines in it, for example:
> for i in {1..100}; do echo "test line $i" >> /usr/local/hadoop/apache-flume-1.6.0-bin/log_exec_tail; done
Terminal 1 (the agent console) then shows log output like the following:
// (earlier output omitted)
INFO sink.LoggerSink: Event: { headers:{} body: 74 65 73 74 20 6C 69 6E 65 ... test line ... }
INFO sink.LoggerSink: Event: { headers:{} body: 74 65 73 74 20 6C 69 6E 65 ... test line ... }
// (one LoggerSink event is printed per appended line; the remaining entries look the same)
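A caveat worth noting: the exec source gives no delivery guarantee, and if the tail command or the agent dies, events can be lost and the command is not re-run by default. If that matters for your setup, the exec source also accepts restart-related properties (names as in the Flume user guide; the values below are only illustrative) that could be added to exec_tail.conf:

# Re-run the command if it exits, waiting 10 seconds between attempts (illustrative values)
a1.sources.r1.restart = true
a1.sources.r1.restartThrottle = 10000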
3. Using Spool (monitoring an entire directory)
The spooling directory source watches the configured directory for newly added files and reads the data out of them. Two points to note:
1) A file that has been copied into the spool directory must not be opened and edited afterwards.
2) The spool directory must not contain subdirectories.
(1) Create a spool.conf file in the conf directory and add the following configuration:
vim /usr/local/hadoop/apache-flume-1.6.0-bin/conf/spool.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.channels = c1
# The directory to monitor (note: once a file has been written into this directory it must not be modified)
a1.sources.r1.spoolDir = /usr/local/hadoop/apache-flume-1.6.0-bin/logs
a1.sources.r1.fileHeader = true

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
(2) Start Flume agent a1
Go into the bin directory and run:
> ./flume-ng agent -c . -f /usr/local/hadoop/apache-flume-1.6.0-bin/conf/spool.conf -n a1 -Dflume.root.logger=INFO,console
(3) Drop log files into the monitored directory
Generate 10 files:
> for i in {1..10}; do echo "test line $i" >> /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text$i.log; done
The agent console then shows log output like the following:
INFO sink.LoggerSink: Event: { headers:{file=/usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text1.log} body: 74 65 73 74 20 6C 69 6E 65 20 31 test line 1 }
INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.
INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text1.log to /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text1.log.COMPLETED
// (the same three lines repeat for spool_text10.log and for spool_text2.log through spool_text9.log)
Note: once a log file has been fully consumed, the suffix ".COMPLETED" is appended to its name.
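If the defaults do not fit, the spooling directory source exposes a few related knobs that could be added to spool.conf (property names as in the Flume user guide; the values shown are only illustrative):

# Use a different suffix for fully consumed files (default is .COMPLETED)
a1.sources.r1.fileSuffix = .DONE
# Or delete consumed files instead of renaming them (never | immediate)
a1.sources.r1.deletePolicy = immediate
# Ignore files whose names match this regular expression, e.g. files still being written
a1.sources.r1.ignorePattern = ^.*\.tmp$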