Configuring Flume's Failover Sink Processor

1 Official documentation

(screenshot of the Failover Sink Processor section of the official Flume user guide)

2 A diagram that makes it clear at a glance

(diagram: one agent's exec source feeds a memory channel whose failover sink group {k1, k2} forwards events over avro to two downstream logger agents)

3 Detailed configuration

  Source agent configuration (a1)


	# Agent a1: exec source feeding a failover sink group
	a1.sources = r1
	a1.sinks = k1 k2
	a1.channels = c1
	
	# Failover sink group: the sink with the highest priority value is active;
	# here k2 (priority 10) receives events first and k1 (priority 5) is the
	# standby. maxpenalty caps the backoff (in ms) applied to a failed sink.
	a1.sinkgroups = g1
	a1.sinkgroups.g1.sinks = k1 k2
	a1.sinkgroups.g1.processor.type = failover
	a1.sinkgroups.g1.processor.priority.k1 = 5
	a1.sinkgroups.g1.processor.priority.k2 = 10
	a1.sinkgroups.g1.processor.maxpenalty = 1000
	
	# Describe/configure the source
	a1.sources.r1.type = exec
	a1.sources.r1.command = tail -F /tmp/logs/test.log
	
	# Describe the sinks
	a1.sinks.k1.type = avro
	a1.sinks.k1.hostname = 127.0.0.1
	a1.sinks.k1.port = 50001
	
	a1.sinks.k2.type = avro
	a1.sinks.k2.hostname = 127.0.0.1
	a1.sinks.k2.port = 50002
	
	# Use a channel which buffers events in memory
	a1.channels.c1.type = memory
	a1.channels.c1.capacity = 1000
	a1.channels.c1.transactionCapacity = 100
	
	# Bind the source and sinks to the channel
	a1.sinks.k1.channel = c1
	a1.sinks.k2.channel = c1
	a1.sources.r1.channels = c1
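
A small optional step before starting anything: make sure the tailed file exists. tail -F will keep retrying a missing file, but pre-creating it avoids confusing startup output:

	mkdir -p /tmp/logs && touch /tmp/logs/test.log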

  Sink1 agent configuration (a2, avro source on port 50001)

	
	# Name the components on this agent
	a2.sources = r1
	a2.sinks = k1
	a2.channels = c1
	
	# Describe/configure the source
	a2.sources.r1.type = avro
	a2.sources.r1.channels = c1
	a2.sources.r1.bind = 127.0.0.1
	a2.sources.r1.port = 50001
	
	# Describe the sink
	a2.sinks.k1.type = logger
	a2.sinks.k1.channel = c1
	
	# Use a channel which buffers events in memory
	a2.channels.c1.type = memory
	a2.channels.c1.capacity = 1000
	a2.channels.c1.transactionCapacity = 100

  Sink2 agent configuration (a3, avro source on port 50002)

	# Name the components on this agent
	a3.sources = r1
	a3.sinks = k1
	a3.channels = c1
	
	# Describe/configure the source
	a3.sources.r1.type = avro
	a3.sources.r1.channels = c1
	a3.sources.r1.bind = 127.0.0.1
	a3.sources.r1.port = 50002
	
	# Describe the sink
	a3.sinks.k1.type = logger
	a3.sinks.k1.channel = c1
	
	# Use a channel which buffers events in memory
	a3.channels.c1.type = memory
	a3.channels.c1.capacity = 1000
	a3.channels.c1.transactionCapacity = 100
	
	

 

4 Starting the agents

  

Start the two sink agents (sink1 and sink2) first, then start the source agent:
			
	flume-ng agent -c conf -f /mnt/software/flume-1.6.0/flume-conf/failOver/sink2.conf -n a3 -Dflume.root.logger=DEBUG,console
	flume-ng agent -c conf -f /mnt/software/flume-1.6.0/flume-conf/failOver/sink1.conf -n a2 -Dflume.root.logger=DEBUG,console
	flume-ng agent -c conf -f /mnt/software/flume-1.6.0/flume-conf/failOver/load_source_case.conf -n a1 -Dflume.root.logger=DEBUG,console
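
With all three agents running, append a few lines to the tailed file to generate events (any text works; the strings below are the ones that appear in the test output in step 5):

	echo hadoop >> /tmp/logs/test.log
	echo zhangjin >> /tmp/logs/test.log
	echo xxxx >> /tmp/logs/test.log
	echo yyyy >> /tmp/logs/test.log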
	
	

5 Testing the failover behavior

After startup, the first events go through sink2, since k2 has the higher priority (10 > 5):
  
	19/02/21 23:45:47 INFO sink.LoggerSink: Event: { headers:{} body: 68 61 64 6F 6F 70                               hadoop }
	19/02/21 23:45:47 INFO sink.LoggerSink: Event: { headers:{} body: 7A 68 61 6E 67 6A 69 6E                         zhangjin }
	19/02/21 23:45:47 INFO sink.LoggerSink: Event: { headers:{} body: 78 78 78 78                                     xxxx }
	19/02/21 23:45:47 INFO sink.LoggerSink: Event: { headers:{} body: 79 79 79 79                                     yyyy }
	19/02/21 23:45:47 INFO sink.LoggerSink: Event: { headers:{} body: 7A 68 61 6E 67 6A 69 6E                         zhangjin }
	19/02/21 23:45:47 INFO sink.LoggerSink: Event: { headers:{} body: 78 78 78 78                                     xxxx }
	19/02/21 23:45:47 INFO sink.LoggerSink: Event: { headers:{} body: 79 79 79 79                                     yyyy }
	19/02/21 23:45:47 INFO sink.LoggerSink: Event: { headers:{} body: 7A 68 61 6E 67 6A 69 6E                         zhangjin }
	19/02/21 23:45:47 INFO sink.LoggerSink: Event: { headers:{} body: 78 78 78 78                                     xxxx }
	19/02/21 23:45:47 INFO sink.LoggerSink: Event: { headers:{} body: 79 79 79 79                                     yyyy }		
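
Now stop the sink2 agent (a3). One way to do it, assuming a3 is the only process started with sink2.conf (pressing Ctrl-C in its terminal works just as well):

	# Hypothetical one-liner: find the a3 JVM by its config file name and kill it
	kill $(ps -ef | grep 'sink2.conf' | grep -v grep | awk '{print $2}')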
	
After sink2 is killed, the source agent (a1) detects that it is down:
	
	Caused by: java.net.ConnectException: Connection refused
		at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
		at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
		at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:148)
		at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:104)
		at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:78)
		at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
		at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:41)
		... 3 more	
		
		
The failover processor marks k2 as failed and falls back to the standby; events are then delivered to sink1:
	
	19/02/21 23:45:41 INFO ipc.NettyServer: [id: 0x77bfe0b5, /127.0.0.1:47142 => /127.0.0.1:50001] BOUND: /127.0.0.1:50001
	19/02/21 23:45:41 INFO ipc.NettyServer: [id: 0x77bfe0b5, /127.0.0.1:47142 => /127.0.0.1:50001] CONNECTED: /127.0.0.1:47142
	19/02/21 23:47:14 INFO sink.LoggerSink: Event: { headers:{} body: 7A 68 61 6E 67 6A 69 6E                         zhangjin }
	19/02/21 23:47:14 INFO sink.LoggerSink: Event: { headers:{} body: 78 78 78 78                                     xxxx }
	19/02/21 23:47:14 INFO sink.LoggerSink: Event: { headers:{} body: 79 79 79 79                                     yyyy }
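
If sink2 (a3) is now restarted with the same command as in step 4, the failover processor should retry it after its cooldown (bounded by maxpenalty) and, because k2 carries the higher priority, route events back to it:

	flume-ng agent -c conf -f /mnt/software/flume-1.6.0/flume-conf/failOver/sink2.conf -n a3 -Dflume.root.logger=DEBUG,console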
		
		

6 Summary

As the test shows, once sink2 went down the events were rerouted to sink1, which is exactly the failover behavior this sink group is meant to provide.

  

 
