I'm trying to consolidate the output of every node in a clustered application into one simple, at-a-glance place. I don't need to store the data permanently; I just want to see all of the stdout in the same location. Eventually I'll want to keep less information, probably using log files, but for now I just want app -> stdout -> IRC, and Flume seems like a good fit.
All of the examples I've seen that use the exec source show a command using tail, although the documentation makes it sound like you can use any process that writes to standard out. My config (below) would run my application as the command, but for troubleshooting it runs a simple shell script that echoes "test" at a set interval.
I have everything running, and the IRC sink joins the IRC channel, but it never sends any messages. The last entry in the log is the exec source starting.
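For reference, an agent with this config would typically be started with something like the command below; the conf-file path matches the log further down, but the exact flags and paths may differ depending on how your build is packaged:
flume-ng agent --conf /etc/flume-ng/conf --conf-file /etc/flume-ng/conf/flume.conf --name agent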
Edit: Flume version flume-ng-1.2.0+24.43-1~squeeze
The flume.config file:
agent.sources = exec1
agent.channels = mem1
agent.sinks = irc1
agent.sources.exec1.type = exec
agent.sources.exec1.command = sh /var/lib/app/test.sh
agent.sources.exec1.channels = mem1
agent.sinks.irc1.type = irc
agent.sinks.irc1.hostname = 192.168.17.16
agent.sinks.irc1.nick = flume
agent.sinks.irc1.chan = agents
agent.sinks.irc1.channel = mem1
agent.channels.mem1.type = memory
agent.channels.mem1.capacity = 100
log4j.properties:
flume.root.logger=INFO,LOGFILE
flume.log.dir=/var/log/flume-ng
flume.log.file=flume.log
log4j.logger.org.apache.flume.lifecycle = INFO
log4j.logger.org.jboss = WARN
log4j.logger.org.mortbay = INFO
log4j.logger.org.apache.avro.ipc.NettyTransceiver = WARN
log4j.rootLogger=${flume.root.logger}
log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.LOGFILE.MaxFileSize=100MB
log4j.appender.LOGFILE.MaxBackupIndex=10
log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.LOGFILE.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d (%t) [%p - %l] %m%n
test.sh:
#!/bin/bash
x=1
while [ $x -ge 1 ]
do
echo "Test $x"
x=$(( $x + 1 ))
sleep 5
done
flume.log:
2013-01-31 12:45:08,184 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
2013-01-31 12:45:08,184 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
2013-01-31 12:45:08,184 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 9
2013-01-31 12:45:08,186 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:/etc/flume-ng/conf/flume.conf
2013-01-31 12:45:08,194 INFO conf.FlumeConfiguration: Processing:irc1
2013-01-31 12:45:08,194 INFO conf.FlumeConfiguration: Added sinks: irc1 Agent: agent
2013-01-31 12:45:08,194 INFO conf.FlumeConfiguration: Processing:irc1
2013-01-31 12:45:08,194 INFO conf.FlumeConfiguration: Processing:irc1
2013-01-31 12:45:08,194 INFO conf.FlumeConfiguration: Processing:irc1
2013-01-31 12:45:08,194 INFO conf.FlumeConfiguration: Processing:irc1
2013-01-31 12:45:08,207 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [agent]
2013-01-31 12:45:08,208 INFO properties.PropertiesFileConfigurationProvider: Creating channels
2013-01-31 12:45:08,249 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: mem1, registered successfully.
2013-01-31 12:45:08,249 INFO properties.PropertiesFileConfigurationProvider: created channel mem1
2013-01-31 12:45:08,262 INFO sink.DefaultSinkFactory: Creating instance of sink: irc1, type: irc
2013-01-31 12:45:08,266 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{exec1=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource@498665a0 }} sinkRunners:{irc1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@167a1116 counterGroup:{ name:null counters:{} } }} channels:{mem1=org.apache.flume.channel.MemoryChannel@27f7c6e1} }
2013-01-31 12:45:08,266 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel mem1
2013-01-31 12:45:08,266 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: mem1 started
2013-01-31 12:45:08,266 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink irc1
2013-01-31 12:45:08,267 INFO irc.IRCSink: IRC sink starting
2013-01-31 12:45:08,267 INFO nodemanager.DefaultLogicalNodeManager: Starting Source exec1
2013-01-31 12:45:08,267 INFO source.ExecSource: Exec source starting with command:sh /var/lib/app/test.sh
Edit: Batch size seems to have been part of the problem, because the source waited until it had 20 messages (the default?), which meant 100 seconds before I saw any output. Now, with batchSize = 1, the standard logger outputs the results, but the IRC sink complains with a NullPointerException, possibly because event.body is somehow null?
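For reference, this is the kind of change I mean, added to the exec source in flume.config (a sketch based on my reading of the Flume 1.x exec source, where batchSize appears to default to 20; check the docs for your exact build):
agent.sources.exec1.batchSize = 1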
1 Answer
The documentation for the IRC sink (found here: Flume 1.x User Guide) incorrectly says that splitlines does not need to be configured. It has no default value in the code, so it must be configured.
Looking at the source (found here: IRCSink.java), "splitlines" must be specified or you get a NullPointerException. There is code to handle "splitchars" being null, but not splitlines. Reported as FLUME-1892. (Edit: this issue was resolved in January, so it should no longer be a problem.)
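As a sketch of the workaround for builds that predate the fix, the property can be added to the sink definition in flume.config; the boolean value shown here is an assumption, so check IRCSink.java for what your version expects:
agent.sinks.irc1.splitlines = true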