INFO hdfs.HDFSEventSink: writer callback called

gxwragnw · published 2021-06-04 in Hadoop

I googled this error but found no solution.
I have a pseudo-distributed Hadoop setup with Flume, running in Docker. Writing from Flume to the console works, but when I try to write to HDFS, it says the writer callback failed.
flume.conf

a2.sources = r1
a2.sinks = k1
a2.channels = c1

a2.sources.r1.type = netcat
a2.sources.r1.bind = localhost
a2.sources.r1.port = 5140

a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.fileType = DataStream
a2.sinks.k1.hdfs.writeFormat = Text
a2.sinks.k1.hdfs.path = hdfs://localhost:8020/user/root/syslog/%y-%m-%d/%H%M/%S
a2.sinks.k1.hdfs.filePrefix = events
a2.sinks.k1.hdfs.roundUnit = minute
a2.sinks.k1.hdfs.useLocalTimeStamp = true

# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 10000
a2.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1
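For context, the %-escapes in `hdfs.path` are expanded per event from its timestamp, much like `strftime`. A minimal Python sketch of the directory layout this config would produce (the path is copied from the config above; the timestamp is an arbitrary example, not from the question):

```python
from datetime import datetime

def expand_path(ts: datetime) -> str:
    # Expand the Flume-style escape sequences (%y-%m-%d/%H%M/%S) the way
    # the HDFS sink would, using strftime semantics.
    return ts.strftime("hdfs://localhost:8020/user/root/syslog/%y-%m-%d/%H%M/%S")

print(expand_path(datetime(2021, 6, 4, 12, 30, 45)))
# → hdfs://localhost:8020/user/root/syslog/21-06-04/1230/45
```

So the sink creates a new subdirectory per second unless rounding is configured, which is worth keeping in mind when inspecting the output paths in HDFS.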

Flume run command

/usr/bin/flume-ng agent --conf-file /etc/flume-ng/conf/flume.conf --name a1 -Dflume.root.logger=INFO,console

All Hadoop services are running. How can I resolve this error?
