Spooling directory as source and HDFS as sink gets stuck without giving any error message

uujelgoq · posted on 2021-06-01 in Hadoop

I am trying to use Flume to load some ".log" files from the local file system into HDFS. I am using a spooling directory as the source and HDFS as the sink. I run the agent with the following command:
bin/flume-ng agent --conf /home/flume/conf --conf-file /home/flume/conf/test.conf --name agent
When I execute the above command, it only prints the output below and then nothing happens (it gets stuck).
Info: Sourcing environment configuration script /home/flume/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/hadoop-2.7.2/bin/hadoop) for HDFS access
+ exec /usr/java/jdk1.8.0_/bin/java -Xmx20m -cp '/home/flume/conf:/home/flume/lib/*:/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/*' -Djava.library.path=:/usr/java/packages/lib/amd64:/lib64:/lib:/usr/lib org.apache.flume.node.Application --conf-file /home/flume/conf/test.conf --name agent
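As a point of comparison, the invocation shown in the Flume user guide adds a console logger flag so the agent prints its startup and sink activity to the terminal; a minimal sketch, assuming the same conf paths and agent name as in the command above:

# same command as above, plus the user-guide console logging flag (paths assumed from the question)
bin/flume-ng agent --conf /home/flume/conf --conf-file /home/flume/conf/test.conf --name agent -Dflume.root.logger=INFO,console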
Please find the contents of the conf file below:

agent.sources = src1
agent.channels = chan1
agent.sinks = sink1
agent.sources.src1.type = spooldir
agent.sources.src1.spoolDir = /home//FlumeTesting/flume_sink
agent.sources.src1.basenameHeader = true
agent.sources.src1.deletePolicy = immediate
agent.sources.src1.fileHeader = true
agent.channels.chan1.type = memory
agent.channels.chan1.capacity = 10000
agent.sinks.sink1.type = hdfs
agent.sinks.sink1.hdfs.path = hdfs://localhost:9000/flume_sink
agent.sinks.sink1.hdfs.fileType = DataStream
agent.sinks.sink1.hdfs.rollCount = 1000
agent.sinks.sink1.hdfs.rollSize = 5000
agent.sinks.sink1.hdfs.idleTimeout = 60
agent.sinks.sink1.rollInterval = 500
agent.sinks.sink1.hdfs.filePrefix = %{basename}
agent.sinks.sink1.hdfs.fileSuffix = .log
agent.sources.src1.channels = chan1
agent.sinks.sink1.channel = chan1
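A quick way to check whether the sink has written anything is to list the target directory with the HDFS CLI; a minimal sketch, assuming the hdfs://localhost:9000 NameNode address from the config above is reachable:

# list whatever the HDFS sink has rolled so far (path taken from hdfs.path above)
hdfs dfs -ls hdfs://localhost:9000/flume_sink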

Please correct me if there is any error in this configuration or if I have missed something.
Thanks in advance.
