When copying data from a local path to an HDFS sink, I am getting some garbage data in the files at the HDFS location.
My Flume configuration file:
# spool.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = s1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.s1.type = spooldir
a1.sources.s1.spoolDir = /home/cloudera/spool_source
a1.sources.s1.channels = c1
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = flumefolder/events
a1.sinks.k1.hdfs.filetype = Datastream
# Format to be written
a1.sinks.k1.hdfs.writeFormat = Text
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
I am copying files from the local path "/home/cloudera/spool_source" to the HDFS path "flumefolder/events".
Flume command:
flume-ng agent --conf-file spool.conf --name a1 -Dflume.root.logger=INFO,console
The file "salary.txt" in the local path "/home/cloudera/spool_source" is:
GR1,Emp1,Jan,31,2500
GR3,Emp3,Jan,18,2630
GR4,Emp4,Jan,31,3000
GR4,Emp4,Feb,28,3000
GR1,Emp1,Feb,15,2500
GR2,Emp2,Feb,28,2800
GR2,Emp2,Mar,31,2800
GR3,Emp3,Mar,31,3000
GR1,Emp1,Mar,15,2500
GR2,Emp2,Apr,31,2630
GR3,Emp3,Apr,17,3000
GR4,Emp4,Apr,31,3200
GR7,Emp7,Apr,21,2500
GR11,Emp11,Apr,17,2000
At the destination path "flumefolder/events", the data arrives with garbage values, as shown below:
1 W��ȩGR1,Emp1,Jan,31,2500W��ȲGR3,Emp3,Jan,18,2630W��ȷGR4,Emp4,Jan,31,3000W��ȻGR4,Emp4,Feb,28,3000W��ȽGR1,Emp1,Feb,15,2500W����GR2,Emp2,Feb,28,2800W����GR2,Emp2,Mar,31,2800W����GR3,Emp3,Mar,31,3000W����GR1,Emp1,Mar,15,2500W����GR2,Emp2,
What is wrong with my configuration file spool.conf? I cannot figure out the cause.
1 Answer
Flume configuration is case-sensitive, so change the filetype line to fileType, and fix the Datastream value as well, since it is also case-sensitive (it should be DataStream).
With your current spelling the property is ignored and the sink falls back to its default file type, SequenceFile, which is why the odd characters appear in the output.
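For reference, a minimal sketch of the corrected sink section, assuming the rest of spool.conf stays as above (fileType and DataStream are the case-sensitive spellings the Flume HDFS sink expects):
# Describe the sink (corrected: fileType / DataStream are case-sensitive)
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = flumefolder/events
a1.sinks.k1.hdfs.fileType = DataStream
# Format to be written
a1.sinks.k1.hdfs.writeFormat = Text
After restarting the agent, files rolled into flumefolder/events should contain the plain CSV rows; a quick check is hdfs dfs -cat flumefolder/events/FlumeData.* (FlumeData is the sink's default file prefix, so the actual file names will carry a timestamp suffix).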