Configuring Flume to write large files to HDFS

fdbelqdn · posted 2021-06-04 in Flume
Follow (0) | Answers (0) | Views (257)

My configuration file:

agent1.sources = source1
agent1.channels = channel1
agent1.sinks = sink1

agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /var/SpoolDir

agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path =     hdfs://templatecentosbase.oneglobe.com:8020/user/Banking4
agent1.sinks.sink1.hdfs.filePrefix = Banking_Details
agent1.sinks.sink1.hdfs.fileSuffix = .avro
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.serializer = avro_event

# agent1.sinks.sink1.hdfs.callTimeout = 20000

agent1.sinks.sink1.hdfs.rollCount = 0
# property names are case-sensitive: rollSize, not rollsize
agent1.sinks.sink1.hdfs.rollSize = 100000000

# agent1.sinks.sink1.hdfs.txnEventMax = 40000

agent1.sinks.sink1.hdfs.rollInterval = 0

# agent1.sinks.sink1.serializer.codeC =

agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 100000000
agent1.channels.channel1.transactionCapacity = 100000000

agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1

Can someone help me solve this problem? The source file is close to 400 MB, but it gets written to HDFS in small pieces and blocks, roughly 1.5 MB to 2 MB each.
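For reference, a minimal sketch of purely size-based rolling for the Flume 1.x HDFS sink (assuming the same agent/sink names as above; note that Flume property names are case-sensitive, so `rollSize` must be capitalized exactly as shown):

```properties
# Roll only on size: disable count- and time-based rolling,
# then roll at ~100 MB.
agent1.sinks.sink1.hdfs.rollCount = 0
agent1.sinks.sink1.hdfs.rollInterval = 0
# A misspelled property (e.g. "rollsize") is silently ignored and
# Flume falls back to the 1024-byte default, producing tiny files.
agent1.sinks.sink1.hdfs.rollSize = 100000000
# Write raw event data rather than a SequenceFile container.
agent1.sinks.sink1.hdfs.fileType = DataStream
```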

No answers yet.
