Below is my configuration, which works fine for uncompressed data:
agent.sinks.test.type = hdfs
agent.sinks.test.hdfs.useLocalTimeStamp = true
agent.sinks.test.hdfs.path = s3n://AccessKeys@test/%{topic}/utc=%s
agent.sinks.test.hdfs.roundUnit = minute
agent.sinks.test.hdfs.round = true
agent.sinks.test.hdfs.roundValue = 10
agent.sinks.test.hdfs.fileSuffix = .avro
agent.sinks.test.serializer = com.test.flume.sink.serializer.GenericRecordAvroEventSerializer$Builder
agent.sinks.test.hdfs.fileType = DataStream
agent.sinks.test.hdfs.maxOpenFiles = 100
agent.sinks.test.hdfs.appendTimeout = 5000
agent.sinks.test.hdfs.callTimeout = 4000
agent.sinks.test.hdfs.rollInterval = 60
agent.sinks.test.hdfs.rollSize = 0
agent.sinks.test.hdfs.rollCount = 1000
agent.sinks.test.hdfs.batchSize = 1000
agent.sinks.test.hdfs.threadsPoolSize = 100
I am trying to add gzip compression to these files, configured as follows:
agent.sinks.test.type = hdfs
agent.sinks.test.hdfs.useLocalTimeStamp = true
agent.sinks.test.hdfs.path = s3n://AccessKeys@test/%{topic}/utc=%s
agent.sinks.test.hdfs.roundUnit = minute
agent.sinks.test.hdfs.round = true
agent.sinks.test.hdfs.roundValue = 10
agent.sinks.test.hdfs.fileSuffix = .avro
agent.sinks.test.serializer = com.test.flume.sink.serializer.GenericRecordAvroEventSerializer$Builder
agent.sinks.test.hdfs.fileType = CompressedStream
agent.sinks.test.hdfs.codeC = gzip
agent.sinks.test.hdfs.maxOpenFiles = 100
agent.sinks.test.hdfs.appendTimeout = 10000
agent.sinks.test.hdfs.callTimeout = 4000
agent.sinks.test.hdfs.rollInterval = 60
agent.sinks.test.hdfs.rollSize = 0
agent.sinks.test.hdfs.rollCount = 1000
agent.sinks.test.hdfs.batchSize = 1000
agent.sinks.test.hdfs.threadsPoolSize = 100
With this configuration the data still gets written to S3, but when I try to read it back from Hive, I get the following error:

Exception in thread "main" java.io.IOException: Not an avro data file
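For reference, one way to narrow this down is to inspect the magic bytes of the object that actually landed in S3: a valid Avro container file starts with the bytes O, b, j, 0x01, while a gzip stream starts with 0x1f 0x8b. A minimal sketch of such a check (it assumes the S3 object has already been copied to a local file; the path argument is a placeholder):

import java.io.FileInputStream;
import java.io.IOException;

public class MagicCheck {
    public static void main(String[] args) throws IOException {
        // args[0]: local copy of the S3 object (download it first)
        try (FileInputStream in = new FileInputStream(args[0])) {
            byte[] magic = new byte[4];
            int n = in.read(magic);
            if (n == 4 && magic[0] == 'O' && magic[1] == 'b'
                    && magic[2] == 'j' && magic[3] == 1) {
                // "Obj" + 0x01 is the Avro container-file header
                System.out.println("Avro container file (what Hive expects)");
            } else if (n >= 2 && (magic[0] & 0xFF) == 0x1F
                    && (magic[1] & 0xFF) == 0x8B) {
                // 0x1f 0x8b is the gzip header: the codec wrapped the whole
                // stream, so the result is no longer an Avro data file
                System.out.println("gzip stream, not an Avro data file");
            } else {
                System.out.println("unrecognized header");
            }
        }
    }
}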
Can anyone tell me why this configuration doesn't work?
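One idea I have not been able to verify: if GenericRecordAvroEventSerializer extends Flume's AbstractAvroEventSerializer (as the built-in avro_event serializer does), Avro's own block-level compression could be enabled through the serializer instead of wrapping the stream, which should keep the output a valid Avro container. A sketch of that variant; the compressionCodec property is only honored if the custom serializer really does extend that base class:

agent.sinks.test.hdfs.fileType = DataStream
# assumption: GenericRecordAvroEventSerializer extends AbstractAvroEventSerializer
agent.sinks.test.serializer.compressionCodec = deflate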