Flume: java.io.IOException: Not a data file

ncecgwcz · posted 2021-06-02 · in Hadoop

We had a disk-full problem overnight, and today I am getting this error in my Flume log:

22 Feb 2017 10:24:56,180 ERROR [pool-6-thread-1] (org.apache.flume.client.avro.ReliableSpoolingFileEventReader.openFile:504)  - Exception opening file: /.../flume_spool/data.../data_2017-02-21_17-15-00_8189
java.io.IOException: Not a data file.
        at org.apache.avro.file.DataFileStream.initialize(DataFileStream.java:102)
        at org.apache.avro.file.DataFileReader.<init>(DataFileReader.java:97)
        at org.apache.avro.file.DataFileWriter.appendTo(DataFileWriter.java:160)
        at org.apache.avro.file.DataFileWriter.appendTo(DataFileWriter.java:149)
        at org.apache.flume.serialization.DurablePositionTracker.<init>(DurablePositionTracker.java:141)
        at org.apache.flume.serialization.DurablePositionTracker.getInstance(DurablePositionTracker.java:76)
        at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.openFile(ReliableSpoolingFileEventReader.java:478)
        at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.getNextFile(ReliableSpoolingFileEventReader.java:459)
        at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:229)
        at org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:227)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Flume version: 1.5.2


kq4fsx7k 1#

The java.io.IOException: Not a data file exception is caused by the temporary directory in which Flume keeps metadata about the files it is processing.
That directory is controlled by the trackerDir directive in the spooldir source definition in flume.conf (by default it is .flumespool inside the spool directory).
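For reference, a spooldir source definition along these lines is where that directive lives; the agent, channel, and path names below are hypothetical, and trackerDir is shown with its default value (a relative path is resolved against spoolDir):

    # hypothetical agent/component names, for illustration only
    agent.sources = spool-src
    agent.sources.spool-src.type = spooldir
    agent.sources.spool-src.channels = mem-ch
    agent.sources.spool-src.spoolDir = /path/to/spool         # placeholder path
    agent.sources.spool-src.trackerDir = .flumespool          # default; relative to spoolDir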
What we ended up with were empty metadata files, which lack the header bytes that Avro (we are using an Avro sink) expects to see at the start of a data file. The actual data files were perfectly fine; only the metadata files were broken.
So the solution is to delete .flumespool, and the problem resolves itself (after freeing up a little disk space first, of course).
Go into your spool folder (/.../flume_spool/data...) and run:
    find . -type f -empty
I expect you will find:
    .flumespool/.flumespool-main.meta
Then:
    rm .flumespool/.flumespool-main.meta
(source)
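Putting that together, a minimal recovery sketch could look like this (the spool path is a placeholder for whatever your spoolDir points at):

    # free up some disk space first, then:
    cd /path/to/spool                       # placeholder for your actual spoolDir
    find . -type f -empty                   # should list .flumespool/.flumespool-main.meta
    rm .flumespool/.flumespool-main.meta    # remove the corrupt (empty) tracker metadata
    # the tracker metadata should be recreated when the spooldir source opens the next file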
