Apache Kafka error while appending records to topic

fsi0uk1n, published 2021-06-06 in Kafka

I am trying to consume a ten-million-row (600 MB) CSV file via the Kafka Connect API. Connect starts consuming and completes 3.7 million records; after that I get the following error:

[2018-11-01 07:28:49,889] ERROR Error while appending records to topic-test-0 in dir /tmp/kafka-logs (kafka.server.LogDirFailureChannel)
java.io.IOException: No space left on device
        at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
        at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
        at sun.nio.ch.IOUtil.write(IOUtil.java:65)
        at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211)
        at org.apache.kafka.common.record.MemoryRecords.writeFullyTo(MemoryRecords.java:95)
        at org.apache.kafka.common.record.FileRecords.append(FileRecords.java:151)
        at kafka.log.LogSegment.append(LogSegment.scala:138)
        at kafka.log.Log.$anonfun$append$2(Log.scala:868)
        at kafka.log.Log.maybeHandleIOException(Log.scala:1837)
        at kafka.log.Log.append(Log.scala:752)
        at kafka.log.Log.appendAsLeader(Log.scala:722)
        at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:634)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
        at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:257)
        at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:622)
        at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(ReplicaManager.scala:745)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
        at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:138)
        at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:236)
        at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:229)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:138)
        at scala.collection.TraversableLike.map(TraversableLike.scala:234)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
        at scala.collection.AbstractTraversable.map(Traversable.scala:104)
        at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:733)
        at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:472)
        at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:489)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:106)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
        at java.lang.Thread.run(Thread.java:748)
[2018-11-01 07:28:49,893] INFO [ReplicaManager broker=0] Stopping serving replicas in dir /tmp/kafka-logs (kafka.server.ReplicaManager)
[2018-11-01 07:28:49,897] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,topic-test-0,__consumer_offsets-25,__consumer_offsets

I have one topic, named topic-test.
Machine specs:
OS: CentOS 7
RAM: 16 GB
HDD: 80 GB
I have seen some blogs that mention the log.dirs property in server.properties, but it is not clear to me what value it should be set to. Do I also need to create the partitions myself? I don't think they are the same thing as the data file.
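Before touching the configuration, it helps to confirm where the broker is actually writing and how full that filesystem is. A quick check, assuming the default /tmp/kafka-logs location (substitute whatever your log.dirs points at):

```shell
# Check free space on the volume holding the Kafka data dir, and the dir's size.
# /tmp/kafka-logs is Kafka's default log.dirs value; adjust if yours differs.
LOG_DIR=${LOG_DIR:-/tmp/kafka-logs}
df -h /tmp                          # free space on the /tmp volume
if [ -d "$LOG_DIR" ]; then
  du -sh "$LOG_DIR"                 # how much the Kafka log segments occupy
else
  echo "$LOG_DIR not present on this host"
fi
```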


j1dl9f46 · answer 1#

Error while appending records to topic-test-0 in dir /tmp/kafka-logs (kafka.server.LogDirFailureChannel): java.io.IOException: No space left on device

This happens when you push a huge file or stream into a Kafka topic. Go to the default log directory, /tmp/kafka-logs, and check the available space:

[root@ENT-CL-015243 kafka-logs]# df -h
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/vg_rhel6u4x64-lv_root   61G   8.4G   49G  15% /
tmpfs                    7.7G     0  7.7G   0% /dev/shm
/dev/sda1                    485M   37M  423M   9% /boot
/dev/mapper/vg_rhel6u4x64-lv_home   2.0G   68M  1.9G   4% /home
/dev/mapper/vg_rhel6u4x64-lv_tmp    4.0G  315M  3.5G   9% /tmp
/dev/mapper/vg_rhel6u4x64-lv_var    7.9G  252M  7.3G   4% /var

As you can see, in my case only 3.5 GB was available on /tmp, and I ran into the same problem. I created /klogs on the root volume and changed log.dirs=/klogs/kafka-logs in server.properties.
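The steps above can be sketched as a short shell session. The paths (/klogs/kafka-logs, the location of server.properties) are the example values from this answer plus an assumed install layout, so adjust them for your broker; the snippet rehearses the edit on a throwaway copy of server.properties so it can run anywhere:

```shell
# Rehearse the fix on a scratch copy of server.properties; on a real broker
# you would edit $KAFKA_HOME/config/server.properties and then restart Kafka.
WORK=$(mktemp -d)
PROPS="$WORK/server.properties"
NEW_LOG_DIR="$WORK/klogs/kafka-logs"     # stand-in for /klogs/kafka-logs

printf 'broker.id=0\nlog.dirs=/tmp/kafka-logs\n' > "$PROPS"   # old default

mkdir -p "$NEW_LOG_DIR"                                       # create the roomier dir
cp "$PROPS" "$PROPS.bak"                                      # keep a backup
sed -i "s|^log.dirs=.*|log.dirs=$NEW_LOG_DIR|" "$PROPS"       # repoint the broker
grep '^log.dirs=' "$PROPS"
```

After editing the real file, restart the broker (for example `bin/kafka-server-stop.sh`, then `bin/kafka-server-start.sh config/server.properties`) so the new log.dirs takes effect. Note that existing segments in /tmp/kafka-logs are not moved automatically.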
