Getting OffsetOutOfRangeException even though "auto.offset.reset" is set to "latest"

q9yhzks0 · asked 2021-06-04 · in Kafka

I am using spark-sql-2.4.1 and Kafka 0.10.
When I try to consume data with my consumer, I get the error below even after setting "auto.offset.reset" to "latest":

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {COMPANY_INBOUND-16=168}
    at org.apache.kafka.clients.consumer.internals.Fetcher.throwIfOffsetOutOfRange(Fetcher.java:348)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:396)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:999)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.fetchData(KafkaDataConsumer.scala:470)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.org$apache$spark$sql$kafka010$InternalKafkaConsumer$$fetchRecord(KafkaDataConsumer.scala:361)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer$$anonfun$get$1.apply(KafkaDataConsumer.scala:251)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer$$anonfun$get$1.apply(KafkaDataConsumer.scala:234)
    at org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:77)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.runUninterruptiblyIfPossible(KafkaDataConsumer.scala:209)
    at org.apache.spark.sql.kafka010.InternalKafkaConsumer.get(KafkaDataConsumer.scala:234)

Where is the problem? Why doesn't the setting take effect, and how should it be fixed?
Part 2:

.readStream()
    .format("kafka")
    .option("startingOffsets", "latest")
    .option("enable.auto.commit", false)
    .option("maxOffsetsPerTrigger", 1000)
    .option("auto.offset.reset", "latest")
    .option("failOnDataLoss", false)
    .load();

whhtz7ly 1#

Spark Structured Streaming ignores `auto.offset.reset`; use the `startingOffsets` option instead.
From the documentation: "auto.offset.reset: Set the source option startingOffsets to specify where to start instead. Structured Streaming manages which offsets are consumed internally, rather than relying on the kafka Consumer to do it. This will ensure no data is missed when new topics/partitions are dynamically subscribed. Note that startingOffsets only applies when a new streaming query is started, and that resuming will always pick up from where the query left off."
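Following that advice, here is a minimal sketch of the corrected reader. The broker address, topic, and the `spark` session variable are placeholders, not taken from the question; the point is that the raw Kafka consumer properties `auto.offset.reset` and `enable.auto.commit` are dropped, and `startingOffsets` alone controls where a fresh query begins.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// "spark", the broker address, and the topic are placeholders for illustration.
SparkSession spark = SparkSession.builder().appName("kafka-reader").getOrCreate();

Dataset<Row> df = spark
    .readStream()
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  // assumed broker address
    .option("subscribe", "COMPANY_INBOUND")
    .option("startingOffsets", "latest")      // replaces auto.offset.reset
    .option("maxOffsetsPerTrigger", 1000)     // cap records per micro-batch
    .option("failOnDataLoss", false)          // tolerate offsets lost to retention
    .load();
```

Note that `startingOffsets` also accepts a JSON string for per-partition control, e.g. `{"COMPANY_INBOUND":{"16":168}}` (with `-2` meaning earliest and `-1` latest), and that it only applies to a brand-new query; a restarted query always resumes from its checkpoint.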
Source
