Unable to send messages from a consumer to a dead letter topic in Kafka

ehxuflar · posted 2021-06-04 in Kafka
Follow (0) | Answers (3) | Views (598)

I am trying to route a message to a dead letter topic in Kafka whenever processing of the corresponding message fails. I have set up a SeekToCurrentErrorHandler and a DeadLetterPublishingRecoverer for this functionality.
When this runs, the consumer throws the following exception:

2020-08-07 12:09:38.841 ERROR 1 --- [ntainer#2-0-C-1] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='a6558a22-470d-4708-b297-814996a42045' and payload='{123, 34, 101, 118, 101, 110, 116, 78, 97, 109, 101, 34, 58, 34, 116, 101, 115, 116, 95, 101, 120, 1...' to topic test_execution.DLT and partition 2:

org.apache.kafka.common.errors.TimeoutException: Topic test_execution.DLT not present in metadata after 60000 ms.

2020-08-07 12:09:38.846 ERROR 1 --- [ntainer#2-0-C-1] o.s.k.l.DeadLetterPublishingRecoverer    : Dead-letter publication failed for: ProducerRecord(topic=test_execution.DLT, partition=2, headers=RecordHeaders(headers = [RecordHeader(key = kafka_dlt-original-topic, value = [116, 101, 115, 116, 95, 101, 120, 101, 99, 117, 116, 105, 111, 110]), RecordHeader(key = kafka_dlt-original-partition, value = [0, 0, 0, 2]), RecordHeader(key = kafka_dlt-original-offset, value = [0, 0, 0, 0, 0, 23, 15, -72]), RecordHeader(key = kafka_dlt-original-timestamp, value = [0, 0, 1, 115, -57, 103, -70, -126]), RecordHeader(key = kafka_dlt-original-timestamp-type, value = [67, 114, 101, 97, 116, 101, 84, 105, 109, 101]), RecordHeader(key = kafka_dlt-exception-fqcn, value = [111, 114, 103, 46, 115, 112, 114, 105, 110, 103, 102, 114, 97, 109, 101, 119, 111, 114, 107, 46, 107, 97, 102, 107, 97, 46, 108, 105, 115, 116, 101, 110, 101, 114, 46, 76, 105, 115, 116, 101, 110, 101, 114, 69, 120, 101, 99, 117, 116, 105, 111, 110, 70, 97, 105, 108, 101, 100, 69, 120, 99, 101, 112, 116, 105, 111, 110]), RecordHeader(key = kafka_dlt-exception-message, value = [    
org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic test_execution.DLT not present in metadata after 60000 ms.
        at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:570) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:385) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.DeadLetterPublishingRecoverer.publish(DeadLetterPublishingRecoverer.java:278) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.DeadLetterPublishingRecoverer.accept(DeadLetterPublishingRecoverer.java:214) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.DeadLetterPublishingRecoverer.accept(DeadLetterPublishingRecoverer.java:54) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.FailedRecordProcessor.getSkipPredicate(FailedRecordProcessor.java:167) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.SeekToCurrentErrorHandler.handle(SeekToCurrentErrorHandler.java:104) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeErrorHandler(KafkaMessageListenerContainer.java:1887) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1792) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1719) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1617) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1348) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1064) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:972) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
        at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
Caused by: org.apache.kafka.common.errors.TimeoutException: Topic test_execution.DLT not present in metadata after 60000 ms.

I have already created the test_execution.DLT topic in the Kafka cluster, and I can clearly produce messages to it from the console producer.
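
As a sanity check, producing to the topic from the shell works, e.g. (the broker address is a placeholder):

kafka-console-producer.sh --broker-list broker1:9092 --topic test_execution.DLT
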
The consumer runs in a Docker container, and the Kafka cluster is a 3-VM setup. Below is the configuration used by the Kafka consumer:

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
        props.put("spring.kafka.consumer.properties.spring.deserializer.key.delegate.class", StringDeserializer.class);
        props.put("spring.kafka.consumer.properties.spring.deserializer.value.delegate.class", JsonDeserializer.class);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 60000);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
        return props;
    }
    @Bean
    public ConsumerFactory consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(),  new ErrorHandlingDeserializer<>(new StringDeserializer()),
                new ErrorHandlingDeserializer<>(new JsonDeserializer<>(AutomationEvent.class,false)));
    }

    @Bean
    public SeekToCurrentErrorHandler errorHandler(KafkaOperations kafkaOperations) {
        return new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(kafkaOperations), new FixedBackOff(10000, 3));
    }
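
For completeness, the error handler is plugged into the listener container factory along these lines (a simplified sketch in the same configuration class, not the exact code):

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, AutomationEvent> kafkaListenerContainerFactory(
            SeekToCurrentErrorHandler errorHandler) {
        ConcurrentKafkaListenerContainerFactory<String, AutomationEvent> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // Without this call the error handler (and its DLT recoverer) is never
        // applied to the listener container.
        factory.setErrorHandler(errorHandler);
        return factory;
    }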

Am I missing something here? Do I need to change any server configuration to make this work?


qyuhtwio #1

The consumer does not have to send anything to the DLT itself; that is handled by the framework. The topic just has to exist beforehand.
Follow the discussion here -> DeadLetterPublishingRecoverer - Dead letter publication failed with an InvalidTopicException at TopicPartition for a topic name ending in _err


qfe3c7zg #2

test_execution.DLT not present

The framework does not create the dead letter topic for you; it must already exist.
You can have it created automatically by adding a NewTopic @Bean.
See this answer for an example.
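
For example, a minimal sketch (partition and replica counts here are assumptions; see the edit below for why the partition count matters):

    @Bean
    public NewTopic testExecutionDlt() {
        // Requires a KafkaAdmin bean in the context (auto-configured by Spring Boot).
        // TopicBuilder is org.springframework.kafka.config.TopicBuilder.
        return TopicBuilder.name("test_execution.DLT")
                .partitions(3) // assumption: at least the partition count of test_execution
                .replicas(1)   // assumption
                .build();
    }
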
EDIT
By default, the record is sent to the same partition, so the DLT must have at least as many partitions as the original topic, unless you supply a destination resolver.
See the documentation:
By default, the dead-letter record is sent to a topic named <originalTopic>.DLT (the original topic name suffixed with .DLT) and to the same partition as the original record. Therefore, when you use the default resolver, the dead-letter topic must have at least as many partitions as the original topic. If the returned TopicPartition has a negative partition, the partition is not set in the ProducerRecord, so the partition is selected by Kafka.
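
For illustration, a sketch of passing a custom destination resolver (kafkaOperations being the same KafkaOperations bean handed to the error handler):

    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaOperations,
            (record, ex) -> new TopicPartition(record.topic() + ".DLT", -1)); // negative partition: Kafka chooses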


aor9mmx1 #3

If you send messages to a topic from the consumer application, make sure you also specify the producer configuration. The relevant property is spring.kafka.producer.bootstrap-servers. This property is required; otherwise the producer component will try to connect to localhost by default, which leads to the topic not being found.
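
For example, if the producer is configured with beans in the same style as the question's consumer, a minimal sketch (it reuses the question's bootstrapServers field; the serializer choices mirror the ones already present in consumerConfigs()):

    @Bean
    public ProducerFactory<String, byte[]> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        // Without this entry the producer falls back to localhost:9092 and the
        // DLT topic is never found in metadata.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, byte[]> kafkaTemplate() {
        // This template is the KafkaOperations that the errorHandler bean receives.
        return new KafkaTemplate<>(producerFactory());
    }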
