Kafka dead-letter queue and retries

kupeojn6, asked on 2021-06-04, in Kafka

I have the following configuration:

@Configuration
@EnableKafka
public class ConsumerConfig {

    final DlqErrorHandler dlqErrorHandler;

    public ConsumerConfig(DlqErrorHandler dlqErrorHandler) {
        this.dlqErrorHandler = dlqErrorHandler;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> config = new HashMap<>();

        config.put(org.apache.kafka.clients.consumer.ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        config.put(org.apache.kafka.clients.consumer.ConsumerConfig.GROUP_ID_CONFIG, "group_id_two");
        config.put(org.apache.kafka.clients.consumer.ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(org.apache.kafka.clients.consumer.ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        return new DefaultKafkaConsumerFactory<>(config);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory concurrentKafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setBatchListener(false);
        factory.getContainerProperties().setAckOnError(false);
        factory.setConcurrency(2);
        factory.setErrorHandler(dlqErrorHandler);
        return factory;
    }
}

an error-handler implementation:

@Component
public class DlqErrorHandler implements ContainerAwareErrorHandler {
    private final KafkaTemplate kafkaTemplate;

    public DlqErrorHandler(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @Override
    public void handle(Exception e, List<ConsumerRecord<?, ?>> list, Consumer<?, ?> consumer, MessageListenerContainer messageListenerContainer) {
        ConsumerRecord<?, ?> record = list.get(0);

        try {
            kafkaTemplate.send("dlqTopic", record.key(), record.value());
            consumer.seek(new TopicPartition(record.topic(), record.partition()), record.offset() + 1);
        } catch (Exception exception) {
            consumer.seek(new TopicPartition(record.topic(), record.partition()), record.offset());
            throw new KafkaException("Seek to current after exception", exception);
        }
    }
}

and two listeners:

@Component
public class KafkaConsumer {
    @KafkaListener(topics = "batchProcessingWithRetryPolicy", containerFactory = "concurrentKafkaListenerContainerFactory")
    public void consume(String message) {
        System.out.println(message + " NORMAL");
        if (message.equals("TEST ERROR")) {
            throw new RuntimeException("EEEEEEEEEEEERRRRRRRRRRRRRRRRRRRRRRROOOOOOOOOOOOOOOOOOORRRRRR");
        }
    }

    @KafkaListener(topics = "dlqTopic", containerFactory = "concurrentKafkaListenerContainerFactory")
    public void consumeTwo(String message) {
        System.out.println(message + " DQL");
        if (message.length() > 0) {
            throw new RuntimeException("EEEEEEEEEEEERRRRRRRRRRRRRRRRRRRRRRROOOOOOOOOOOOOOOOOOORRRRRR ");
        }
    }
}

My questions:

1)

factory.getContainerProperties().setAckOnError(false);

The setAckOnError method is deprecated. What should this line be replaced with, so that after a processing error the first listener does not keep retrying the message but instead sends it to the DLQ?

2) How do I limit the number of retries and the interval between them for messages sent to the DLQ (DlqErrorHandler)? That is, after the first error the message lands in the DLQ; I would then like to retry it 3 more times at 30-second intervals and, if that still fails, move on.

7gcisfzg's answer:

ackOnError is replaced by ErrorHandler.isAckAfterHandle().
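Concretely (a minimal sketch, assuming a spring-kafka version where the error-handler interface exposes this default method), the deprecated container-level flag is replaced by overriding isAckAfterHandle() in the custom handler:

```java
// Added inside the existing DlqErrorHandler from the question:
@Override
public boolean isAckAfterHandle() {
    // Replaces factory.getContainerProperties().setAckOnError(false):
    // tells the container NOT to commit the offset of a record whose
    // processing threw an exception; the handler decides what happens next.
    return false;
}
```

The factory line can then simply be deleted, since the decision now lives in the handler itself.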
Your error-handler implementation is incomplete: you also need to seek the other partitions that appear in the remaining-records list, not just the partition of the first record.
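A sketch of what a more complete handle() might look like (hypothetical; it groups the remaining records by partition and rewinds each partition to its first unprocessed offset):

```java
@Override
public void handle(Exception e, List<ConsumerRecord<?, ?>> records,
        Consumer<?, ?> consumer, MessageListenerContainer container) {
    // The remaining records may span several partitions, not only the
    // partition of the failed record. Collect the first (lowest) offset
    // seen for each partition...
    Map<TopicPartition, Long> firstOffsets = new LinkedHashMap<>();
    for (ConsumerRecord<?, ?> record : records) {
        firstOffsets.putIfAbsent(
                new TopicPartition(record.topic(), record.partition()),
                record.offset());
    }
    // ...and seek every one of them back, so no partition silently
    // skips its unprocessed records.
    firstOffsets.forEach(consumer::seek);
}
```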
Why not simply use the SeekToCurrentErrorHandler and DeadLetterPublishingRecoverer provided by the framework? They support your use case.
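With those two framework classes the custom DlqErrorHandler becomes unnecessary. A sketch (assuming spring-kafka 2.3+; the dlqTopic destination and the 30-second / 3-retry policy come from the question, and the destination resolver assumes dlqTopic has at least as many partitions as the source topic):

```java
@Bean
public SeekToCurrentErrorHandler errorHandler(KafkaTemplate<String, String> template) {
    // Publish failed records to "dlqTopic" instead of the default
    // "<originalTopic>.DLT" destination.
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(
            template,
            (record, ex) -> new TopicPartition("dlqTopic", record.partition()));

    // FixedBackOff(interval, maxAttempts): after the initial failure,
    // retry 3 more times, 30 seconds apart; only then hand the record
    // to the recoverer and move on to the next offset.
    return new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(30_000L, 3L));
}
```

Wire it in with factory.setErrorHandler(...) in place of the injected DlqErrorHandler. Note that the dlqTopic listener would need its own container factory with a different (or no) recoverer, otherwise its failures would be republished into the same dead-letter topic in a loop.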
