I am trying to understand exactly-once semantics in Kafka using a transactional producer/consumer.
I came across the example below, but I am still having a hard time fully understanding it. Is this code correct?
producer.sendOffsetsToTransaction() - what does this call do? Should it be made for the same target topic?
And what about consumer.commitSync(); // won't the same messages be read again and duplicates be produced?
import java.math.BigInteger;
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.IsolationLevel;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOnceLowLevel {

    public void runConsumer() throws Exception {
        final KafkaConsumer<byte[], byte[]> consumer = createConsumer();
        final Producer<Long, String> producer = createProducer();
        producer.initTransactions();
        consumer.subscribe(Collections.singletonList(TOPIC));
        while (true) {
            final ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(100));
            try {
                final Map<TopicPartition, OffsetAndMetadata> currentOffsets = new HashMap<>();
                producer.beginTransaction();
                for (final ConsumerRecord<byte[], byte[]> record : records) {
                    System.out.printf("Received Message topic =%s, partition =%s, offset = %d, key = %s, value = %s%n",
                            record.topic(), record.partition(), record.offset(), record.key(), record.value());
                    final ProducerRecord<Long, String> producerRecord =
                            new ProducerRecord<>(TOPIC_1, new BigInteger(record.key()).longValue(), new String(record.value()));
                    // send() returns a Future; get() blocks until the broker acknowledges the record
                    final RecordMetadata metadata = producer.send(producerRecord).get();
                    currentOffsets.put(new TopicPartition(TOPIC_1, record.partition()), new OffsetAndMetadata(record.offset()));
                }
                producer.sendOffsetsToTransaction(currentOffsets, "my-transactional-consumer-group"); // a bit annoying here to reference the group id twice
                producer.commitTransaction();
                consumer.commitSync();
                currentOffsets.clear();
                // EXACTLY ONCE!
            }
            catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
                e.printStackTrace();
                // We can't recover from these exceptions, so our only option is to close the producer and exit.
                producer.close();
            }
            catch (final KafkaException e) {
                e.printStackTrace();
                // For all other exceptions, just abort the transaction and try again.
                producer.abortTransaction();
            }
            finally {
                producer.flush();
                producer.close();
            }
        }
    }

    private static KafkaConsumer<byte[], byte[]> createConsumer() {
        final Properties consumerConfig = new Properties();
        consumerConfig.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        consumerConfig.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        // byte[] keys/values, so both deserializers must be ByteArrayDeserializer
        consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        consumerConfig.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG,
                IsolationLevel.READ_COMMITTED.toString().toLowerCase(Locale.ROOT)); // this has to be read_committed
        return new KafkaConsumer<>(consumerConfig);
    }

    private static Producer<Long, String> createProducer() {
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-transactional-id"); // required before calling initTransactions()
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.RETRIES_CONFIG, 3); // this is now safe !!!!
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // this has to be all
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1); // this has to be 1
        return new KafkaProducer<>(props);
    }

    public static void main(final String... args) throws Exception {
        final ExactlyOnceLowLevel example = new ExactlyOnceLowLevel();
        example.runConsumer();
    }
}
1 Answer
When using the read/process/write pattern with Kafka transactions, you should not attempt to commit offsets via the consumer. As you hinted, that can cause issues.
In this use case, the offsets need to be added to the transaction, and you should only use
sendOffsetsToTransaction()
to do that. That method ensures the offsets are committed only if the whole transaction succeeds. See the Javadoc: Sends a list of specified offsets to the consumer group coordinator, and also marks those offsets as part of the current transaction. These offsets will be considered committed only if the transaction is committed successfully. The committed offset should be the next message your application will consume, i.e. lastProcessedMessageOffset + 1.
This method should be used when you need to batch consumed and produced messages together, typically in a consume-transform-produce pattern. Thus, the specified consumerGroupId should be the same as the config parameter group.id of the used consumer. Note that the consumer should have enable.auto.commit=false and should also not commit offsets manually (via sync or async commits).
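Tying this together, here is a minimal sketch of how the question's loop could be rewritten so that offsets are committed through the transaction instead of via consumer.commitSync(). This is a hedged illustration, not a drop-in replacement: the consumer/producer construction, the output topic name, and the "my-group" group id are assumed to match the question's createConsumer()/createProducer(). The two key points are that the offsets map is keyed by the source topic/partition and stores record.offset() + 1.

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

public class TransactionalLoopSketch {

    // Sketch only: assumes the consumer has enable.auto.commit=false and
    // isolation.level=read_committed, and the producer has a transactional.id
    // and has already called initTransactions().
    static void runLoop(KafkaConsumer<byte[], byte[]> consumer,
                        Producer<byte[], byte[]> producer,
                        String outputTopic) {
        while (true) {
            final ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(100));
            if (records.isEmpty()) {
                continue;
            }
            producer.beginTransaction();
            try {
                final Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                for (final ConsumerRecord<byte[], byte[]> record : records) {
                    producer.send(new ProducerRecord<>(outputTopic, record.key(), record.value()));
                    // Key on the *source* topic/partition and store offset + 1,
                    // i.e. the next message the application should consume.
                    offsets.put(new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1));
                }
                // Offsets are committed atomically with the produced records;
                // do NOT also call consumer.commitSync().
                producer.sendOffsetsToTransaction(offsets, "my-group");
                producer.commitTransaction();
            } catch (final RuntimeException e) {
                producer.abortTransaction();
                throw e;
            }
        }
    }
}
```

If the transaction aborts, neither the produced records nor the offsets become visible, so the consumer group will re-read the batch after a restart, which is exactly what makes the pattern safe.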