import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "your_consumer_group");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("foo", "bar"));

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    }
    // After the data is consumed, sleep the thread until the next 30-minute run.
    // Thread.sleep throws the checked InterruptedException, so it has to be handled.
    try {
        Thread.sleep(30 * 60 * 1000);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}
1 Answer
If you do not want to process the data in real time, you may want to reconsider whether Kafka is the right fit for you. That said, you can try the following approach:

If you want the batch to run at the 0th or 30th minute of every hour, you can use this sleep instead of a fixed 30-minute one:
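The answer's original snippet is not reproduced in this extract. A minimal sketch of such a delay, assuming the idea is simply to round the current time up to the next multiple of 30 minutes, could look like this (the helper name is hypothetical, not from the original answer):

// Illustrative helper: sleep until the next multiple of 30 minutes since the Unix epoch,
// which falls on :00 or :30 of every hour (in UTC, and in any zone with a whole- or
// half-hour offset).
static void sleepUntilNextHalfHour() throws InterruptedException {
    long halfHourMs = 30L * 60 * 1000;
    long now = System.currentTimeMillis();
    long nextBoundary = ((now / halfHourMs) + 1) * halfHourMs;
    Thread.sleep(nextBoundary - now);
}

In the loop above, a call to this helper would replace the fixed Thread.sleep(30 * 60 * 1000).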
It will make your consumer wake up at 00:00, 00:30, 01:00, 01:30, and so on. For more details, see this link: https://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/clients/consumer/kafkaconsumer.html

Again, you probably do not want to use Kafka this way. It would be better to dump the data into some storage (for example, Parquet files partitioned by datetime) and run a batch job over it every 30 minutes.
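As a rough sketch of that alternative, the consumer could keep polling continuously and append each record into a directory keyed by its 30-minute window, leaving the heavy processing to a separate batch job. The paths, helper name, and plain-text output below are illustrative stand-ins; a real pipeline would more likely write Parquet via a dedicated writer or a Kafka Connect sink:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

// Illustrative only: append a record value under a directory partitioned by date and
// 30-minute window, e.g. out/dt=2024-01-01/window=1030/part.txt.
static void appendToWindowedFile(String value) throws IOException {
    ZonedDateTime now = ZonedDateTime.now(ZoneOffset.UTC);
    int windowStartMinute = (now.getMinute() / 30) * 30;   // 0 or 30
    String dt = now.format(DateTimeFormatter.ISO_LOCAL_DATE);
    String window = String.format("%02d%02d", now.getHour(), windowStartMinute);
    Path dir = Paths.get("out", "dt=" + dt, "window=" + window);
    Files.createDirectories(dir);
    Files.write(dir.resolve("part.txt"),
            (value + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
}

With this pattern the consumer loop would call appendToWindowedFile(record.value()) for every record instead of sleeping, and a separate job would pick up each closed window directory every 30 minutes.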