kafka-broker: Group coordinator is not available

svmlkihl, posted 2021-06-08 in Kafka

I have the following setup:

zookeeper: 3.4.12
kafka: kafka_2.11-1.1.0
server1: zookeeper + kafka
server2: zookeeper + kafka
server3: zookeeper + kafka

The topic was created with the kafka-topics shell script, with replication factor 3 and 3 partitions:

./kafka-topics.sh --create --zookeeper localhost:2181 --topic test-flow --partitions 3 --replication-factor 3

It is consumed with the group localconsumers. Everything works fine while the leader is up:

./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-flow
Topic:test-flow PartitionCount:3    ReplicationFactor:3 Configs:
    Topic: test-flow    Partition: 0    Leader: 3   Replicas: 3,2,1 Isr: 3,2,1
    Topic: test-flow    Partition: 1    Leader: 1   Replicas: 1,3,2 Isr: 1,3,2
    Topic: test-flow    Partition: 2    Leader: 2   Replicas: 2,1,3 Isr: 2,1,3

Consumer log:

Received FindCoordinator response ClientResponse(receivedTimeMs=1529508772673, latencyMs=217, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=1, clientId=consumer-1, correlationId=0), responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=NONE, node=myserver3:9092 (id: 3 rack: null)))

But if the leader goes down (systemctl stop kafka), I get an error in the consumer.
Node 3 being unavailable is expected, that part is fine:

./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-flow
Topic:test-flow PartitionCount:3    ReplicationFactor:3 Configs:
    Topic: test-flow    Partition: 0    Leader: 2   Replicas: 3,2,1 Isr: 2,1
    Topic: test-flow    Partition: 1    Leader: 1   Replicas: 1,3,2 Isr: 1,2
    Topic: test-flow    Partition: 2    Leader: 2   Replicas: 2,1,3 Isr: 2,1

Consumer log:

Received FindCoordinator response 
ClientResponse(receivedTimeMs=1529507314193, latencyMs=36, 
disconnected=false, 
requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=1, 
clientId=consumer-1, correlationId=149), 
responseBody=FindCoordinatorResponse(throttleTimeMs=0, 
errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 
rack: null)))

- Group coordinator lookup failed: The coordinator is not available.
- Coordinator discovery failed, refreshing metadata

The consumer cannot connect until the downed broker comes back up, or until it reconnects with a different consumer group.
I don't understand why this happens: the consumer should rebalance to another broker, but it does not.

Answer 1, by u0sqgete:

Try adding these properties to server.properties and clearing the ZooKeeper cache. That should help:

offsets.topic.replication.factor=3
default.replication.factor=3

The root cause of the problem is that the consumer offsets cannot be distributed across the nodes: the auto-generated topic __consumer_offsets was created on a single node.
You can check it via:

$ ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic __consumer_offsets
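The connection between a group and a broker can be sketched like this: the coordinator for a consumer group is the broker that leads one particular partition of __consumer_offsets, chosen by hashing the group id. Below is a rough Python approximation (the Java String.hashCode emulation, the default of 50 offsets partitions, and the use of plain abs() are assumptions; Kafka's own code masks the sign bit instead):

```python
# Sketch of how Kafka picks the group coordinator:
# partition = abs(hash(group.id)) % number of __consumer_offsets partitions,
# where hash() behaves like Java's String.hashCode. The coordinator is the
# broker leading that partition. 50 is the default offsets.topic.num.partitions.

def java_string_hashcode(s: str) -> int:
    """Emulate Java's String.hashCode (signed 32-bit arithmetic)."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
    return abs(java_string_hashcode(group_id)) % num_partitions

# The group from the question above:
print(coordinator_partition("localconsumers"))
```

If __consumer_offsets was auto-created with replication factor 1 and the replica for that partition sits on broker 3, stopping broker 3 leaves the group with no coordinator at all, which matches the COORDINATOR_NOT_AVAILABLE response in the log above.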

Note this part of the documentation: https://kafka.apache.org/documentation/#prodconfig
By default, the consumer offsets topic is created with a replication factor of 1.
It is important to configure the replication factors before Kafka / the cluster is started for the first time. Otherwise, reconfiguring the instances later can cause problems, as in your case.
