NotEnoughReplicasException: The size of the current ISR Set(2) is insufficient to satisfy the min.isr requirement of 3

Asked by r7s23pms on 2021-06-04 in Kafka

I have the following setup: 3 brokers, all up and running, with min.insync.replicas=3.
I created the topic with the following command:
bin\windows\kafka-topics.bat --zookeeper 127.0.0.1:2181 --topic topic-ack-all --create --partitions 4 --replication-factor 3
I started the producer with acks=all and it can send messages without problems. However, the trouble starts when I launch the consumer:
bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9094,localhost:9092 --topic topic-ack-all --from-beginning
The errors are:
NotEnoughReplicasException: The size of the current ISR Set(2) is insufficient to satisfy the min.isr requirement of 3
NotEnoughReplicasException: The size of the current ISR Set(3) is insufficient to satisfy the min.isr requirement of 3 for the partition
I see two variants of the error here. I went through the documentation and read about min.isr, but these messages are still not clear to me.
What does "current ISR Set" mean? Is it different for every topic?
I assume min.isr is the same thing as min.insync.replicas. Shouldn't it be at least the same value as the replication factor?
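(For reference, the effective value can be checked with kafka-configs; this is just a sketch using the same ZooKeeper address as above. If no topic-level override shows up, the broker-side min.insync.replicas from server.properties applies.)

bin\windows\kafka-configs.bat --zookeeper 127.0.0.1:2181 --entity-type topics --entity-name topic-ack-all --describe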
Update #1

Topic: topic-ack-all    PartitionCount: 4       ReplicationFactor: 3    Configs:            
        Topic: topic-ack-all    Partition: 0    Leader: 1       Replicas: 1,2,3 Isr: 1,2,3  
        Topic: topic-ack-all    Partition: 1    Leader: 1       Replicas: 2,3,1 Isr: 1,2,3  
        Topic: topic-ack-all    Partition: 2    Leader: 1       Replicas: 3,1,2 Isr: 1,2,3  
        Topic: topic-ack-all    Partition: 3    Leader: 1       Replicas: 1,3,2 Isr: 1,2,3

Update #2

Topic: __consumer_offsets       PartitionCount: 50      ReplicationFactor: 1    Configs: compression.type=producer,cleanup.policy=compact,segment.bytes=104857600
    Topic: __consumer_offsets       Partition: 0    Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 1    Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 2    Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 3    Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 4    Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 5    Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 6    Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 7    Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 8    Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 9    Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 10   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 11   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 12   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 13   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 14   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 15   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 16   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 17   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 18   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 19   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 20   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 21   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 22   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 23   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 24   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 25   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 26   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 27   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 28   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 29   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 30   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 31   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 32   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 33   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 34   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 35   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 36   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 37   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 38   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 39   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 40   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 41   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 42   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 43   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 44   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 45   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 46   Leader: 3       Replicas: 3     Isr: 3
    Topic: __consumer_offsets       Partition: 47   Leader: 1       Replicas: 1     Isr: 1
    Topic: __consumer_offsets       Partition: 48   Leader: 2       Replicas: 2     Isr: 2
    Topic: __consumer_offsets       Partition: 49   Leader: 3       Replicas: 3     Isr: 3
Answer (from toiithl6):

From the output you pasted, the __consumer_offsets topic was created with a single replica.
It also looks like your brokers are configured with min.insync.replicas=3.
With that combination you get a NotEnoughReplicasException every time a consumer tries to commit offsets.
The __consumer_offsets topic is created automatically when the first consumer connects to the cluster. A common way to end up in this situation is to run a consumer while only one broker is up; in that case __consumer_offsets gets created with a single replica.
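The replication factor used for that automatic creation comes from the broker setting offsets.topic.replication.factor; the server.properties shipped with the Kafka quickstart sets it to 1, which matches your output above. A sketch of the broker-side entries you would want on a 3-broker cluster (the exact values are an assumption, adjust to your needs):

# config\server.properties on each broker
offsets.topic.replication.factor=3
# replication for other auto-created topics
default.replication.factor=3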
Assuming this is a dev environment, the easiest way to get back to a valid state is to delete __consumer_offsets and rerun your consumer once all 3 brokers are up.
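A sketch of that cleanup, assuming delete.topic.enable=true and the same ZooKeeper address as in the question (some Kafka versions refuse to delete internal topics from the CLI, in which case the topic has to be removed from ZooKeeper and the log directories by hand):

bin\windows\kafka-topics.bat --zookeeper 127.0.0.1:2181 --delete --topic __consumer_offsets

Once all 3 brokers are running, the first consumer that connects will recreate __consumer_offsets with the broker-configured replication factor.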
