Kafka topics are deleted after a restart

Asked by 6bc51xsx on 2021-06-07, in Kafka

I have a Kubernetes cluster with one ZooKeeper pod and three Kafka broker pods.
The Deployment descriptor for ZooKeeper is:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper
    spec:
      containers:
      - env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
        - name: ZOOKEEPER_CLIENT_PORT
          value: "2181"
        - name: ZOOKEEPER_TICK_TIME
          value: "2000"
        name: zookeeper
        image: confluentinc/cp-zookeeper:5.0.1
        ports:
        - containerPort: 2181
        volumeMounts:
        - mountPath: /var/lib/zookeeper/
          name: zookeeper-data
      nodeSelector:
        noderole: kafka1
      restartPolicy: Always
      volumes:
      - name: zookeeper-data
        persistentVolumeClaim:
          claimName: zookeeper-volume-claims

For Kafka, the brokers look like this (one Deployment per broker, each with its own broker name, listeners and persistent volume claim):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka1
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: kafka1
    spec:
      containers:
      - env:
        - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
          value: "true"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "PLAINTEXT://<ip>:9092"
        - name: KAFKA_LISTENERS
          value: "PLAINTEXT://0.0.0.0:9092"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: <ip>:2181
        - name: KAFKA_BROKER_ID
          value: "1"
        name: kafka1
        image: confluentinc/cp-enterprise-kafka:5.0.1
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /var/lib/kafka
          name: kafka1-data
      nodeSelector:
        noderole: kafka2
      restartPolicy: Always
      volumes:
      - name: kafka1-data
        persistentVolumeClaim:
          claimName: kafka1-volume-claim

The cluster is up and running: I can create topics, and publish and consume messages.
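
(For reference, this can be checked from inside one of the broker pods with the CLI tools shipped in the Confluent image; a minimal sketch, where the ZooKeeper/broker addresses are placeholders and test1 is the topic shown further down:)

# create a topic with two partitions (ZooKeeper address is a placeholder)
kafka-topics --zookeeper <ip>:2181 --create --topic test1 --partitions 2 --replication-factor 3

# publish a few messages (broker address is a placeholder)
kafka-console-producer --broker-list <ip>:9092 --topic test1

# read them back from the beginning
kafka-console-consumer --bootstrap-server <ip>:9092 --topic test1 --from-beginning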
The file log.1 exists in /var/lib/zookeeper/log/version-2:

-rw-r--r-- 1 root root 67108880 Jan 18 11:34 log.1

If I exec into one of the brokers:

kubectl exec -it kafka3-97454b745-wddpv bash

I can see the two partitions of the topic:

drwxr-xr-x 2 root root 4096 Jan 21 10:34 test1-1
drwxr-xr-x 2 root root 4096 Jan 21 10:35 test1-0

The problem comes when I restart the VMs to which ZooKeeper and the brokers are assigned: one for zk and one for each broker (the three VMs that make up my Kubernetes cluster).
After the restart, there are no topics left in any of the brokers:

root@kafka3-97454b745-wddpv:/var/lib/kafka/data# ls -lrt
total 24
-rw-r--r-- 1 root root    0 Jan 21 10:56 cleaner-offset-checkpoint
-rw-r--r-- 1 root root   54 Jan 21 10:56 meta.properties
drwxr-xr-x 2 root root 4096 Jan 21 10:56 __confluent.support.metrics-0
drwxr-xr-x 2 root root 4096 Jan 21 10:56 _schemas-0
-rw-r--r-- 1 root root   49 Jan 21 11:10 recovery-point-offset-checkpoint
-rw-r--r-- 1 root root    4 Jan 21 11:10 log-start-offset-checkpoint
-rw-r--r-- 1 root root   49 Jan 21 11:11 replication-offset-checkpoint

And in ZooKeeper:

root@zookeeper-84bb68d45b-cklwm:/var/lib/zookeeper/log/version-2# ls -lrt
total 16
-rw-r--r-- 1 root root 67108880 Jan 21 10:56 log.1

If I list the topics, they are gone.
The Kubernetes cluster is running on Azure.
I assume there is no issue with the persistent volumes, because when I manually create a file on one of them it is still there after the restart. So I suspect it has to do with my Kafka configuration. As you can see, I am using the Confluent Docker images.
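
(The check mentioned above was roughly along these lines; the pod name is a placeholder and changes after a restart:)

# write a marker file onto the mounted volume inside the ZooKeeper pod
kubectl exec <zookeeper-pod> -- touch /var/lib/zookeeper/pv-check

# after restarting the VM, look for the file again from the new pod
kubectl exec <zookeeper-pod> -- ls -l /var/lib/zookeeper/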
Any help would be greatly appreciated.


yr9zkbsy #1

It was simply a misconfiguration of the mount paths. The paths have to point to the data and transaction-log folders, not to their parent folder.
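
For example, the mounts would look roughly like this (a sketch only, assuming the default directories used by the Confluent 5.0.1 images: /var/lib/zookeeper/data and /var/lib/zookeeper/log for ZooKeeper, /var/lib/kafka/data for Kafka; the second ZooKeeper claim name is made up and needs its own PVC):

# ZooKeeper container: mount the snapshot dir and the transaction-log dir, not /var/lib/zookeeper/
        volumeMounts:
        - mountPath: /var/lib/zookeeper/data
          name: zookeeper-data
        - mountPath: /var/lib/zookeeper/log
          name: zookeeper-log
      volumes:
      - name: zookeeper-data
        persistentVolumeClaim:
          claimName: zookeeper-volume-claims
      - name: zookeeper-log
        persistentVolumeClaim:
          claimName: zookeeper-log-volume-claims   # hypothetical second claim

# Kafka container: mount the data dir (log.dirs), not /var/lib/kafka
        volumeMounts:
        - mountPath: /var/lib/kafka/data
          name: kafka1-data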
