Connecting the debezium/connect:1.7 Docker image to a Kafka cluster secured with SSL/SASL

qxsslcnc · published 2023-10-15 · in Apache

The title sums up my struggle. I have seen plenty of examples of people in a "similar" situation... but my problem is that I cannot even get the image deployed and its REST endpoint available, because the image itself cannot connect to Kafka.
I cannot find anywhere in the documentation how to provide the API_KEY / API_SECRET to the Docker image so that the debezium/connect:1.7 image can connect to Kafka. My docker-compose looks like this:

wmv-debezium-connect:
    container_name: wmv-debezium-connect
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - .env
    restart: always
    # depends_on:
    #   kafka:
    #     condition: service_healthy
    #   mssql:
    #     condition: service_healthy
    ports:
      - ${REST_PORT:-8083}:${REST_PORT:-8083}
    expose:
      - "${REST_PORT:-8083}"
    healthcheck:
      test:
        [
          "CMD",
          "curl",
          "--silent",
          "--fail",
          "-X",
          "GET",
          "http://${DEBEZIUM_CONNECT_HOST:-wmv-debezium-connect}:${REST_PORT:-8083}/connectors",
        ]
      start_period: 10s
      interval: 10s
      timeout: 10s
      retries: 20

P.S.: depends_on is commented out because I am trying to use both the database and Kafka from an already-deployed cluster.
I provide the env vars via an .env file, which looks like this:

BOOTSTRAP_SERVERS=my_kafka_cluster_url:9092

# this is what I tried, but no success... Also tried without `CONNECT_` prefix
CONNECT_KAFKA_SECURITY_PROTOCOL=SASL_SSL
CONNECT_KAFKA_SASL_MECHANISM=PLAIN
CONNECT_KAFKA_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="my_api_key" password="my_api_secret";'

CONFIG_STORAGE_TOPIC=dbz-config
OFFSET_STORAGE_TOPIC=dbz-offset
STATUS_STORAGE_TOPIC=dbz-status
DB_KAFKA_HISTORY_TOPIC=data-changes
GROUP_ID=debezium-connect
KAFKA_API_KEY=my_api_key
KAFKA_API_SECRET=my_api_secret
REST_PORT=8083
DEBEZIUM_CONNECT_HOST=wmv-debezium-connect
DEBEZIUM_CONNECTOR_NAME=db-connector

# Database vars for connector (to use in the REST API)
DB_HOST_NAME=my_db_host
DB_SERVER_NAME=wmv
DB_PASSWORD=my_db_password
DB_USER=us
DB_PORT=1433
DB_NAME=MyDBName

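(For reference, the DB_* values above are meant to fill a Debezium SQL Server connector registration, POSTed to the worker's /connectors REST endpoint once it is up. A sketch of that payload, using the standard Debezium 1.x SQL Server connector property names and the values from this .env:)

```json
{
  "name": "db-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "my_db_host",
    "database.port": "1433",
    "database.user": "us",
    "database.password": "my_db_password",
    "database.dbname": "MyDBName",
    "database.server.name": "wmv",
    "database.history.kafka.bootstrap.servers": "my_kafka_cluster_url:9092",
    "database.history.kafka.topic": "data-changes"
  }
}
```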
Has anyone run into this and knows how to provide the SSL/SASL information when deploying this image? After deploying the image I tailed the logs and saw that it initializes its Kafka configuration incorrectly (because this security data is missing):

2023-08-24 07:28:02,965 INFO   ||  AdminClientConfig values: 
bootstrap.servers = [my_kafka_cluster_url:9092]
client.dns.lookup = use_all_dns_ips
client.id = 
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS

I am slowly going crazy here; it has been hard to find a solution.
Any help is appreciated!!

pbpqsu0x1#

For anyone who gets stuck here like I did: basically, any internal worker property some.variable can be supplied as the environment variable CONNECT_SOME_VARIABLE (prefix with CONNECT_, uppercase, and replace the dots with underscores).
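The mapping can be illustrated with a small hypothetical helper (my own sketch, mirroring the transformation the image's entrypoint applies to CONNECT_* variables):

```shell
# Hypothetical helper: strip the CONNECT_ prefix, lowercase the rest,
# and turn underscores into dots -- yielding the worker property name.
to_prop() {
  echo "${1#CONNECT_}" | tr '[:upper:]' '[:lower:]' | tr '_' '.'
}

to_prop CONNECT_SECURITY_PROTOCOL        # -> security.protocol (what the worker expects)
to_prop CONNECT_KAFKA_SECURITY_PROTOCOL  # -> kafka.security.protocol (what the .env in the question produced, which is not a real property)
```

This is why the CONNECT_KAFKA_* attempts in the question had no effect: the extra KAFKA_ in the middle ends up in the property name.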
That was it!
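Applied to the .env from the question, that rule gives something like the following (a sketch; the CONNECT_CONSUMER_* / CONNECT_PRODUCER_* copies map to the worker's consumer.* / producer.* properties, which its internal Kafka clients use, and may also be needed):

```shell
# Worker itself: security.protocol -> CONNECT_SECURITY_PROTOCOL
# (note: no KAFKA_ in the middle, unlike the attempt in the question)
CONNECT_SECURITY_PROTOCOL=SASL_SSL
CONNECT_SASL_MECHANISM=PLAIN
CONNECT_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="my_api_key" password="my_api_secret";'

# Internal consumer and producer, via the consumer.* / producer.* worker properties
CONNECT_CONSUMER_SECURITY_PROTOCOL=SASL_SSL
CONNECT_CONSUMER_SASL_MECHANISM=PLAIN
CONNECT_CONSUMER_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="my_api_key" password="my_api_secret";'
CONNECT_PRODUCER_SECURITY_PROTOCOL=SASL_SSL
CONNECT_PRODUCER_SASL_MECHANISM=PLAIN
CONNECT_PRODUCER_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="my_api_key" password="my_api_secret";'
```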
