I am trying to configure SSL for the Confluent Kafka Docker platform, but the broker fails at startup.
Log:
Command [/usr/local/bin/dub path /etc/kafka/secrets/kafka.server.keystore.jks exists] FAILED !
kafka_kafka-broker1_13d7835ad32d exited with code 1
docker-compose config:
version: '3'
services:
  zookeeper1:
    image: confluentinc/cp-zookeeper:5.1.0
    hostname: zookeeper1
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: 0.0.0.0:2888:3888
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    volumes:
      - zookeeper-data:/var/lib/zookeeper/data
      - zookeeper-log:/var/lib/zookeeper/log
  kafka-broker1:
    image: confluentinc/cp-kafka:5.1.0
    hostname: kafka-broker1
    ports:
      - "9092:9092"
      - "9093:9093"
    environment:
      KAFKA_LISTENERS: "PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093"
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafkassl.com:9092,SSL://kafkassl.com:9093"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 168
      KAFKA_OFFSETS_RETENTION_MINUTES: 43800
      KAFKA_SSL_KEYSTORE_FILENAME: kafka.server.keystore.jks
      KAFKA_SSL_TRUSTSTORE_LOCATION: /ssl/kafka.server.truststore.jks
      KAFKA_SSL_TRUSTSTORE_PASSWORD: pass
      KAFKA_SSL_KEYSTORE_LOCATION: /ssl/kafka.server.keystore.jks
      KAFKA_SSL_KEYSTORE_PASSWORD: pass
      KAFKA_SSL_KEY_PASSWORD: pass
    volumes:
      - kafka-data:/var/lib/kafka/data
      - /ssl:/etc/kafka/secrets
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    depends_on:
      - zookeeper1
volumes:
  zookeeper-data:
  zookeeper-log:
  kafka-data:
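For context on the failing check: the confluentinc images verify that the file named by KAFKA_SSL_KEYSTORE_FILENAME exists under /etc/kafka/secrets before starting the broker, which is exactly the `dub path ... exists` test in the log. Below is a minimal sketch of the SSL-related part of the broker service, assuming the keystore and truststore files sit in an `ssl` directory next to the compose file; the password file names (`keystore_creds` etc.) are assumptions, not taken from the question:

```yaml
# Hedged sketch, not a drop-in fix: the cp-kafka image resolves the
# *_FILENAME and *_CREDENTIALS values relative to /etc/kafka/secrets,
# so those files must be visible there inside the container.
services:
  kafka-broker1:
    image: confluentinc/cp-kafka:5.1.0
    environment:
      KAFKA_SSL_KEYSTORE_FILENAME: kafka.server.keystore.jks
      KAFKA_SSL_KEYSTORE_CREDENTIALS: keystore_creds      # file holding the keystore password (assumed name)
      KAFKA_SSL_KEY_CREDENTIALS: sslkey_creds             # assumed name
      KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.server.truststore.jks
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: truststore_creds  # assumed name
    volumes:
      - ./ssl:/etc/kafka/secrets   # relative host path => bind mount
```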
3 Answers

Answer 1 (7tofc5zh):
FWIW, here is what I used to solve this and the problem I ran into. This is part of my docker-compose file. If you open the kafka_secret.txt file, you will see only p@ssword inside. The one issue I want to call out: `- ./kafka/secrets:/etc/kafka/secrets` was being treated as a named volume instead of a bind mount. I confirmed this by inspecting the container (run `docker container ls` to get the container name, then inspect it); it showed a volume mount rather than a bind mount. To fix it, I removed the volume from Docker and started over: even when I recreated the Kafka container, the stale volume that was hanging around kept getting attached to it.
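The volume-vs-bind-mount distinction this answer draws can be illustrated with a short compose fragment (the paths and service name here are illustrative, not the answerer's exact file):

```yaml
services:
  kafka:
    volumes:
      # Bind mount: a path (./relative or /absolute) maps a host
      # directory straight into the container.
      - ./kafka/secrets:/etc/kafka/secrets
      # Named volume: a bare name refers to a Docker-managed volume,
      # which must also be declared under the top-level volumes: key.
      - kafka-data:/var/lib/kafka/data

volumes:
  kafka-data:
```

`docker inspect <container>` reports each mount's Type as `bind` or `volume`, which is how the mix-up can be confirmed; a stale named volume also survives recreating the container until it is removed with `docker volume rm`.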
Answer 2 (iibxawm4):
Here are the steps to start the Kafka docker-compose with SSL support (@senthil already gave some guidance in his comment):

1. In the docker-compose directory there is a secrets directory containing shell scripts that generate the keystore, the truststore, and the SSL passwords. Go to the root of the Kafka docker-compose directory and run the script; it generates the required files (e.g. ./secrets/create-certs).
2. Copy all of the generated files into the secrets directory.
3. Mount the secrets directory as a volume from the host into the container, by adding it to the volumes section of the docker-compose file.
4. Run docker-compose up.
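The volume snippet this answer refers to did not survive in the text; in the Confluent cp-docker-images examples it typically looks like the following (an assumption based on that repository's layout, with the secrets directory next to the compose file):

```yaml
services:
  kafka:
    volumes:
      - ./secrets:/etc/kafka/secrets   # host secrets dir -> the path the image checks
```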
Answer 3 (weylhg0b):
These steps worked for me on Windows:

1. Generate the keys using Windows WSL:
   cd $(pwd)/examples/kafka-cluster-ssl/secrets
   ./create-certs.sh   (type "yes" at every "Trust this certificate? [no]:" prompt)
2. Set the KAFKA_SSL_SECRETS_DIR environment variable in PowerShell:
   $env:KAFKA_SSL_SECRETS_DIR = "x\cp-docker-images\examples\kafka-cluster-ssl\secrets"
3. Run the Kafka SSL cluster node with the environment variables:
   docker run -d --net=host --name=kafka-ssl-1 `
     -e KAFKA_ZOOKEEPER_CONNECT=localhost:22181,localhost:32181,localhost:42181 `
     -e KAFKA_ADVERTISED_LISTENERS=SSL://localhost:29092 `
     -e KAFKA_SSL_KEYSTORE_FILENAME=kafka.broker1.keystore.jks `
     -e KAFKA_SSL_KEYSTORE_CREDENTIALS=broker1_keystore_creds `
     -e KAFKA_SSL_KEY_CREDENTIALS=broker1_sslkey_creds `
     -e KAFKA_SSL_TRUSTSTORE_FILENAME=kafka.broker1.truststore.jks `
     -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS=broker1_truststore_creds `
     -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SSL `
     -v ${env:KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets `
     confluentinc/cp-kafka:5.0.0