Kafka authentication failed due to: SSL handshake failed

1rhkuytd · published 2021-06-04 in Kafka
Follow (0) | Answers (2) | Views (4591)

I have to add SSL encryption and authentication to Kafka. This is what I did:

1. Generate a certificate for each Kafka broker:

```
keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey
```

2. Create a CA. The generated CA is a public/private key pair plus a certificate, and it is responsible for signing the other certificates:

```
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
```

3. Sign all broker certificates with the generated CA. First export the certificate from the keystore:

```
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
```

Then sign it with the CA:

```
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
```

4. Import both the CA certificate and the signed certificate into the keystore:

```
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
```

5. Import the CA into the client truststore and the broker/server truststore:

```
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
```

6. Add the following lines to server.properties:

```
listeners=PLAINTEXT://localhost:9092, SSL://localhost:9192
ssl.client.auth=required
ssl.keystore.location=/home/xrobot/kafka_2.12-2.1.0/certificate/server.keystore.jks
ssl.keystore.password=blablabla
ssl.key.password=blablabla
ssl.truststore.location=/home/xrobot/kafka_2.12-2.1.0/certificate/server.truststore.jks
ssl.truststore.password=blablabla
security.inter.broker.protocol=SSL
```

The problem is that when I start Kafka, I get this error:

```
[2019-02-26 19:03:59,783] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2019-02-26 19:04:00,011] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9192) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2019-02-26 19:04:00,178] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9192) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2019-02-26 19:04:00,319] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9192) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
```

Why?

EDIT: server.properties:

```
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://localhost:9092, SSL://localhost:9192
ssl.client.auth=required
ssl.keystore.location=/home/xrobot/kafka_2.12-2.1.0/certificate/server.keystore.jks
ssl.keystore.password=onailime
ssl.key.password=onailime
ssl.truststore.location=/home/xrobot/kafka_2.12-2.1.0/certificate/server.truststore.jks
ssl.truststore.password=onailime
security.inter.broker.protocol=SSL

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
# listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/home/xrobot/kafka_2.12-2.1.0/data/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
# log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
# log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
# log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
```

zookeeper.properties:

```
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# the directory where the snapshot is stored.
dataDir=/home/xrobot/kafka_2.12-2.1.0/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
```
osh3o9ms1#

Possibly your hostname and your certificate do not match. Add this line to the server.properties file:

```
ssl.endpoint.identification.algorithm=
```

Since Kafka version 2.0.0, server hostname verification is enabled by default for client connections as well as inter-broker connections. By adding this line you assign an empty string to ssl.endpoint.identification.algorithm, which disables that verification.
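To see why a certificate generated for `CN=localhost` fails when the broker is reached under any other name, the essence of hostname verification can be sketched in a few lines (a simplified illustration of the RFC 6125-style matching rules, not Kafka's actual implementation):

```python
def hostname_matches(hostname, cert_names):
    """Simplified hostname check: does hostname match any of the
    certificate's SAN/CN entries? A wildcard is honored only in the
    leftmost label (e.g. '*.example.com')."""
    host_labels = hostname.lower().split(".")
    for name in cert_names:
        labels = name.lower().split(".")
        if len(labels) != len(host_labels):
            continue                      # label counts must agree
        if labels[1:] != host_labels[1:]:
            continue                      # all non-leftmost labels must match exactly
        if labels[0] == "*" or labels[0] == host_labels[0]:
            return True
    return False

# A certificate issued to CN=localhost only matches the literal name "localhost":
print(hostname_matches("localhost", ["localhost"]))                # True
print(hostname_matches("127.0.0.1", ["localhost"]))                # False
print(hostname_matches("broker1.example.com", ["*.example.com"]))  # True
```

The longer-term fix is therefore to regenerate the broker key with a SAN (`-ext SAN=DNS:...` in keytool) that matches the name clients actually connect to, rather than permanently disabling verification.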

vq8itlhq2#

This is an old thread, but I can share some lessons learned the hard way: authentication can fail for many reasons, so it is necessary to understand exactly why the SSL handshake failed. A pcap of the SSL handshake messages will definitely help.
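Short of a full pcap, a quick way to surface the underlying handshake error from the client side is a minimal TLS probe (a sketch; the host, port, and CA file are assumptions taken from the question, not part of this answer):

```python
import socket
import ssl

def probe(host, port, cafile=None):
    """Attempt a TLS handshake and report the outcome as a string."""
    ctx = ssl.create_default_context(cafile=cafile)
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return "handshake OK: " + tls.version()
    except ssl.SSLError as e:           # must come before OSError (it is a subclass)
        return "handshake failed: " + str(e)
    except OSError as e:                # DNS failure, refused connection, timeout
        return "connection failed: " + str(e)

# e.g. probe("localhost", 9192, cafile="ca-cert") against the SSL listener from
# the question; the SSLError text (certificate verify failed, hostname mismatch,
# wrong version number, ...) is far more specific than the broker's log line.
```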
If this is about a client connecting to the broker: in server.properties you have

```
ssl.client.auth=required
```

but it should be

```
ssl.client.auth=none
```

if the client does not authenticate itself to the server. The question describes no step where the client creates its own key/certificate, so the broker must not require client authentication.
Also, for testing purposes only, you can set the following on the client side:

```
enable.ssl.certificate.verification=false
```

With this property set to false, the client does not verify the server's certificate against the CA (note that this is a librdkafka client setting, not a Java client one). It is useful when the SSL handshake error is caused by the server's certificate failing verification.
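Put together, a librdkafka-style client configuration for such a test might look like this (the CA file name is taken from the question; this is a debugging sketch only and must never be used in production):

```
security.protocol=SSL
ssl.ca.location=ca-cert
# Test only: skip verifying the broker certificate against the CA
enable.ssl.certificate.verification=false
```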
