In Cloud Foundry, I can produce messages to the non-SSL URL ("kafkaurl:9092"), but it does not work with the SSL URL ("kafkaurl:9093").
Kafka server version is 0.10.0.1 and client version is 0.10.0.0.
Here are the properties I use:
props.put(org.apache.kafka.clients.producer.ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, config.getString("obs_q_and_a_db.kafka.metadataBrokerList"))
props.put(org.apache.kafka.clients.producer.ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer])
props.put(org.apache.kafka.clients.producer.ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer])
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL")
props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "mySslFolder/answersapi.kafka.client.keystore.jks")
props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "sslTruststorePassword")
props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "mySslFolder/answersapi.kafka.client.truststore.jks")
props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "sslKeystorePassword")
props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "sslKeyPassword")
props.setProperty("metadata.broker.list", "kafkaURL:9093")
props.setProperty("serializer.class", serializerClass)
props.setProperty("message.send.max.retries", maxRetries.toString)
props.setProperty("request.required.acks", requiredAcks.toString)
props.setProperty("producer.type", producerType)
props.setProperty("batch.num.messages", batchNumMessages.toString)
When I use the Kafka shell on the Kafka server, the same properties and the same certificate files (truststore and keystore files) work fine with the following command:
kafka-console-producer --broker-list kafkaURL:9093 --producer.config config --topic myTopicName
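For reference, the `config` file passed via `--producer.config` would typically contain the same SSL settings as the code above. This is a sketch, assuming the same placeholder paths and passwords from the question:

```properties
# Assumed contents of the producer.config file (paths and passwords are placeholders)
security.protocol=SSL
ssl.truststore.location=mySslFolder/answersapi.kafka.client.truststore.jks
ssl.truststore.password=sslTruststorePassword
ssl.keystore.location=mySslFolder/answersapi.kafka.client.keystore.jks
ssl.keystore.password=sslKeystorePassword
ssl.key.password=sslKeyPassword
```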
Here is the error:
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT java.io.EOFException
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:99)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:77)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.producer.SyncProducer.send(SyncProducer.scala:119)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:68)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:79)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.utils.Logging$class.swallowError(Logging.scala:106)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.utils.CoreUtils$.swallowError(CoreUtils.scala:51)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:68)
2017-01-18T12:03:29.78-0600 [APP/PROC/WEB/0]OUT at kafka.producer.Producer.send(Producer.scala:77)
1 Answer
Our Kafka client code (version 0.9.0.1) stopped working when the server moved to 0.10.0.1. After we changed the client to 0.10.0.0, we still got the same EOFException. It was fixed by replacing the deprecated classes kafka.producer.{KeyedMessage, Producer, ProducerConfig} with the new 0.10.0 classes org.apache.kafka.clients.producer.{ProducerRecord, KafkaProducer, ProducerConfig}, respectively. The old deprecated classes worked fine when pointing at a non-SSL URL; they only failed when pointing at a secure Kafka URL (the old Scala producer does not support SSL).
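A minimal sketch of what the migration looks like, assuming the question's SSL settings (broker URL, file paths, and passwords are placeholders). The helper only builds the `Properties` object; the new-API producer usage is shown in comments since it requires the `kafka-clients` jar and a reachable broker:

```scala
import java.util.Properties

// Build producer properties using only the new org.apache.kafka.clients.producer
// config keys; the old kafka.producer keys (metadata.broker.list, serializer.class,
// producer.type, ...) are no longer used.
def buildSslProducerProps(brokers: String): Properties = {
  val props = new Properties()
  props.put("bootstrap.servers", brokers) // replaces metadata.broker.list
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  // SSL settings, same as in the question (placeholder paths/passwords)
  props.put("security.protocol", "SSL")
  props.put("ssl.truststore.location", "mySslFolder/answersapi.kafka.client.truststore.jks")
  props.put("ssl.truststore.password", "sslTruststorePassword")
  props.put("ssl.keystore.location", "mySslFolder/answersapi.kafka.client.keystore.jks")
  props.put("ssl.keystore.password", "sslKeystorePassword")
  props.put("ssl.key.password", "sslKeyPassword")
  props
}

// With kafka-clients on the classpath, the new API is used like this:
// val producer = new KafkaProducer[String, String](buildSslProducerProps("kafkaURL:9093"))
// producer.send(new ProducerRecord[String, String]("myTopicName", "someKey", "someValue"))
// producer.close()
```

The key point is that SSL support only exists in the new Java producer, so every deprecated `kafka.producer.*` class and its config keys must go at once; mixing old and new keys (as in the question) leaves the old SSL-unaware code path in use.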