Apache Kafka cluster not working with MapR Spark Streaming

djmepvbi · asked 2021-06-08 · in Kafka

We are running into an issue connecting to an Apache Kafka cluster from MapR Spark Streaming (1.6.1). The setup details are as follows:
• MapR cluster with Spark 1.6.1 (3-node cluster)
• Apache Kafka cluster v0.8.1.1 (5-node cluster)
We are using the "spark-streaming-kafka" library from MapR, version 1.6.1-mapr-1605. We also tried running in local mode with Apache Spark (instead of MapR Spark), and it works perfectly fine there.
Below is the stack trace of the error:

Exception in thread "main" org.apache.kafka.common.config.ConfigException: No bootstrap urls given in bootstrap.servers
        at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:57)
        at org.apache.kafka.clients.consumer.KafkaConsumer.initializeConsumer(KafkaConsumer.java:606)
        at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1563)
        at org.apache.spark.streaming.kafka.v09.KafkaCluster$$anonfun$getPartitions$1$$anonfun$1.apply(KafkaCluster.scala:54)
        at org.apache.spark.streaming.kafka.v09.KafkaCluster$$anonfun$getPartitions$1$$anonfun$1.apply(KafkaCluster.scala:54)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
        at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
        at org.apache.spark.streaming.kafka.v09.KafkaCluster$$anonfun$getPartitions$1.apply(KafkaCluster.scala:53)
        at org.apache.spark.streaming.kafka.v09.KafkaCluster$$anonfun$getPartitions$1.apply(KafkaCluster.scala:52)
        at org.apache.spark.streaming.kafka.v09.KafkaCluster.withConsumer(KafkaCluster.scala:164)
        at org.apache.spark.streaming.kafka.v09.KafkaCluster.getPartitions(KafkaCluster.scala:52)
        at org.apache.spark.streaming.kafka.v09.KafkaUtils$.getFromOffsets(KafkaUtils.scala:421)
        at org.apache.spark.streaming.kafka.v09.KafkaUtils$.createDirectStream(KafkaUtils.scala:292)
        at org.apache.spark.streaming.kafka.v09.KafkaUtils$.createDirectStream(KafkaUtils.scala:397)
        at org.apache.spark.streaming.kafka.v09.KafkaUtils.createDirectStream(KafkaUtils.scala)
        at com.cisco.it.log.KafkaDirectStreamin2.main(KafkaDirectStreamin2.java:111)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:742)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

PS: We are passing "metadata.broker.list" when creating the connection. Our understanding is that the Spark Streaming application cannot connect to ZooKeeper and therefore cannot obtain the bootstrap URLs. Or perhaps we do not have the right versions of the MapR and Kafka JARs; we took the JARs from the MapR side, but that still did not work.
We were able to test successfully with Apache Spark, but cannot get it to work on MapR.
Any help is appreciated.
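For reference, here is a minimal sketch of the parameter mismatch we suspect (a hypothetical parameter map, not our actual application code): the new consumer only reads `bootstrap.servers`, so a map carrying only `metadata.broker.list` leaves it with no broker addresses.

```java
import java.util.HashMap;
import java.util.Map;

public class KafkaParamsSketch {
    public static void main(String[] args) {
        // Key used by the old Kafka 0.8 direct-stream API
        // (the one we are currently passing):
        Map<String, String> oldParams = new HashMap<>();
        oldParams.put("metadata.broker.list", "broker1:9092,broker2:9092");

        // Key required by the Kafka 0.9 new consumer that the v09
        // connector instantiates; when it is absent,
        // ClientUtils.parseAndValidateAddresses throws
        // "No bootstrap urls given in bootstrap.servers":
        Map<String, String> newParams = new HashMap<>();
        newParams.put("bootstrap.servers", "broker1:9092,broker2:9092");

        System.out.println(oldParams.containsKey("bootstrap.servers")); // false
        System.out.println(newParams.containsKey("bootstrap.servers")); // true
    }
}
```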

e3bfsja2


Your stack trace contains org.apache.spark.streaming.kafka.v09, which suggests this is an implementation built on the new consumer API introduced in Kafka 0.9, and that API will not work against Kafka 0.8.1.1. You should probably try one of MapR's spark-streaming-kafka_2.10 libraries instead.
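If the project is built with Maven, swapping to the old-API (Kafka 0.8) connector might look like the fragment below. The exact artifact name and version string are assumptions here; verify the coordinates against MapR's Maven repository.

```xml
<!-- Kafka 0.8 direct-stream connector for Spark 1.6.1.
     The MapR version suffix is an assumption; check the MapR repo. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka_2.10</artifactId>
  <version>1.6.1-mapr-1605</version>
</dependency>
```

With this connector, the broker list is passed as "metadata.broker.list", matching what the application already does.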
