Spark Streaming + Kafka "JobGenerator" java.lang.NoSuchMethodError

csbfibhn · posted 2021-06-08 in Kafka · 1 answer

I am new to Spark Streaming and Kafka, and I don't understand this runtime exception. I have already installed a Kafka server.

Exception in thread "JobGenerator" java.lang.NoSuchMethodError: org.apache.spark.streaming.scheduler.InputInfoTracker.reportInfo(Lorg/apache/spark/streaming/Time;Lorg/apache/spark/streaming/scheduler/StreamInputInfo;)V
at org.apache.spark.streaming.kafka.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:166)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:399)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:344)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:342)
at scala.Option.orElse(Option.scala:257)

Here is my code:

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;

import kafka.serializer.StringDecoder;
import scala.Tuple2;

import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class TwitterStreaming {
    // Kafka setup:
    public static final String ZKQuorum = "localhost:2181";
    public static final String ConsumerGroupID = "ingi2145-analytics";
    public static final String ListTopics = "newTweet";
    public static final String ListBrokers = "localhost:9092"; // I'm not sure about ...

    @SuppressWarnings("deprecation")
    public static void main(String[] args) throws Exception {
        // Location of the Spark directory
        String sparkHome = "usr/local/spark";
        // URL of the Spark cluster
        String sparkUrl = "local[4]";
        // Location of the required JAR files
        String jarFile = "target/analytics-1.0.jar";

        // Generate Spark's streaming context
        JavaStreamingContext jssc = new JavaStreamingContext(
            sparkUrl, "Streaming", new Duration(1000), sparkHome, new String[]{jarFile});

        // Start the Kafka stream
        HashSet<String> topicsSet = new HashSet<String>(Arrays.asList(ListTopics.split(",")));
        HashMap<String, String> kafkaParams = new HashMap<String, String>();
        kafkaParams.put("metadata.broker.list", ListBrokers);

        //JavaPairReceiverInputDStream<String, String> kafkaStream = KafkaUtils.createStream(jssc, ZKQuorum, ConsumerGroupID, mapPartitionsPerTopics);
        // Create a direct Kafka stream with brokers and topics
        JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
            jssc,
            String.class,
            String.class,
            StringDecoder.class,
            StringDecoder.class,
            kafkaParams,
            topicsSet);

        // Get the JSON payload (the message value):
        JavaDStream<String> json = messages.map(
            new Function<Tuple2<String, String>, String>() {
                public String call(Tuple2<String, String> tuple2) {
                    return tuple2._2();
                }
            });

        jssc.start();
        jssc.awaitTermination();
    }
}

The goal of this project is to compute the 10 best hashtags from the Twitter stream by using a Kafka queue. The code runs fine without Kafka. Do you have any idea what the problem is?
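Setting the Kafka/Spark error aside, the ranking step of the stated goal (top 10 hashtags) is just counting and sorting, so it can be prototyped without Spark at all. A minimal plain-Java sketch (the `TopHashtags` class and its `topN` helper are hypothetical names, not part of the project above):

```java
import java.util.*;
import java.util.regex.*;
import java.util.stream.*;

public class TopHashtags {
    // Count every #tag occurrence across the tweets, then keep the n most frequent.
    static List<String> topN(List<String> tweets, int n) {
        Pattern tag = Pattern.compile("#\\w+");
        Map<String, Integer> counts = new HashMap<>();
        for (String tweet : tweets) {
            Matcher m = tag.matcher(tweet);
            while (m.find()) {
                counts.merge(m.group(), 1, Integer::sum);
            }
        }
        // Sort by descending count and return only the hashtag strings.
        return counts.entrySet().stream()
            .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
            .limit(n)
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> tweets = Arrays.asList("#spark rocks", "#kafka and #spark", "#spark");
        // "#spark" appears 3 times, "#kafka" once, so "#spark" ranks first.
        System.out.println(topN(tweets, 10));
    }
}
```

In the streaming job the same idea would be expressed with `mapToPair`/`reduceByKey` over a window of the `json` DStream, but the counting logic is identical.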

9rnv2umw1#

I had the same problem, and it came down to the Spark version I was using. I tried 1.5, then 1.4, and the version that finally worked for me was 1.6. So make sure the Kafka integration version you use is compatible with your Spark version. In my case I used Kafka 2.10-0.10.1.1 with spark-1.6.0-bin-hadoop2.3.
Also (very important), make sure there are no permission ("forbidden") errors in the log files. You must grant the proper security permissions on the folders Spark uses, otherwise you may get many errors that have nothing to do with the application itself but with incorrect security settings.
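To make the versions line up at build time, the key is that the streaming core and the Kafka integration artifact come from the same Spark release and the same Scala binary version. A sketch of what that might look like, assuming a Maven build with Scala 2.10 artifacts (the exact versions here are illustrative, matching the answer above):

```xml
<!-- Both artifacts pinned to the same Spark release (1.6.0) and the same
     Scala binary version (2.10). Mixing releases between these two is a
     typical cause of NoSuchMethodError at runtime. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.10</artifactId>
  <version>1.6.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka_2.10</artifactId>
  <version>1.6.0</version>
</dependency>
```

A NoSuchMethodError like the one in the question usually means the code compiled against one version of a class but a different version was found on the cluster's classpath, which is why aligning these versions (rather than changing the code) fixes it.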
