Spark error "A master URL must be set in your configuration" using IntelliJ IDEA

rta7y2nd  posted on 2021-06-07 in Kafka

I get the error below when trying to develop a Spark Streaming application using IntelliJ IDEA.
Environment:
Spark Core 2.2.0, IntelliJ IDEA 2017.3.5
Additional info: Spark is running in YARN mode.
The error:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Exception in thread "main" java.lang.ExceptionInInitializerError
    at kafka_stream.kafka_stream.main(kafka_stream.scala)
Caused by: org.apache.spark.SparkException: A master URL must be set in your configuration
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:376)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
    at kafka_stream.InitSpark$class.$init$(InitSpark.scala:15)
    at kafka_stream.kafka_stream$.<init>(kafka_stream.scala:6)
    at kafka_stream.kafka_stream$.<clinit>(kafka_stream.scala)
    ... 1 more

Process finished with exit code 1

Tried this:

val spark: SparkSession = SparkSession.builder()
    .appName("SparkStructStream")
    .master("spark://127.0.0.1:7077")
    //.master("local[*]")
    .getOrCreate()

Still getting the same master URL error.
Contents of the build.sbt file:

name := "KafkaSpark"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.11" % "2.2.0",
  "org.apache.spark" % "spark-sql_2.11" % "2.2.0",
  "org.apache.spark" % "spark-streaming_2.11" % "2.2.0",
  "org.apache.spark" % "spark-streaming-kafka_2.11" % "1.6.3"
)

// https://mvnrepository.com/artifact/org.apache.kafka/kafka_2.11
libraryDependencies += "org.apache.kafka" % "kafka_2.11" % "0.11.0.0"

// https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.11.0.0"

// https://mvnrepository.com/artifact/org.apache.kafka/kafka-streams
libraryDependencies += "org.apache.kafka" % "kafka-streams" % "0.11.0.0"

// https://mvnrepository.com/artifact/org.apache.kafka/connect-api
libraryDependencies += "org.apache.kafka" % "connect-api" % "0.11.0.0"

libraryDependencies += "com.databricks" %% "spark-avro" % "4.0.0"

resolvers += Resolver.mavenLocal
resolvers += "central maven" at "https://repo1.maven.org/maven2/"

Any help would be greatly appreciated.


u59ebvdq1#

It looks like the parameter is simply not being passed through, e.g. Spark was initialized earlier (your stack trace shows the session being built inside the InitSpark trait during object initialization, before your edited builder ever runs). However, you can try the VM option -Dspark.master=local[*] , which supplies the value everywhere the parameter is left undefined, so it should fix your problem. In IntelliJ it is under the list of run configurations -> Edit Configurations... -> VM Options.
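
If you would rather keep it in code, here is a minimal sketch of the same idea, assuming no SparkContext has been created yet in the JVM (the object name is a placeholder, not from the original post; the app name is taken from the question):

import org.apache.spark.sql.SparkSession

object KafkaStreamApp { // hypothetical entry point
  def main(args: Array[String]): Unit = {
    // Same effect as the VM option: SparkConf loads every JVM system
    // property whose name starts with "spark.".
    System.setProperty("spark.master", "local[*]")

    val spark = SparkSession.builder()
      .appName("SparkStructStream") // app name from the question
      .getOrCreate()

    // ... streaming logic ...
    spark.stop()
  }
}

Setting the property (or VM option) must happen before the first Spark initialization runs; setting it after a SparkContext already exists has no effect.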


nwlls2ji2#

Download winutils.exe , put the file at C:\hadoop\bin\winutils.exe , and include the following line just below your def main statement:

System.setProperty("hadoop.home.dir", "C:\\hadoop")

And it works fine.
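
For placement, a minimal sketch (the object name is a placeholder; the property must be set before any Spark or Hadoop class is touched):

import org.apache.spark.sql.SparkSession

object KafkaStreamApp { // hypothetical entry point
  def main(args: Array[String]): Unit = {
    // Point Hadoop at the directory containing bin\winutils.exe;
    // this must run before the SparkSession is created.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val spark = SparkSession.builder()
      .appName("SparkStructStream")
      .master("local[*]")
      .getOrCreate()

    // ... rest of the application ...
    spark.stop()
  }
}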
