Saving a DataFrame from Scala/IntelliJ throws an exception

8ehkhllq · asked 2021-05-29 · Hadoop

I am trying to load a CSV or XML file into a pre-existing Hive table from IntelliJ using Spark and Scala, and the final step of saving the DataFrame throws the exceptions listed below.

Ironically, the same code runs fine in spark-shell without any problem.
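For context, the pre-existing target table test2 is partitioned by designation (implied by partitionBy("designation") in the code below). A minimal sketch of how such a table could have been created, assuming a HiveContext like the ones built in the snippets that follow and purely hypothetical data columns (name, id), would be:

// Hypothetical DDL sketch; only the designation partition column is implied by the question.
hiveContext.sql(
  """CREATE TABLE IF NOT EXISTS test2 (name STRING, id INT)
    |PARTITIONED BY (designation STRING)
    |STORED AS PARQUET""".stripMargin)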

1. When I use HiveContext and insertInto():

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sparkConf = new SparkConf().setAppName("TEST")
val sc = new SparkContext(sparkConf)
val hiveContext = new HiveContext(sc)
// enable dynamic partitioning so the partitioned insert below is allowed
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
println("CONFIG DONE!!!!!")
val xml = hiveContext.read.format("com.databricks.spark.xml").option("rowTag","employee").load("/PUBLIC_TABLESPACE/updatedtest1.xml")
println("XML LOADED!!!!!!")
xml.write.format("parquet").mode("overwrite").partitionBy("designation").insertInto("test2")
println("TABLE SAVED!!!!!!!")

Exception in thread "main" java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path, java.lang.String, java.util.Map, boolean, int, boolean, boolean)

2. When I use HiveContext and saveAsTable():

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sparkConf = new SparkConf().setAppName("TEST")
val sc = new SparkContext(sparkConf)
val hiveContext = new HiveContext(sc)
// enable dynamic partitioning so the partitioned write below is allowed
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
println("CONFIG DONE!!!!!")
val xml = hiveContext.read.format("com.databricks.spark.xml").option("rowTag","employee").load("/PUBLIC_TABLESPACE/updatedtest1.xml")
println("XML LOADED!!!!!!")

xml.write.format("parquet")
  .mode("overwrite")
  .partitionBy("designation")
  .saveAsTable("test2")

Exception in thread "main" java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path, java.lang.String, java.util.Map, boolean, int, boolean, boolean)

3. When I use SQLContext and insertInto():

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sparkConf = new SparkConf().setAppName("TEST")
val sc = new SparkContext(sparkConf)
val hiveContext = new SQLContext(sc)  // a plain SQLContext, despite the variable name
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
println("CONFIG DONE!!!!!")
val xml = hiveContext.read.format("com.databricks.spark.xml").option("rowTag","employee").load("/PUBLIC_TABLESPACE/updatedtest1.xml")
println("XML LOADED!!!!!!") xml.write.format("parquet").mode("overwrite").partitionBy("designation").insertInto("test2")
println("TABLE SAVED!!!!!!!")

Exception in thread "main" org.apache.spark.sql.AnalysisException: Table not found: test2;

4. When I use SQLContext and saveAsTable():

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sparkConf = new SparkConf().setAppName("TEST")
val sc = new SparkContext(sparkConf)
val hiveContext = new SQLContext(sc)  // a plain SQLContext, despite the variable name
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
println("CONFIG DONE!!!!!") 
val xml = hiveContext.read.format("com.databricks.spark.xml").option("rowTag","employee").load("/PUBLIC_TABLESPACE/updatedtest1.xml")
println("XML LOADED!!!!!!") xml.write.format("parquet").mode("overwrite").partitionBy("designation").saveAsTable("test2")
println("TABLE SAVED!!!!!!!")

Exception in thread "main" java.lang.RuntimeException: Tables created with SQLContext must be TEMPORARY. Use a HiveContext instead.
EDIT: adding the build.sbt file:

name := "testonSpark"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.6.0"
libraryDependencies += "com.databricks" % "spark-csv_2.10" % "1.5.0"
libraryDependencies += "org.apache.spark" % "spark-hive_2.10" % "1.6.0"

Answer #1 (pgccezyw):

Try using an sbt file like this:

val sparkVersion = "1.6.0"
resolvers ++= Seq(
  "apache-snapshots" at "http://repository.apache.org/snapshots/"
)
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "org.apache.spark" %% "spark-hive" % sparkVersion,
  "org.apache.spark" %% "spark-mllib" % sparkVersion
)
libraryDependencies += "com.databricks" % "spark-csv_2.10" % "1.5.0"
