How to use Delta Lake in a regular Scala project in an IDE

yptwkmov · asked 2021-05-27 · tagged Spark

I added the Delta dependency in my build.sbt:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "org.apache.spark" %% "spark-hive" % sparkVersion,
  // logging
  "org.apache.logging.log4j" % "log4j-api" % "2.4.1",
  "org.apache.logging.log4j" % "log4j-core" % "2.4.1",
  // postgres for DB connectivity
  "org.postgresql" % "postgresql" % postgresVersion,
  "io.delta" %% "delta-core" % "0.7.0"
)

However, I don't know what configuration the Spark session must include. The code below fails:

val spark = SparkSession.builder()
    .appName("Spark SQL Practice")
    .config("spark.master", "local")
    .config("spark.network.timeout"  , "10000000s")//to avoid Heartbeat exception
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()

Exception:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/catalyst/plans/logical/MergeIntoTable

qcbq4gxm (answer 1)

This happens when a class your code depends on was present at compile time but cannot be found at runtime. Look for differences between your build-time and runtime classpaths.
More specific to your scenario:

If you get java.lang.NoClassDefFoundError for
org/apache/spark/sql/catalyst/plans/logical/MergeIntoTable,
the Spark JAR version on your classpath does not contain MergeIntoTable.scala.
The solution is to upgrade to the latest Apache Spark (3.x), which ships with
org/apache/spark/sql/catalyst/plans/logical/MergeIntoTable.scala.

More info in the Spark 3.x.x upgrade and release notes: https://github.com/apache/spark/pull/26167.
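
Concretely, the fix in build.sbt is to pin Spark to a 3.x release that matches delta-core 0.7.0 (3.0.0 is shown here as an assumed target; adjust to your needs):

```scala
// build.sbt: Spark 3.0.x is required by delta-core 0.7.0;
// Spark 2.4.x JARs do not contain MergeIntoTable.
val sparkVersion = "3.0.0"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql"  % sparkVersion,
  "io.delta"         %% "delta-core" % "0.7.0"
)
```

If you're unsure which Spark version actually ends up on the classpath, `sbt dependencyTree` (built into sbt 1.4+; older versions need the sbt-dependency-graph plugin) shows the resolved versions.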


dgsult0t (answer 2)

Here's an example project that may help you.
The build.sbt file should include these dependencies:

libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.0.0" % "provided"
libraryDependencies += "io.delta" %% "delta-core" % "0.7.0" % "provided"
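
Note that with `"provided"` scope, the Spark jars are excluded from the classpath that `sbt run` uses, so running from sbt or an IDE will fail with class-not-found errors. A common workaround (a standard sbt recipe, not specific to this project) is to put the provided dependencies back on the run classpath:

```scala
// build.sbt: include "provided" dependencies when using `sbt run`
Compile / run := Defaults.runTask(
  Compile / fullClasspath,
  Compile / run / mainClass,
  Compile / run / runner
).evaluated
```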

I think you need Spark 3 for Delta Lake 0.7.0.
You don't need any special SparkSession configuration options; something like this should work:

lazy val spark: SparkSession = {
  SparkSession
    .builder()
    .master("local")
    .appName("spark session")
    .config("spark.databricks.delta.retentionDurationCheck.enabled", "false")
    .getOrCreate()
}
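
To sanity-check the setup, a minimal Delta round-trip with the session above might look like this (the path and data are illustrative):

```scala
import spark.implicits._

// Write a tiny DataFrame as a Delta table, then read it back.
// "/tmp/delta-demo" is an arbitrary local path used for illustration.
val path = "/tmp/delta-demo"
Seq((1, "a"), (2, "b")).toDF("id", "value")
  .write.format("delta").mode("overwrite").save(path)

spark.read.format("delta").load(path).show()
```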

Here's a blog post that can help you get started with Delta Lake.


2nc8po8w (answer 3)

You need to upgrade Apache Spark. MergeIntoTable was introduced in v3.0.0. Links to the source: AstBuilder.scala, Analyzer.scala, the GitHub pull request, and the release notes (see the feature enhancements section).
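
For context, MergeIntoTable is the logical plan behind Delta's MERGE support. Once you are on Spark 3.0.x with delta-core 0.7.0, an upsert can be written with the DeltaTable API; the path, columns, and data below are illustrative, and `spark.implicits._` is assumed to be imported:

```scala
import io.delta.tables.DeltaTable

// Upsert `updates` into an existing Delta table at /tmp/delta-demo,
// matching rows on an assumed `id` column.
val target  = DeltaTable.forPath(spark, "/tmp/delta-demo")
val updates = Seq((2, "b2"), (3, "c")).toDF("id", "value")

target.as("t")
  .merge(updates.as("u"), "t.id = u.id")
  .whenMatched.updateAll()
  .whenNotMatched.insertAll()
  .execute()
```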
