Cross-compiling Scala versions with different project module dependencies

sczxawaw · published 2021-05-16 · Spark

I am trying to cross-compile Scala versions using sbt-projectmatrix. However, I want the final module to use different internal dependencies depending on the target Scala version.
Here is a toy example of what I have now:

lazy val root: ProjectMatrix = (projectMatrix in file("."))
  .settings(commonSettings: _*)
  .settings(publish := {})
  .aggregate(
    (a_spark21.projectRefs ++
      a_spark22.projectRefs ++
      a_spark23.projectRefs ++
      a_spark24.projectRefs ++
      top.projectRefs): _*
  )

lazy val top: ProjectMatrix = (projectMatrix in file("vulcan-hive"))
  .disablePlugins(SitePreviewPlugin, SitePlugin, ParadoxPlugin, ParadoxSitePlugin)
  .settings(name := "vulcan-hive")
  .settings(commonSettings: _*)
  .jvmPlatform(scalaVersions = Seq(versions.scala211, versions.scala212))
  .customRow(
    scalaVersions = Seq(versions.scala211),
    axisValues = Seq(VirtualAxis.jvm),
    _.settings(
      SubModuleDependencies.hiveDependencies24,
      libraryDependencies ++= Dependencies.Spark24.Compile.all.map(_.force()),
    )
  )
  .customRow(
    scalaVersions = Seq(versions.scala212),
    axisValues = Seq(VirtualAxis.jvm),
    _.settings(
      SubModuleDependencies.hiveDependencies24,
      libraryDependencies ++= Dependencies.Spark24.Compile.all.map(_.force()),
    )
  )
  .dependsOn(a_spark24 % "compile->compile;test->test", a_spark23 % "compile->compile;test->test", a_spark22 % "compile->compile;test->test", a_spark21 % "compile->compile;test->test")

lazy val a_spark21: ProjectMatrix = (projectMatrix in file("a_spark21"))
  .jvmPlatform(scalaVersions = Seq(versions.scala211))
  .customRow(
    scalaVersions = Seq(versions.scala211),
    axisValues = Seq(VirtualAxis.jvm),
    _.settings(
      libraryDependencies ++= Dependencies.Spark21.Compile.all.map(_.force())
    )
  )

lazy val a_spark22: ProjectMatrix = ...
lazy val a_spark23: ProjectMatrix = ...

lazy val a_spark24: ProjectMatrix = (projectMatrix in file("a_spark24"))
  .jvmPlatform(scalaVersions = Seq(versions.scala211, versions.scala212))
  .customRow(
    scalaVersions = Seq(versions.scala211),
    axisValues = Seq(VirtualAxis.jvm),
    _.settings(
      libraryDependencies ++= Dependencies.Spark24.Compile.all.map(_.force())
    )
  )
  .customRow(
    scalaVersions = Seq(versions.scala212),
    axisValues = Seq(VirtualAxis.jvm),
    _.settings(
      libraryDependencies ++= Dependencies.Spark24.Compile.all.map(_.force())
    )
  )

This fails with the following error:

no rows were found in a_spark23 matching ProjectRow(true, List(PlatformAxis(jvm,JVM,jvm), ScalaVersionAxis(2.12.12,2.12))): List(ProjectRow(true, List(PlatformAxis(jvm,JVM,jvm), ScalaVersionAxis(2.11.12,2.11))), ProjectRow(true, List(PlatformAxis(jvm,JVM,jvm), ScalaVersionAxis(2.11.12,2.11))))

The error makes sense, because top is trying to find a 2.12 row of a_spark23 even though a_spark23 does not define one. Is there a way to move the dependsOn definition into the customRow argument? Or to access scalaBinaryVersion outside the settings closure, so that I can pass a different list to dependsOn?
The end product I want is top_2.12 depending on a_spark24_2.12, and top_2.11 depending on [a_spark21_2.11, a_spark22_2.11, a_spark23_2.11, a_spark24_2.11].
Any help or pointers would be greatly appreciated!
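One possible direction (a sketch, not a confirmed answer from this thread): sbt-projectmatrix lets you look up the concrete Project for a given row (e.g. `a_spark24.jvm(versions.scala212)`), and the transform passed to `customRow` receives a plain `Project`, so the `dependsOn` could move inside each row. Assuming the `versions.*`, `SubModuleDependencies`, and `Dependencies` objects from the question, `top` might look like this (the `.jvmPlatform` call is dropped, since the two `customRow` calls already define the 2.11 and 2.12 JVM rows):

```scala
lazy val top: ProjectMatrix = (projectMatrix in file("vulcan-hive"))
  .settings(name := "vulcan-hive")
  .settings(commonSettings: _*)
  .customRow(
    scalaVersions = Seq(versions.scala211),
    axisValues = Seq(VirtualAxis.jvm),
    // 2.11 row: depend on the 2.11 builds of all four a_spark modules
    _.dependsOn(
      Seq(a_spark21, a_spark22, a_spark23, a_spark24)
        .map(m => m.jvm(versions.scala211) % "compile->compile;test->test"): _*
    ).settings(
      SubModuleDependencies.hiveDependencies24,
      libraryDependencies ++= Dependencies.Spark24.Compile.all.map(_.force())
    )
  )
  .customRow(
    scalaVersions = Seq(versions.scala212),
    axisValues = Seq(VirtualAxis.jvm),
    // 2.12 row: only a_spark24 publishes a 2.12 build
    _.dependsOn(a_spark24.jvm(versions.scala212) % "compile->compile;test->test")
      .settings(
        SubModuleDependencies.hiveDependencies24,
        libraryDependencies ++= Dependencies.Spark24.Compile.all.map(_.force())
      )
  )
```

Because each row now declares its own dependencies, the matrix never tries to resolve a 2.12 row of a_spark21/22/23, which should avoid the "no rows were found" error above.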
