Scala Spark: changing the case class type of a Dataset

yxyvkwin · published 2021-05-29 in Spark

I created a DataFrame matching MyData1, then added a column so that the new DataFrame follows MyData2. Now I want to return the new DataFrame as a Dataset, but I get the following error:

[info]   org.apache.spark.sql.AnalysisException: cannot resolve '`hashed`' given input columns: [id, description];
[info]   at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
[info]   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$3.applyOrElse(CheckAnalysis.scala:110)
[info]   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$3.applyOrElse(CheckAnalysis.scala:107)
[info]   at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:278)
[info]   at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:278)

Here is my code:

import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.functions.lit

case class MyData1(id: String, description: String)

case class MyData2(id: String, description: String, hashed: String) 

object MyObject {

    def read(arg1: String, arg2: String): Dataset[MyData2] = {
        var df: DataFrame = null
        val obj1 = new Matcher("cbutrer383", "e8f8chsdfd")
        val obj2 = new Matcher("cbutrer383", "g567g4rwew")
        val obj3 = new Matcher("cbutrer383", "567yr45e45")
        df = Seq(obj1, obj2, obj3).toDF("id", "description")

        df.withColumn("hashed", lit("hash"))

        val ds: Dataset[MyData2] = df.as[MyData2]
        ds
    }
}

I know the line below is probably where things go wrong, but I can't figure out why:

val ds: Dataset[MyData2] = df.as[MyData2]

I'm new to Spark, so I may be making a basic mistake. Can anyone help? TIA

qncylg1j · answer 1

You forgot to assign the newly created DataFrame back to df:

df = df.withColumn("hashed", lit("hash"))

As the Spark docs say for withColumn:

"Returns a new Dataset by adding a column or replacing the existing column that has the same name."
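The pitfall generalizes: Spark DataFrames are immutable, so every transformation returns a new value that must be captured. A minimal pure-Scala sketch of the same mistake, using an immutable Map in place of a DataFrame (illustrative only, no Spark required):

```scala
object ImmutabilityDemo {
  def main(args: Array[String]): Unit = {
    val row = Map("id" -> "1", "description" -> "foo")

    // Same mistake as in the question: the transformation's result is discarded.
    row + ("hashed" -> "hash")
    assert(!row.contains("hashed")) // the original value is unchanged

    // Fix: capture the returned value, just like df = df.withColumn(...)
    val updated = row + ("hashed" -> "hash")
    assert(updated("hashed") == "hash")
    println(updated)
  }
}
```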

qco9c6ql · answer 2

A better version of the read function would look like the following. Try to avoid null assignments and var; a return statement is not needed, since the last expression of a Scala function is its result.

def read(arg1: String, arg2: String): Dataset[MyData2] = {
  val obj1 = new Matcher("cbutrer383", "e8f8chsdfd")
  val obj2 = new Matcher("cbutrer383", "g567g4rwew")
  val obj3 = new Matcher("cbutrer383", "567yr45e45")
  Seq(obj1, obj2, obj3).toDF("id", "description")
    .withColumn("hashed", lit("hash"))
    .as[MyData2]
}
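The chaining style above isn't Spark-specific; the same pattern applies to any fluent Scala API. A small sketch (hypothetical helper names, no Spark required) contrasting the var/null version with the chained one:

```scala
object ChainingDemo {
  // var/null style, like the question's original read()
  def buildVerbose(): String = {
    var s: String = null
    s = Seq("a", "b", "c").mkString(",")
    s = s.toUpperCase
    s
  }

  // Chained style, like the answer's read(): no var, no null,
  // and the final expression is returned without a `return` keyword.
  def buildChained(): String =
    Seq("a", "b", "c")
      .mkString(",")
      .toUpperCase

  def main(args: Array[String]): Unit = {
    assert(buildVerbose() == buildChained())
    println(buildChained()) // prints "A,B,C"
  }
}
```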
