Split an array of structs into single-value columns in Spark Scala

rsaldnfx posted on 2021-05-29 in Spark

I have a dataframe with a single array-of-structs column, and I want to split out the nested values and add them to new columns as comma-separated strings. Example dataframe, column tests:

{id:1,name:foo},{id:2,name:bar}

Expected result dataframe:

tests                            tests_id  tests_name
[id:1,name:foo],[id:2,name:bar]  1, 2      foo, bar

I tried the code below, but I get an error:

df.withColumn("tests_name", concat_ws(",", explode(col("tests.name"))))

Error:

org.apache.spark.sql.AnalysisException: Generators are not supported when it's nested in expressions, but got: concat_ws(,, explode(tests.name AS `name`));
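
For reference, explode is a generator expression, and Spark only allows generators at the top level of a projection, not nested inside another expression such as concat_ws. One possible workaround is a rough sketch along these lines (the row_id column added with monotonically_increasing_id is a made-up key for this example, and collect_list does not guarantee element order):

import org.apache.spark.sql.functions._

// Sketch only: explode the array into one row per element, then fold the
// pieces back into comma-separated strings per original row.
val keyed = df.withColumn("row_id", monotonically_increasing_id())

val folded = keyed
  .select(col("row_id"), explode(col("tests")).as("t"))
  .groupBy("row_id")
  .agg(
    concat_ws(",", collect_list(col("t.id").cast("string"))).as("tests_id"),
    concat_ws(",", collect_list(col("t.name"))).as("tests_name")
  )

keyed.join(folded, Seq("row_id")).drop("row_id").show(false)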

sf6xfgos1#

It depends on which Spark version you are using. Assuming the dataframe schema is as follows:

root
 |-- test: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: long (nullable = true)
 |    |    |-- name: string (nullable = true)
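
A minimal sketch of how a dataframe with this schema might be built for testing (assuming a spark-shell session with a SparkSession named spark; the Item case class and sample rows are made up to match the output further below):

import spark.implicits._
import org.apache.spark.sql.functions._

// Hypothetical sample data matching the schema above
case class Item(id: Long, name: String)

val df = Seq(
  Seq(Item(1L, "foo"), Item(2L, "bar")),
  Seq(Item(3L, "foo"), Item(4L, "bar"))
).toDF("test")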

Spark 3.0.0

df.withColumn("id", concat_ws(",", transform($"test", x => x.getField("id"))))
  .withColumn("name", concat_ws(",", transform($"test", x => x.getField("name"))))
  .show(false)

Spark 2.4.0+

df.withColumn("id", concat_ws(",", expr("transform(test, x -> x.id)")))
.withColumn("name", concat_ws(",", expr("transform(test, x -> x.name)")))
.show(false)
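
The same idea can also be written purely as SQL expressions via selectExpr (a sketch; the cast just makes the string conversion of the numeric id field explicit):

// Spark 2.4.0+ again, expressed entirely through SQL expressions
df.selectExpr(
  "test",
  "concat_ws(',', transform(test, x -> cast(x.id as string))) AS id",
  "concat_ws(',', transform(test, x -> x.name)) AS name"
).show(false)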

Spark < 2.4

import org.apache.spark.sql.Row

// Without transform, UDFs can pull the nested fields out of the struct array
val extract_id = udf((test: Seq[Row]) => test.map(_.getAs[Long]("id")))
val extract_name = udf((test: Seq[Row]) => test.map(_.getAs[String]("name")))

df.withColumn("id", concat_ws(",", extract_id($"test")))
  .withColumn("name", concat_ws(",", extract_name($"test")))
  .show(false)

Output:

+--------------------+---+-------+
|test                |id |name   |
+--------------------+---+-------+
|[[1, foo], [2, bar]]|1,2|foo,bar|
|[[3, foo], [4, bar]]|3,4|foo,bar|
+--------------------+---+-------+
