Saving a Hive table from a JSON array with Java Spark SQL

waxmsbnn · Published 2021-06-01 in Hadoop
Dataset<Row> ds = spark.read()
        .option("multiLine", true)
        .option("mode", "PERMISSIVE")
        .json("/user/administrador/prueba_diario.txt");

ds.printSchema();

Dataset<Row> ds2 = ds.select("articles");

ds2.printSchema();
spark.sql("drop table if exists table1");
ds2.write().saveAsTable("table1");

My JSON has this schema:

root
 |-- articles: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- author: string (nullable = true)
 |    |    |-- content: string (nullable = true)
 |    |    |-- description: string (nullable = true)
 |    |    |-- publishedAt: string (nullable = true)
 |    |    |-- source: struct (nullable = true)
 |    |    |    |-- id: string (nullable = true)
 |    |    |    |-- name: string (nullable = true)
 |    |    |-- title: string (nullable = true)
 |    |    |-- url: string (nullable = true)
 |    |    |-- urlToImage: string (nullable = true)
 |-- status: string (nullable = true)
 |-- totalResults: long (nullable = true)

I want to save the elements of the array as a Hive table, with one column per struct field.
An example of the Hive table I want:

author (string)
content (string)
description (string)
publishedat (string)
source (struct<id:string,name:string>)
title (string)
url (string)
urltoimage (string)

The problem is that this saves a table with a single column named articles, and all of the content ends up inside that one column.

xmq68pz9 1#

It's a bit convoluted, but I found that this works:

import org.apache.spark.sql.functions._
ds.select(explode(col("articles")).as("exploded")).select("exploded.*").toDF()
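The snippet above is Scala, while the question's code is Java. A hedged Java translation (a sketch only; it assumes a running SparkSession with Hive support, and the variable names ds and flattened are illustrative) might look like:

```
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.explode;

// Explode the articles array: one output row per array element.
// Then "exploded.*" promotes each struct field to a top-level column.
Dataset<Row> flattened = ds
        .select(explode(col("articles")).as("exploded"))
        .select("exploded.*");

// Overwrite the Hive table with the flattened schema.
flattened.write().mode("overwrite").saveAsTable("table1");
```

With this, table1 gets the columns author, content, description, publishedAt, source, title, url, and urlToImage instead of a single articles column.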

I tried it with this input:

{
  "articles": [
    {
      "author": "J.K. Rowling",
      "title": "Harry Potter and the goblet of fire"
    },
    {
      "author": "George Orwell",
      "title": "1984"
    }
  ]
}

and it returned (after collecting to an array):

result = {Arrays$ArrayList@13423}  size = 2
 0 = {GenericRowWithSchema@13425} "[J.K. Rowling,Harry Potter and the goblet of fire]"
 1 = {GenericRowWithSchema@13426} "[George Orwell,1984]"
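To see what explode is doing here without a Spark cluster, the same flattening can be mimicked with plain Java collections. This is only an illustration of the semantics (the class ExplodeDemo and the hard-coded author/title fields are hypothetical, not part of any Spark API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ExplodeDemo {
    // Mimics explode(col("articles")) followed by select("exploded.*"):
    // one input "row" holds an array of structs; the output has one row
    // per struct, with the struct fields pulled out as columns.
    public static List<List<String>> explode(List<Map<String, String>> articles) {
        List<List<String>> rows = new ArrayList<>();
        for (Map<String, String> article : articles) {
            rows.add(List.of(article.get("author"), article.get("title")));
        }
        return rows;
    }

    public static void main(String[] args) {
        // Same sample data as the JSON above.
        List<Map<String, String>> articles = List.of(
                Map.of("author", "J.K. Rowling", "title", "Harry Potter and the goblet of fire"),
                Map.of("author", "George Orwell", "title", "1984"));

        for (List<String> row : explode(articles)) {
            System.out.println(row);
        }
    }
}
```

Running it prints one row per array element, matching the two GenericRowWithSchema rows collected above.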
