Spark: remove null values from JSON, or get only the values from the JSON

ma8fv8wu · asked 2021-07-12 · Spark

I have a Spark DataFrame with a column of JSON data:

df = spark.createDataFrame(
     [
         (1, '{"a": "hello"}'),
         (2, '{"b": ["foo", "bar"]}'),
         (3, '{"c": {"cc": "baz"}}'),
         (4, '{"d": [{"dd": "foo"}, {"dd": "bar"}]}'),
     ],
     schema=['id', 'jsonData'],
)

df.show()
+---+--------------------+
| id|            jsonData|
+---+--------------------+
|  1|      {"a": "hello"}|
|  2|{"b": ["foo", "ba...|
|  3|{"c": {"cc": "baz"}}|
|  4|{"d": [{"dd": "fo...|
+---+--------------------+

The key acts as a schema identifier; that is, the same key cannot appear with two different schemas.
I need to parse the JSON in this column and get the value out of each dict.
I run the following:

from pyspark.sql.functions import from_json
json_schema = spark.read.json(df.select("jsonData").rdd.map(lambda x: x[0])).schema
df = df.withColumn("jsonParsedData", from_json("jsonData", json_schema))

df.show()
+---+--------------------+--------------------+
| id|            jsonData|      jsonParsedData|
+---+--------------------+--------------------+
|  1|      {"a": "hello"}|          [hello,,,]|
|  2|{"b": ["foo", "ba...|    [, [foo, bar],,]|
|  3|{"c": {"cc": "baz"}}|         [,, [baz],]|
|  4|{"d": [{"dd": "fo...|[,,, [[foo], [bar]]]|
+---+--------------------+--------------------+

I get a jsonParsedData column with null values for the missing keys.
Question: how can I parse the jsonData column and get the values without nulls for the missing keys?
I think the jsonParsedData column should be of string type.
Expected result:

+---+--------------------+--------------------+
| id|            jsonData|      jsonParsedData|
+---+--------------------+--------------------+
|  1|      {"a": "hello"}|               hello|
|  2|{"b": ["foo", "ba...|          [foo, bar]|
|  3|{"c": {"cc": "baz"}}|       {"cc": "baz"}|
|  4|{"d": [{"dd": "fo...|[{"dd": "foo"}, {...|
+---+--------------------+--------------------+
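
For reference, the null-padded struct produced by from_json above could also be collapsed into a single string column by coalescing its fields, serializing the non-string ones back to JSON; a minimal sketch, assuming the inferred fields a, b, c and d from the sample data (df3 and jsonValue are illustrative names):

import pyspark.sql.functions as F

# Sketch: keep the one non-null field per row; to_json turns the
# array/struct fields back into JSON strings so all branches are strings.
df3 = df.withColumn(
    'jsonValue',
    F.coalesce(
        F.col('jsonParsedData.a'),
        F.to_json('jsonParsedData.b'),
        F.to_json('jsonParsedData.c'),
        F.to_json('jsonParsedData.d'),
    ),
)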

uttx8gqw #1

Try extracting the value from the JSON with regexp_extract:

import pyspark.sql.functions as F

df2 = df.withColumn('jsonParsedData', F.regexp_extract('jsonData', '\\{"[^"]+": (.*)\\}', 1))

df2.show(truncate=False)
+---+-------------------------------------+------------------------------+
|id |jsonData                             |jsonParsedData                |
+---+-------------------------------------+------------------------------+
|1  |{"a": "hello"}                       |"hello"                       |
|2  |{"b": ["foo", "bar"]}                |["foo", "bar"]                |
|3  |{"c": {"cc": "baz"}}                 |{"cc": "baz"}                 |
|4  |{"d": [{"dd": "foo"}, {"dd": "bar"}]}|[{"dd": "foo"}, {"dd": "bar"}]|
+---+-------------------------------------+------------------------------+
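
Note that with the regex approach, plain string values keep their surrounding quotes (row 1 gives "hello" rather than hello). A minimal sketch of one way to strip them with regexp_replace (df3 is just an illustrative name):

import pyspark.sql.functions as F

# Sketch: drop a single pair of enclosing double quotes, if present.
df3 = df2.withColumn(
    'jsonParsedData',
    F.regexp_replace('jsonParsedData', '^"(.*)"$', '$1'),
)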

Another, possibly better, way is to use from_json with a map<string, string> schema:

import pyspark.sql.functions as F

df2 = df.withColumn('jsonParsedData', F.map_values(F.from_json('jsonData', 'map<string,string>'))[0])

df2.show(truncate=False)
+---+-------------------------------------+---------------------------+
|id |jsonData                             |jsonParsedData             |
+---+-------------------------------------+---------------------------+
|1  |{"a": "hello"}                       |hello                      |
|2  |{"b": ["foo", "bar"]}                |["foo","bar"]              |
|3  |{"c": {"cc": "baz"}}                 |{"cc":"baz"}               |
|4  |{"d": [{"dd": "foo"}, {"dd": "bar"}]}|[{"dd":"foo"},{"dd":"bar"}]|
+---+-------------------------------------+---------------------------+
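
Here [0] takes the first map value, which works because every JSON object in the sample has exactly one key. If an object could hold several keys, one option would be to explode the map instead; a minimal sketch (df3 is just an illustrative name):

import pyspark.sql.functions as F

# Sketch: one output row per (key, value) pair instead of only the first value.
df3 = df.select(
    'id',
    F.explode(F.from_json('jsonData', 'map<string,string>')).alias('key', 'value'),
)
df3.show(truncate=False)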
