pyspark: adding an empty column to a Spark DataFrame

szqfcxe2 · published 2023-03-17 in Spark

As noted in many other places on the web, adding a new column to an existing DataFrame is not straightforward. Unfortunately, this capability matters (even though it is inefficient in a distributed environment), especially when trying to concatenate two DataFrames with unionAll.
What is the most elegant way to add a null column to a DataFrame to facilitate a unionAll?
My version looks like this:

from pyspark.sql.types import StringType
from pyspark.sql.functions import UserDefinedFunction

# A UDF that ignores its input and always returns None.
to_none = UserDefinedFunction(lambda x: None, StringType())
new_df = old_df.withColumn('new_column', to_none(old_df['any_col_from_old']))

c3frrgcw1#

All that is needed here is to import StringType and use lit with cast:

from pyspark.sql.types import StringType
from pyspark.sql.functions import lit

new_df = old_df.withColumn('new_column', lit(None).cast(StringType()))

Full example:

from pyspark.sql import Row

row = Row("foo", "bar")  # field names matching the schema printed below
df = sc.parallelize([row(1, "2"), row(2, "3")]).toDF()
df.printSchema()
# root
#  |-- foo: long (nullable = true)
#  |-- bar: string (nullable = true)

new_df = df.withColumn('new_column', lit(None).cast(StringType()))

new_df.printSchema()
# root
#  |-- foo: long (nullable = true)
#  |-- bar: string (nullable = true)
#  |-- new_column: string (nullable = true)

new_df.show()
# +---+---+----------+
# |foo|bar|new_column|
# +---+---+----------+
# |  1|  2|      null|
# |  2|  3|      null|
# +---+---+----------+

The Scala equivalent can be found here: Create new Dataframe with empty/null field values


svmlkihl2#

I would cast lit(None) to NullType rather than StringType. That way, if we ever need to filter out non-null rows on that column, it can be done easily, as follows:

from pyspark.sql import Row
from pyspark.sql.functions import col, lit
from pyspark.sql.types import NullType

row = Row("foo", "bar")
df = sc.parallelize([row(1, "2"), row(2, "3")]).toDF()

new_df = df.withColumn('new_column', lit(None).cast(NullType()))

new_df.printSchema()

new_df.filter(col("new_column").isNull()).show()
new_df.filter(col("new_column").isNotNull()).show()

If you do cast to StringType, be careful not to use lit("None") (with quotes): a filter with .isNull() on col("new_column") would then fail to find those records, because they hold the four-character string "None" rather than a real null.


zyfwsgd63#

An option that does not require importing StringType:

df = df.withColumn('foo', F.lit(None).cast('string'))

Full example:

from pyspark.sql import functions as F
df = spark.range(1, 3).toDF('c')

df = df.withColumn('foo', F.lit(None).cast('string'))

df.printSchema()
#     root
#      |-- c: long (nullable = false)
#      |-- foo: string (nullable = true)

df.show()
#     +---+----+
#     |  c| foo|
#     +---+----+
#     |  1|null|
#     |  2|null|
#     +---+----+

g6baxovj4#

df1.selectExpr("school", "null as col1").show()

Output:

+--------------------+----+
|              school|col1|
+--------------------+----+
|Shanghai Jiao Ton...|null|
|   Peking University|null|
|Shanghai Jiao Ton...|null|
|    Fudan University|null|
|    Fudan University|null|
| Tsinghua University|null|
|Shanghai Jiao Ton...|null|
| Tsinghua University|null|
| Tsinghua University|null|
|   Peking University|null|
+--------------------+----+

Or, in PySpark 3.2+ (which added the pandas API on Spark):

df1.pandas_api().assign(new_column=None)
