pyspark: forward-fill null values with the last non-null value

mu0hgdu0 · published 2021-05-27 in Spark

I have a DataFrame like this:

values = [
    ("2019-10-01", "004", 1.0),
    ("2019-10-02", "005", None),
    ("2019-10-03", "004", 2.0),
    ("2019-10-04", "004", 1.0),
    ("2019-10-05", "006", None),
]

df = spark.createDataFrame(values, ['time', 'mode', 'value'])

I want to replace each None in the last column with the previous non-null value, like this:

("2019-10-01", "004", 1.0),
    ("2019-10-02", "005", 1.0),
    ("2019-10-03", "004", 2.0),
    ("2019-10-04", "004", 1.0),
    ("2019-10-05", "006", 1.0)

I tried this:

import pyspark.sql.functions as f
from pyspark.sql.window import Window

df_2 = df.withColumn("value2", f.last('value', ignorenulls=True).over(Window.orderBy('time').rowsBetween(Window.unboundedPreceding, 0)))

This doesn't work: the new column still contains null values. How can I forward-fill the last column?

mspsb9vt · 1#

There is a small mistake in your window; try this:

from pyspark.sql import functions as f, Window

window_last = Window.orderBy("time")

df_2 = df.withColumn("value2", f.last("value", ignorenulls=True).over(window_last))

Result:

+----------+----+-----+------+
|      time|mode|value|value2|
+----------+----+-----+------+
|2019-10-01| 004|  1.0|   1.0|
|2019-10-02| 005| null|   1.0|
|2019-10-03| 004|  2.0|   2.0|
|2019-10-04| 004|  1.0|   1.0|
|2019-10-05| 006| null|   1.0|
+----------+----+-----+------+
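For intuition, the semantics of `last("value", ignorenulls=True)` over a window running from the first row up to the current row can be sketched in plain Python. Note that `forward_fill` below is a hypothetical helper written for illustration, not part of the pyspark API:

```python
def forward_fill(values):
    """Carry the last non-null value forward, mimicking
    last(col, ignorenulls=True) over a window ordered by time
    with an unbounded-preceding-to-current-row frame."""
    filled = []
    last_seen = None
    for v in values:
        if v is not None:
            last_seen = v  # remember the most recent non-null value
        filled.append(last_seen)  # stays None until a value is seen
    return filled

# The "value" column from the question, in time order:
print(forward_fill([1.0, None, 2.0, 1.0, None]))  # [1.0, 1.0, 2.0, 1.0, 1.0]
```

One caveat: ordering a window by `time` without a `partitionBy` pulls all rows into a single partition, which is fine for small data like this but does not scale to large DataFrames.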
