How to pair rows in a Spark DataFrame by timestamp range and row type

ghhkc1vu · published 2021-05-29 · Spark

I have a DataFrame like this:

+------------------+---------+------------+
|    Timestamp     | RowType |   Value    |
+------------------+---------+------------+
| 2020. 6. 5. 8:12 | X       | Null       |
| 2020. 6. 5. 8:13 | Y       | Null       |
| 2020. 6. 5. 8:14 | Y       | Null       |
| 2020. 6. 5. 8:15 | A       | SomeValue  |
| 2020. 6. 5. 8:16 | Y       | Null       |
| 2020. 6. 5. 8:17 | Y       | Null       |
| 2020. 6. 5. 8:18 | X       | Null       |
| 2020. 6. 5. 8:19 | Y       | Null       |
| 2020. 6. 5. 8:20 | Y       | Null       |
| 2020. 6. 6. 8:21 | A       | SomeValue2 |
| 2020. 6. 7. 8:22 | Y       | Null       |
| 2020. 6. 8. 8:23 | Y       | Null       |
| 2020. 6. 9. 8:24 | X       | Null       |
+------------------+---------+------------+

For each X-typed row, I want to pick up the value from the A-typed row that follows it. If there is no A-typed row between two X-typed rows, the X row's value should stay null. The desired result:

+------------------+---------+------------+
|    Timestamp     | RowType |   Value    |
+------------------+---------+------------+
| 2020. 6. 5. 8:12 | X       | SomeValue  |
| 2020. 6. 5. 8:18 | X       | SomeValue2 |
| 2020. 6. 9. 8:24 | X       | Null       |
+------------------+---------+------------+

Is this possible using window functions?

Answer from fdbelqdn:

If RowType only contains these values (X, Y, A), this should work:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.lag
import spark.implicits._  // enables the 'col symbol syntax

df.filter('RowType =!= "Y")
  .select('Timestamp, 'RowType, lag('Value, -1).over(Window.orderBy('Timestamp)).as("lag"))
  .filter('RowType === "X")
  .show()

Output:

+----------------+-------+-----------+
|       Timestamp|RowType|        lag|
+----------------+-------+-----------+
|2020. 6. 5. 8:12|      X|SomeValue  |
|2020. 6. 5. 8:18|      X|SomeValue2 |
|2020. 6. 9. 8:24|      X|       null|
+----------------+-------+-----------+
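To make the mechanics of the answer concrete, here is a plain-Python sketch of the same pairing logic (not Spark; names are illustrative only). It assumes the rows are already sorted by timestamp: drop the Y rows, then for each X row look at the next surviving row, which is exactly what `lag('Value, -1)` over the timestamp ordering does.

```python
def pair_x_with_next_a(rows):
    """rows: (timestamp, row_type, value) tuples sorted by timestamp."""
    # Keep only X and A rows, mirroring df.filter('RowType =!= "Y").
    kept = [r for r in rows if r[1] != "Y"]
    result = []
    for i, (ts, typ, _) in enumerate(kept):
        if typ != "X":
            continue
        # Peek at the next surviving row, mirroring lag('Value, -1).
        nxt = kept[i + 1] if i + 1 < len(kept) else None
        # Take its value only if it is an A row; otherwise stay None (null).
        value = nxt[2] if nxt is not None and nxt[1] == "A" else None
        result.append((ts, typ, value))
    return result
```

If two X rows are adjacent after filtering, the first X picks up the second X's null value, so the "no A between two Xs" case falls out naturally.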
