Using a window function (dense_rank()) in Spark SQL

ih99xse1  posted on 2021-06-20  in Mysql

I have a table with clients' purchase records, and I need to assign each purchase to a specific datetime window. One window is 8 days, so if I purchase today and again within 5 days, both purchases get window number 1; but if I purchase on day one and the next purchase falls into the following 8-day window, the first purchase is in window 1 and the last one is in window 2.

create temporary table transactions
 (client_id int,
 transaction_ts datetime,
 store_id int);

 insert into transactions values 
 (1,'2018-06-01 12:17:37', 1),
 (1,'2018-06-02 13:17:37', 2),
 (1,'2018-06-03 14:17:37', 3),
 (1,'2018-06-09 10:17:37', 2),
 (2,'2018-06-02 10:17:37', 1),
 (2,'2018-06-02 13:17:37', 2),
 (2,'2018-06-08 14:19:37', 3),
 (2,'2018-06-16 13:17:37', 2),
 (2,'2018-06-17 14:17:37', 3);

The window is 8 days. The problem is that I don't know how to specify dense_rank() over (partition by ...) so that it looks at the datetime and creates 8-day windows. So I need something like this (client_id, transaction_ts, store_id, window_number):

1,'2018-06-01 12:17:37', 1,1
1,'2018-06-02 13:17:37', 2,1
1,'2018-06-03 14:17:37', 3,1
1,'2018-06-09 10:17:37', 2,2
2,'2018-06-02 10:17:37', 1,1
2,'2018-06-02 13:17:37', 2,1
2,'2018-06-08 14:19:37', 3,2
2,'2018-06-16 13:17:37', 2,3
2,'2018-06-17 14:17:37', 3,3

Any idea how to get this? I can run it in MySQL or Spark SQL, but my MySQL doesn't support window functions (PARTITION BY). I still can't find a solution! Any help?

jum4pzuy1#

Most likely you can solve this in Spark SQL by combining a time window with a partitioned window function:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
import spark.implicits._

val purchases = Seq(
  (1, "2018-06-01 12:17:37", 1),
  (1, "2018-06-02 13:17:37", 2),
  (1, "2018-06-03 14:17:37", 3),
  (1, "2018-06-09 10:17:37", 2),
  (2, "2018-06-02 10:17:37", 1),
  (2, "2018-06-02 13:17:37", 2),
  (2, "2018-06-08 14:19:37", 3),
  (2, "2018-06-16 13:17:37", 2),
  (2, "2018-06-17 14:17:37", 3)
).toDF("client_id", "transaction_ts", "store_id")

purchases.show(false)
+---------+-------------------+--------+
|client_id|transaction_ts     |store_id|
+---------+-------------------+--------+
|1        |2018-06-01 12:17:37|1       |
|1        |2018-06-02 13:17:37|2       |
|1        |2018-06-03 14:17:37|3       |
|1        |2018-06-09 10:17:37|2       |
|2        |2018-06-02 10:17:37|1       |
|2        |2018-06-02 13:17:37|2       |
|2        |2018-06-08 14:19:37|3       |
|2        |2018-06-16 13:17:37|2       |
|2        |2018-06-17 14:17:37|3       |
+---------+-------------------+--------+

val groupedByTimeWindow = purchases
  .groupBy($"client_id", window($"transaction_ts", "8 days"))
  .agg(
    collect_list("transaction_ts").as("transaction_tss"),
    collect_list("store_id").as("store_ids"))

// Number the 8-day windows per client, ordered by their start time
val windowByClient = Window.partitionBy($"client_id").orderBy($"window.start")

val withWindowNumber = groupedByTimeWindow.withColumn("window_number", row_number().over(windowByClient))

withWindowNumber.orderBy("client_id", "window.start").show(false)

+---------+---------------------------------------------+---------------------------------------------------------------+---------+-------------+
|client_id|window                                       |transaction_tss                                                |store_ids|window_number|
+---------+---------------------------------------------+---------------------------------------------------------------+---------+-------------+
|1        |[2018-05-28 17:00:00.0,2018-06-05 17:00:00.0]|[2018-06-01 12:17:37, 2018-06-02 13:17:37, 2018-06-03 14:17:37]|[1, 2, 3]|1            |
|1        |[2018-06-05 17:00:00.0,2018-06-13 17:00:00.0]|[2018-06-09 10:17:37]                                          |[2]      |2            |
|2        |[2018-05-28 17:00:00.0,2018-06-05 17:00:00.0]|[2018-06-02 10:17:37, 2018-06-02 13:17:37]                     |[1, 2]   |1            |
|2        |[2018-06-05 17:00:00.0,2018-06-13 17:00:00.0]|[2018-06-08 14:19:37]                                          |[3]      |2            |
|2        |[2018-06-13 17:00:00.0,2018-06-21 17:00:00.0]|[2018-06-16 13:17:37, 2018-06-17 14:17:37]                     |[2, 3]   |3            |
+---------+---------------------------------------------+---------------------------------------------------------------+---------+-------------+

(The window boundaries show up at 17:00:00 because window() aligns windows to the Unix epoch in UTC and the timestamps are rendered in the session's local timezone.)
If you need one row per purchase again, you can explode the elements of the store_ids or transaction_tss arrays, as in the sketch below.
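For instance, a minimal sketch of flattening the result back to one row per purchase with its window number (assuming Spark 2.4+ for arrays_zip; the purchase and flat names are just placeholders):

// Zip the two collected arrays element-wise, then explode back to one
// row per purchase, keeping the window_number assigned above.
val flat = withWindowNumber
  .withColumn("purchase", explode(arrays_zip($"transaction_tss", $"store_ids")))
  .select(
    $"client_id",
    $"purchase.transaction_tss".as("transaction_ts"),
    $"purchase.store_ids".as("store_id"),
    $"window_number")

flat.orderBy("client_id", "transaction_ts").show(false)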
Hope this helps!

puruo6ea2#

Instead of the proposed Spark solution, I went with pure SQL logic and a cursor. It's not very efficient, but I needed to get the job done.
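That could look roughly like the following in MySQL 5.x (a hedged sketch, not the poster's actual code: assign_windows, the variable names, and the choice to open a new window once a purchase falls 8 or more days after the start of the current one are all my assumptions):

DELIMITER $$
-- Walk purchases per client in timestamp order and assign a window
-- number that increments whenever a purchase falls 8 or more days
-- after the start of the current window.
CREATE PROCEDURE assign_windows()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_client INT;
  DECLARE v_ts DATETIME;
  DECLARE v_prev_client INT DEFAULT NULL;
  DECLARE v_win_start DATETIME;
  DECLARE v_win_no INT;
  DECLARE cur CURSOR FOR
    SELECT client_id, transaction_ts FROM transactions
    ORDER BY client_id, transaction_ts;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  DROP TEMPORARY TABLE IF EXISTS transaction_windows;
  CREATE TEMPORARY TABLE transaction_windows
    (client_id INT, transaction_ts DATETIME, window_number INT);

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_client, v_ts;
    IF done THEN LEAVE read_loop; END IF;
    IF v_prev_client IS NULL OR v_client <> v_prev_client THEN
      -- First purchase for this client opens window 1
      SET v_prev_client = v_client, v_win_start = v_ts, v_win_no = 1;
    ELSEIF v_ts >= v_win_start + INTERVAL 8 DAY THEN
      -- 8 days elapsed since the window opened: start the next window
      SET v_win_start = v_ts, v_win_no = v_win_no + 1;
    END IF;
    INSERT INTO transaction_windows VALUES (v_client, v_ts, v_win_no);
  END LOOP;
  CLOSE cur;
END$$
DELIMITER ;

Note that these buckets are anchored at each client's first purchase, whereas Spark's window() anchors them at the epoch, so the numbering can differ from the expected output above.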
