Duplicating records across date gaps within a selected time interval in a PySpark DataFrame

wlsrxk51 posted on 2021-05-26 in Spark

I have a PySpark DataFrame that tracks changes in product prices and statuses over several months. A new row is created only when something (status or price) changed relative to the previous month, as in the dummy data below:

    +----------+---------+-----+-------+
    |product_id|status   |price|month  |
    +----------+---------+-----+-------+
    |1         |available|5    |2019-10|
    |1         |available|8    |2020-08|
    |1         |limited  |8    |2020-10|
    |2         |limited  |1    |2020-09|
    |2         |limited  |3    |2020-10|
    +----------+---------+-----+-------+

I want to create a DataFrame that shows a value for each of the last 6 months. That means I need to duplicate records wherever the DataFrame above has a gap. For example, if the last 6 months are 2020-07, 2020-08, ..., 2020-12, the result for the DataFrame above should be:

    +----------+---------+-----+-------+
    |product_id|status   |price|month  |
    +----------+---------+-----+-------+
    |1         |available|5    |2020-07|
    |1         |available|8    |2020-08|
    |1         |available|8    |2020-09|
    |1         |limited  |8    |2020-10|
    |1         |limited  |8    |2020-11|
    |1         |limited  |8    |2020-12|
    |2         |limited  |1    |2020-09|
    |2         |limited  |3    |2020-10|
    |2         |limited  |3    |2020-11|
    |2         |limited  |3    |2020-12|
    +----------+---------+-----+-------+

Note that for product_id=1 there is an older record from 2019-10 that is propagated forward until 2020-08 and then trimmed to the window (it appears only as the 2020-07 row), while for product_id=2 there is no record before 2020-09, so the months 2020-07 and 2020-08 are not filled in for it (the product did not exist before 2020-09).
Since the DataFrame consists of millions of records, a "brute force" solution that loops over and checks every product_id is quite slow. It seems this should be solvable with window functions, by creating another column holding the next month and filling the gaps based on it, but I can't figure out how to do that.
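For reference, the desired forward-fill can be sketched in plain Python (no Spark). This is essentially the brute-force logic that the window-function answers below vectorize; the helper names and data layout here are illustrative, not from the original post.

```python
def month_range(start, end):
    """Yield (year, month) pairs from start to end inclusive."""
    y, m = start
    while (y, m) <= end:
        yield (y, m)
        y, m = (y + 1, 1) if m == 12 else (y, m + 1)

def forward_fill(records, window_start, window_end):
    """records: {product_id: sorted list of ((year, month), status, price)}.
    For every month in the window, carry the latest known (status, price)
    forward; emit nothing before a product's first record."""
    out = []
    for pid, hist in records.items():
        for ym in month_range(window_start, window_end):
            # latest record at or before this month, if any
            past = [r for r in hist if r[0] <= ym]
            if past:
                _, status, price = past[-1]
                out.append((pid, status, price, "%04d-%02d" % ym))
    return out

records = {
    1: [((2019, 10), "available", 5), ((2020, 8), "available", 8),
        ((2020, 10), "limited", 8)],
    2: [((2020, 9), "limited", 1), ((2020, 10), "limited", 3)],
}
result = forward_fill(records, (2020, 7), (2020, 12))
# reproduces the 10 expected rows above
```

This is O(months x records) per product, which is exactly why it does not scale to millions of records and a window-function approach is preferable.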


kx5bkwkv1#

Following up on @jxc's comment, I have prepared an answer for this use case.
Below is the code snippet.

Import the Spark SQL functions:

    from pyspark.sql import functions as F, Window

Prepare the sample data:

    simpleData = [(1, "Available", 5, "2020-07"),
                  (1, "Available", 8, "2020-08"),
                  (1, "Limited",   8, "2020-12"),
                  (2, "Limited",   1, "2020-09"),
                  (2, "Limited",   3, "2020-12")]

    columns = ["product_id", "status", "price", "month"]

Create a DataFrame from the sample data:

    df = spark.createDataFrame(data=simpleData, schema=columns)

Add a date column to the DataFrame to get a properly formatted date:

    df0 = df.withColumn("date", F.to_date('month', 'yyyy-MM'))
    df0.show()

    +----------+---------+-----+-------+----------+
    |product_id|   status|price|  month|      date|
    +----------+---------+-----+-------+----------+
    |         1|Available|    5|2020-07|2020-07-01|
    |         1|Available|    8|2020-08|2020-08-01|
    |         1|  Limited|    8|2020-12|2020-12-01|
    |         2|  Limited|    1|2020-09|2020-09-01|
    |         2|  Limited|    3|2020-12|2020-12-01|
    +----------+---------+-----+-------+----------+

Create a WindowSpec w1 and use the window aggregate function lead to find the next date over (w1), shifted back one month, to set the end of each date range:

    w1 = Window.partitionBy('product_id').orderBy('date')
    df1 = df0.withColumn('end_date', F.coalesce(F.add_months(F.lead('date').over(w1), -1), 'date'))
    df1.show()

    +----------+---------+-----+-------+----------+----------+
    |product_id|   status|price|  month|      date|  end_date|
    +----------+---------+-----+-------+----------+----------+
    |         1|Available|    5|2020-07|2020-07-01|2020-07-01|
    |         1|Available|    8|2020-08|2020-08-01|2020-11-01|
    |         1|  Limited|    8|2020-12|2020-12-01|2020-12-01|
    |         2|  Limited|    1|2020-09|2020-09-01|2020-11-01|
    |         2|  Limited|    3|2020-12|2020-12-01|2020-12-01|
    +----------+---------+-----+-------+----------+----------+

Use months_between(end_date, date) to calculate the number of months between the two dates, use the transform function to iterate over sequence(0, #months), creating for each i a named struct with date = add_months(date, i) and the row's price, then explode the array of structs with inline_outer:

    df2 = df1.selectExpr(
        "product_id",
        "status",
        """inline_outer(
             transform(
               sequence(0, int(months_between(end_date, date))),
               i -> (add_months(date, i) as date, price as price)
             )
           )"""
    )

    df2.show()

    +----------+---------+----------+-----+
    |product_id|   status|      date|price|
    +----------+---------+----------+-----+
    |         1|Available|2020-07-01|    5|
    |         1|Available|2020-08-01|    8|
    |         1|Available|2020-09-01|    8|
    |         1|Available|2020-10-01|    8|
    |         1|Available|2020-11-01|    8|
    |         1|  Limited|2020-12-01|    8|
    |         2|  Limited|2020-09-01|    1|
    |         2|  Limited|2020-10-01|    1|
    |         2|  Limited|2020-11-01|    1|
    |         2|  Limited|2020-12-01|    3|
    +----------+---------+----------+-----+
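The expansion that sequence/transform/inline_outer performs on each row can be mimicked for a single (date, end_date) pair in plain Python. This is just a sketch of the arithmetic (months_between on two month-start dates is a whole number, so each row yields that many + 1 monthly copies); the helper names are illustrative:

```python
from datetime import date

def add_months(d, n):
    """Month-start arithmetic, like Spark's add_months on first-of-month dates."""
    y, m = divmod((d.year * 12 + d.month - 1) + n, 12)
    return date(y, m + 1, 1)

def expand(row):
    """Mimic inline_outer(transform(sequence(0, months_between(end, start)), ...))."""
    pid, status, price, start, end = row
    n_months = (end.year - start.year) * 12 + (end.month - start.month)
    return [(pid, status, add_months(start, i), price) for i in range(n_months + 1)]

# the second row of df1 above: valid from 2020-08 through 2020-11
row = (1, "Available", 8, date(2020, 8, 1), date(2020, 11, 1))
rows = expand(row)
# -> four monthly rows, 2020-08-01 through 2020-11-01
```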

Partition the DataFrame above on product_id and order by date to get the row number of each row in df3. Then store the maximum rank value per product_id in a new column max_rank in df4:
    w2 = Window.partitionBy('product_id').orderBy('date')
    df3 = df2.withColumn('rank', F.row_number().over(w2))
    # Schema: DataFrame[product_id: bigint, status: string, date: date, price: bigint, rank: int]
    df3.show()

    +----------+---------+----------+-----+----+
    |product_id|   status|      date|price|rank|
    +----------+---------+----------+-----+----+
    |         1|Available|2020-07-01|    5|   1|
    |         1|Available|2020-08-01|    8|   2|
    |         1|Available|2020-09-01|    8|   3|
    |         1|Available|2020-10-01|    8|   4|
    |         1|Available|2020-11-01|    8|   5|
    |         1|  Limited|2020-12-01|    8|   6|
    |         2|  Limited|2020-09-01|    1|   1|
    |         2|  Limited|2020-10-01|    1|   2|
    |         2|  Limited|2020-11-01|    1|   3|
    |         2|  Limited|2020-12-01|    3|   4|
    +----------+---------+----------+-----+----+

    df4 = df3.groupBy("product_id").agg(F.max('rank').alias('max_rank'))
    # Schema: DataFrame[product_id: bigint, max_rank: int]
    df4.show()

    +----------+--------+
    |product_id|max_rank|
    +----------+--------+
    |         1|       6|
    |         2|       4|
    +----------+--------+
Join the df3 and df4 DataFrames on product_id to get max_rank:
    df5 = df3.join(df4, df3.product_id == df4.product_id, "inner") \
             .select(df3.product_id, df3.status, df3.date, df3.price, df3.rank, df4.max_rank)
    # Schema: DataFrame[product_id: bigint, status: string, date: date, price: bigint, rank: int, max_rank: int]
    df5.show()
    +----------+---------+----------+-----+----+--------+
    |product_id|   status|      date|price|rank|max_rank|
    +----------+---------+----------+-----+----+--------+
    |         1|Available|2020-07-01|    5|   1|       6|
    |         1|Available|2020-08-01|    8|   2|       6|
    |         1|Available|2020-09-01|    8|   3|       6|
    |         1|Available|2020-10-01|    8|   4|       6|
    |         1|Available|2020-11-01|    8|   5|       6|
    |         1|  Limited|2020-12-01|    8|   6|       6|
    |         2|  Limited|2020-09-01|    1|   1|       4|
    |         2|  Limited|2020-10-01|    1|   2|       4|
    |         2|  Limited|2020-11-01|    1|   3|       4|
    |         2|  Limited|2020-12-01|    3|   4|       4|
    +----------+---------+----------+-----+----+--------+

Finally, filter the df5 DataFrame on rank to keep only the data for the most recent 6 months:

    # keep the last 6 ranks per product, i.e. rank must exceed max_rank - 6
    FinalResultDF = df5.filter(F.col('rank') > F.col('max_rank') - 6) \
                       .select(df5.product_id, df5.status, df5.date, df5.price)

    FinalResultDF.show(truncate=False)
    +----------+---------+----------+-----+
    |product_id|status   |date      |price|
    +----------+---------+----------+-----+
    |1         |Available|2020-07-01|5    |
    |1         |Available|2020-08-01|8    |
    |1         |Available|2020-09-01|8    |
    |1         |Available|2020-10-01|8    |
    |1         |Available|2020-11-01|8    |
    |1         |Limited  |2020-12-01|8    |
    |2         |Limited  |2020-09-01|1    |
    |2         |Limited  |2020-10-01|1    |
    |2         |Limited  |2020-11-01|1    |
    |2         |Limited  |2020-12-01|3    |
    +----------+---------+----------+-----+

hfwmuf9z2#

Using Spark SQL:
Given the input DataFrame:

val df = spark.sql(""" with t1 as (
 select 1 c1, 'available' c2, 5 c3, '2019-10' c4 union all
 select 1 c1, 'available' c2, 8 c3, '2020-08' c4 union all
 select 1 c1, 'limited' c2, 8 c3, '2020-10' c4 union all
 select 2 c1, 'limited' c2, 1 c3, '2020-09' c4 union all
 select 2 c1, 'limited' c2, 3 c3, '2020-10' c4
 ) select c1 product_id, c2 status, c3 price, c4 month from t1
""")

df.createOrReplaceTempView("df")
df.show(false)

+----------+---------+-----+-------+
|product_id|status   |price|month  |
+----------+---------+-----+-------+
|1         |available|5    |2019-10|
|1         |available|8    |2020-08|
|1         |limited  |8    |2020-10|
|2         |limited  |1    |2020-09|
|2         |limited  |3    |2020-10|
+----------+---------+-----+-------+

Filter on the date window, i.e. the 6 months from 2020-07 to 2020-12 (both bounds excluded here; they are handled separately below), and store it in df1:

val df1 = spark.sql("""
select * from df where month > '2020-07' and month < '2020-12' 
""")
df1.createOrReplaceTempView("df1")
df1.show(false)

+----------+---------+-----+-------+
|product_id|status   |price|month  |
+----------+---------+-----+-------+
|1         |available|8    |2020-08|
|1         |limited  |8    |2020-10|
|2         |limited  |1    |2020-09|
|2         |limited  |3    |2020-10|
+----------+---------+-----+-------+

Lower bound: get the latest record with month <= '2020-07' and rewrite its month as '2020-07':

val df2 = spark.sql("""
select product_id, status, price, '2020-07' month from df  where (product_id,month) in 
( select product_id, max(month) from df where month <= '2020-07' group by 1 ) 
""")
df2.createOrReplaceTempView("df2")
df2.show(false)

+----------+---------+-----+-------+
|product_id|status   |price|month  |
+----------+---------+-----+-------+
|1         |available|5    |2020-07|
+----------+---------+-----+-------+

Upper bound: get the latest record with month <= '2020-12' and rewrite its month as '2020-12':

val df3 = spark.sql("""
select product_id, status, price, '2020-12' month from df where (product_id, month) in  
( select product_id, max(month) from df where month <= '2020-12' group by 1 ) 
""")
df3.createOrReplaceTempView("df3")
df3.show(false)

+----------+-------+-----+-------+
|product_id|status |price|month  |
+----------+-------+-----+-------+
|1         |limited|8    |2020-12|
|2         |limited|3    |2020-12|
+----------+-------+-----+-------+

Now union all three and store the result in df4:

val df4 = spark.sql("""
select  product_id, status, price,  month from df1  union all 
select  product_id, status, price,  month from df2  union all 
select  product_id, status, price,  month from df3
order by product_id, month
""")
df4.createOrReplaceTempView("df4")
df4.show(false)

+----------+---------+-----+-------+
|product_id|status   |price|month  |
+----------+---------+-----+-------+
|1         |available|5    |2020-07|
|1         |available|8    |2020-08|
|1         |limited  |8    |2020-10|
|1         |limited  |8    |2020-12|
|2         |limited  |1    |2020-09|
|2         |limited  |3    |2020-10|
|2         |limited  |3    |2020-12|
+----------+---------+-----+-------+

Result: use sequence(date1, date2, interval 1 month) to generate an array of dates for the missing months, then explode the array to get the result:

spark.sql("""
select product_id, status, price, month, explode(dt) res_month from 
(
select t1.*, 
case when months_between(lm||'-01',month||'-01')=1.0 then array(month||'-01')
     when month='2020-12' then array(month||'-01')
     else sequence(to_date(month||'-01'), add_months(to_date(lm||'-01'),-1), interval 1 month ) 
end dt 
     from (
            select product_id, status, price, month, 
            lead(month) over(partition by product_id order by month) lm 
            from df4 
          ) t1 
    ) t2 
  order by product_id, res_month
""")
.show(false)

+----------+---------+-----+-------+----------+
|product_id|status   |price|month  |res_month |
+----------+---------+-----+-------+----------+
|1         |available|5    |2020-07|2020-07-01|
|1         |available|8    |2020-08|2020-08-01|
|1         |available|8    |2020-08|2020-09-01|
|1         |limited  |8    |2020-10|2020-10-01|
|1         |limited  |8    |2020-10|2020-11-01|
|1         |limited  |8    |2020-12|2020-12-01|
|2         |limited  |1    |2020-09|2020-09-01|
|2         |limited  |3    |2020-10|2020-10-01|
|2         |limited  |3    |2020-10|2020-11-01|
|2         |limited  |3    |2020-12|2020-12-01|
+----------+---------+-----+-------+----------+
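The case/sequence logic of the final query can be checked in plain Python: for each row, emit months from its own month up to (but not including) the lead month, and for a product's last row emit only its month. This sketch assumes the same '2020-12' window end; the helper names are illustrative:

```python
def next_month(ym):
    """'2020-12' -> '2021-01' (strings sort correctly in this format)."""
    y, m = map(int, ym.split("-"))
    return "%04d-%02d" % ((y + 1, 1) if m == 12 else (y, m + 1))

def months_until(start, stop):
    """Months from start (inclusive) up to stop (exclusive)."""
    out, cur = [], start
    while cur < stop:
        out.append(cur)
        cur = next_month(cur)
    return out

def fill(rows, window_end="2020-12"):
    """rows: one product's sorted [(month, status, price)]. Mimics the
    lead() + sequence() + explode() step of the query above."""
    out = []
    for i, (month, status, price) in enumerate(rows):
        lead = rows[i + 1][0] if i + 1 < len(rows) else None
        stop = lead if lead is not None else next_month(window_end)
        out.extend((m, status, price) for m in months_until(month, stop))
    return out

# product_id = 2 from df4 above
product2 = [("2020-09", "limited", 1), ("2020-10", "limited", 3),
            ("2020-12", "limited", 3)]
filled = fill(product2)
# -> 2020-09 .. 2020-12 with the 2020-11 gap filled
```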
