How to calculate overlapping days per row in a DataFrame in PySpark?

fsi0uk1n  asked 2021-05-27  in Spark
Follow (0) | Answers (2) | Views (358)

I need to calculate the overlapping days per row in a dataframe. The data looks like this:

+-------+-------------------+-------------------+------------------+
|     id|              begin|                end|              days|
+-------+-------------------+-------------------+------------------+
|      1|2019-01-01 00:00:00|2019-01-08 02:10:00| 7.090277777777778|
|      1|2019-02-04 05:28:00|2019-03-05 19:29:00|29.584027777777777|
|      1|2019-06-05 22:18:00|2020-01-01 00:00:00|209.07083333333333|
|      1|2019-05-17 16:25:00|2019-06-05 22:18:00| 19.24513888888889|
|      1|2019-05-03 05:05:00|2019-05-17 16:25:00|14.472222222222221|
|      1|2019-01-08 02:10:00|2019-02-04 05:28:00|           27.1375|
|      1|2019-01-01 00:00:00|2020-01-01 00:00:00|             365.0|
|      1|2019-04-22 18:45:00|2019-05-03 05:05:00|10.430555555555555|
|      1|2019-03-05 19:29:00|2019-04-22 18:45:00| 47.96944444444444|
+-------+-------------------+-------------------+------------------+

Here, the entry running from 2019-01-01 to 2020-01-01 spans the whole year (365 days), and all the other entries overlap with it. I need a function that calculates the total number of days covered, i.e. 365 days for this dataset once the overlapping days are removed.
I have actually solved this in R, but I can't run loops like that in PySpark.
I am looking for output like this:

+-------+-------------------+-------------------+------------------+------------------+
|     id|              begin|                end|              days|           overlap|
+-------+-------------------+-------------------+------------------+------------------+
|1      |2019-01-01 00:00:00|2020-01-01 00:00:00|             365.0|                 0|
|1      |2019-01-01 00:00:00|2019-01-08 02:10:00| 7.090277777777778| 7.090277777777778|
|1      |2019-01-08 02:10:00|2019-02-04 05:28:00|           27.1375|           27.1375|
|1      |2019-02-04 05:28:00|2019-03-05 19:29:00|29.584027777777777|29.584027777777777|
|1      |2019-03-05 19:29:00|2019-04-22 18:45:00| 47.96944444444444| 47.96944444444444|
|1      |2019-04-22 18:45:00|2019-05-03 05:05:00|10.430555555555555|10.430555555555555|
|1      |2019-05-03 05:05:00|2019-05-17 16:25:00|14.472222222222221|14.472222222222221|
|1      |2019-05-17 16:25:00|2019-06-05 22:18:00| 19.24513888888889| 19.24513888888889|
|1      |2019-06-05 22:18:00|2020-01-01 00:00:00|209.07083333333333|209.07083333333333|
+-------+-------------------+-------------------+------------------+------------------+

The dates are never in order, and there are also cases with no overlap at all.
Scenario 2: no overlap

+-------+-------------------+-------------------+-----+-----+
|  id   |              begin|                end| days| over|
+-------+-------------------+-------------------+-----+-----+
|2      |2019-01-01 00:00:00|2019-12-25 00:00:00|358.0|    0|
|2      |2019-12-25 00:00:00|2020-01-01 00:00:00|  7.0|    0|
+-------+-------------------+-------------------+-----+-----+

Scenario 3: partial overlap

+-------+-------------------+-------------------+-----+-----+
|     id|              begin|                end| days| over|
+-------+-------------------+-------------------+-----+-----+
|3      |2019-01-01 00:00:00|2019-12-25 00:00:00|358.0|    0|
|3      |2019-12-20 00:00:00|2020-01-01 00:00:00| 12.0|    5|
+-------+-------------------+-------------------+-----+-----+

Scenario 4: to make it more complex, the first entry spans the first 358 days of 2019. The second entry is completely contained in the first, so all of its days count as overlap. The third entry does not overlap with the second, but overlaps the first by 5 days, hence the 5 days in the "over" column.

+-------+-------------------+-------------------+-----+-----+
|     id|              begin|                end| days| over|
+-------+-------------------+-------------------+-----+-----+
|4      |2019-01-01 00:00:00|2019-12-25 00:00:00|358.0|    0|
|4      |2019-01-01 00:00:00|2019-11-25 00:00:00|328.0|328.0|
|4      |2019-12-20 00:00:00|2020-01-01 00:00:00| 12.0|    5|
+-------+-------------------+-------------------+-----+-----+

Basically, I want to know for how long a particular id was active in total. I can't just take the max and the min and subtract them, because there can be gaps between the periods.
In R, I created another column called "overlap" and checked every pair of rows with the Overlap function inside a for loop.
The R code that produces the desired output:

library(dplyr)      # %>%, filter, arrange, bind_rows
library(DescTools)  # Overlap()

abc <- data.frame()
for (i in id) {
  xyz <- dataset %>% filter(id == i) %>% arrange(begin)
  xyz$overlap <- 0   # initialise the overlap column

  # compare every row j with all later rows and accumulate the overlapping days
  for (j in 1:(nrow(xyz) - 1)) {
    k <- j
    while (k < nrow(xyz)) {
      xyz$overlap[j] <- xyz$overlap[j] +
        Overlap(c(xyz$begin[j], xyz$end[j]), c(xyz$begin[k + 1], xyz$end[k + 1]))
      k <- k + 1
    }
  }
  abc <- bind_rows(abc, xyz)
}
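
A minimal sketch of the aggregate I am after, translated to PySpark (this is only a rough sketch, not a working solution of mine; it assumes a dataframe named df with the columns above, Spark's default yyyy-MM-dd HH:mm:ss timestamp format, and window functions being available). It merges overlapping intervals per id and then sums the covered time:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Hypothetical sketch: merge overlapping [begin, end] intervals per id,
# then sum the lengths of the merged intervals to get the total covered days.
w = Window.partitionBy("id").orderBy("begin")

covered = (
    df
    .withColumn("begin_ts", F.unix_timestamp("begin"))
    .withColumn("end_ts", F.unix_timestamp("end"))
    # running maximum of all earlier ends; a new group starts when the current
    # interval begins after everything seen so far has already ended
    .withColumn("prev_max_end",
                F.max("end_ts").over(w.rowsBetween(Window.unboundedPreceding, -1)))
    .withColumn("new_group",
                F.when(F.col("prev_max_end").isNull() |
                       (F.col("begin_ts") > F.col("prev_max_end")), 1).otherwise(0))
    .withColumn("group_id", F.sum("new_group").over(w))
    .groupBy("id", "group_id")
    .agg(F.min("begin_ts").alias("g_begin"), F.max("end_ts").alias("g_end"))
    .groupBy("id")
    .agg((F.sum(F.col("g_end") - F.col("g_begin")) / (24 * 3600)).alias("covered_days"))
)
covered.show()

For the sample data above this should return 365.0 for id 1, but I would still like the per-row overlap column as well.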

I am still learning PySpark and need help with this.
Response to @murtihash's snippet:
Hi, this looks closer, but it is still not the result I am after. Output of the code:

+-------+-------------------+-------------------+-----------------+-------+
|     id|              begin|                end|             days|overlap|
+-------+-------------------+-------------------+-----------------+-------+
|7777777|2019-01-05 01:00:00|2019-04-04 00:00:00|88.95833333333333|      0|
|7777777|2019-04-04 00:00:00|2019-07-11 00:00:00|             98.0|      0|
|7777777|2019-07-11 00:00:00|2019-09-17 00:00:00|             68.0|      1|
|7777777|2019-09-17 00:00:00|2019-09-19 22:01:00|2.917361111111111|      0|
|7777777|2019-09-19 22:01:00|2020-01-01 00:00:00|103.0826388888889|     -1|
|7777777|2019-09-19 22:01:00|2020-01-01 00:00:00|103.0826388888889|     -1|
+-------+-------------------+-------------------+-----------------+-------+

The desired output should be:

+-------+-------------------+-------------------+-----------------+-------+
|     id|              begin|                end|             days|overlap|
+-------+-------------------+-------------------+-----------------+-------+
|7777777|2019-01-05 01:00:00|2019-04-04 00:00:00|88.95833333333333|      0|
|7777777|2019-04-04 00:00:00|2019-07-11 00:00:00|             98.0|      0|
|7777777|2019-07-11 00:00:00|2019-09-17 00:00:00|             68.0|      0|
|7777777|2019-09-17 00:00:00|2019-09-19 22:01:00|2.917361111111111|      0|
|7777777|2019-09-19 22:01:00|2020-01-01 00:00:00|103.0826388888889|103.082|
|7777777|2019-09-19 22:01:00|2020-01-01 00:00:00|103.0826388888889|      0|
+-------+-------------------+-------------------+-----------------+-------+

Explanation: the first four rows have no overlap. The fifth and sixth rows cover exactly the same period (and do not overlap any other rows), so exactly one of them, either the fifth or the sixth, should get an overlap of 103.08 days.
UPDATE: the snippet cannot handle this particular scenario. Output of @murtihash's code:

+-------+-------------------+-------------------+------------------+-------+
|  imono|              begin|                end|              days|overlap|
+-------+-------------------+-------------------+------------------+-------+
|9347774|2019-01-01 00:00:00|2019-01-08 02:10:00| 7.090277777777778|    0.0|
|9347774|2019-01-08 02:10:00|2019-02-04 05:28:00|           27.1375|    0.0|
|9347774|2019-02-04 05:28:00|2019-03-05 19:29:00|29.584027777777777|    0.0|
|9347774|2019-03-05 19:29:00|2019-04-22 18:45:00| 47.96944444444444|    0.0|
|9347774|2019-04-22 18:45:00|2019-05-03 05:05:00|10.430555555555555|    0.0|
|9347774|2019-05-03 05:05:00|2019-05-17 16:25:00|14.472222222222221|    0.0|
|9347774|2019-05-17 16:25:00|2019-06-05 22:18:00| 19.24513888888889|    0.0|
|9347774|2019-01-01 00:00:00|2020-01-01 00:00:00|             365.0|    7.0|
|9347774|2019-06-05 22:18:00|2020-01-01 00:00:00|209.07083333333333|    0.0|
+-------+-------------------+-------------------+------------------+-------+

Desired output: either this

+-------+-------------------+-------------------+------------------+-------+
|  imono|              begin|                end|              days|overlap|
+-------+-------------------+-------------------+------------------+-------+
|9347774|2019-01-01 00:00:00|2019-01-08 02:10:00| 7.090277777777778|    0.0|
|9347774|2019-01-08 02:10:00|2019-02-04 05:28:00|           27.1375|    0.0|
|9347774|2019-02-04 05:28:00|2019-03-05 19:29:00|29.584027777777777|    0.0|
|9347774|2019-03-05 19:29:00|2019-04-22 18:45:00| 47.96944444444444|    0.0|
|9347774|2019-04-22 18:45:00|2019-05-03 05:05:00|10.430555555555555|    0.0|
|9347774|2019-05-03 05:05:00|2019-05-17 16:25:00|14.472222222222221|    0.0|
|9347774|2019-05-17 16:25:00|2019-06-05 22:18:00| 19.24513888888889|    0.0|
|9347774|2019-01-01 00:00:00|2020-01-01 00:00:00|             365.0|    365|
|9347774|2019-06-05 22:18:00|2020-01-01 00:00:00|209.07083333333333|    0.0|
+-------+-------------------+-------------------+------------------+-------+

or this:

+-------+-------------------+-------------------+------------------+-------+
|  imono|              begin|                end|              days|overlap|
+-------+-------------------+-------------------+------------------+-------+
|9347774|2019-01-01 00:00:00|2019-01-08 02:10:00| 7.090277777777778|    7.1|
|9347774|2019-01-08 02:10:00|2019-02-04 05:28:00|           27.1375|   27.1|
|9347774|2019-02-04 05:28:00|2019-03-05 19:29:00|29.584027777777777|   29.5|
|9347774|2019-03-05 19:29:00|2019-04-22 18:45:00| 47.96944444444444|   48.0|
|9347774|2019-04-22 18:45:00|2019-05-03 05:05:00|10.430555555555555|   10.4|
|9347774|2019-05-03 05:05:00|2019-05-17 16:25:00|14.472222222222221|   14.5|
|9347774|2019-05-17 16:25:00|2019-06-05 22:18:00| 19.24513888888889|   19.2|
|9347774|2019-01-01 00:00:00|2020-01-01 00:00:00|             365.0|    0.0|
|9347774|2019-06-05 22:18:00|2020-01-01 00:00:00|209.07083333333333|  209.1|
+-------+-------------------+-------------------+------------------+-------+

Explanation: the second-to-last entry spans the whole year and all the other entries overlap with it. So either that entry should get overlap = 365, or all the other entries should have their days counted as overlap and that entry should get 0 overlapping days.
UPDATE 2: the snippet cannot handle this particular scenario either. Output from @murtihash's code snippet (update2):

+-------+-------------------+-------------------+------------------+-------+
|  imono|              begin|                end|              days|overlap|
+-------+-------------------+-------------------+------------------+-------+
|9395123|2019-01-19 05:01:00|2019-02-06 00:00:00|17.790972222222223|   17.0|
|9395123|2019-02-06 00:00:00|2019-06-17 00:00:00|             131.0|    0.0|
|9395123|2019-01-19 05:01:00|2020-01-01 00:00:00| 346.7909722222222|    0.0|
|9395123|2019-06-17 00:00:00|2020-01-01 00:00:00|             198.0|    0.0|
+-------+-------------------+-------------------+------------------+-------+

Desired output:

+-------+-------------------+-------------------+------------------+-------+
|  id   |              begin|                end|              days|overlap|
+-------+-------------------+-------------------+------------------+-------+
|8888888|2019-01-19 05:01:00|2019-02-06 00:00:00|17.790972222222223|   17.8|
|8888888|2019-02-06 00:00:00|2019-06-17 00:00:00|             131.0|    0.0|
|8888888|2019-01-19 05:01:00|2020-01-01 00:00:00| 346.7909722222222|    329|
|8888888|2019-06-17 00:00:00|2020-01-01 00:00:00|             198.0|    0.0|
+-------+-------------------+-------------------+------------------+-------+

I don't really understand what your snippet is doing, so I can't adapt it for my purposes myself. Thanks for your help!


z4bn682m1#

For Spark 2.4+, you can use sequence (to generate the date ranges), collect_list, and a combination of array functions and higher-order functions to get the desired overlap.

df.show()  # sample dataframe

# +---+-------------------+-------------------+-----+
# | id|              begin|                end| days|
# +---+-------------------+-------------------+-----+
# |  2|2019-01-01 00:00:00|2019-12-25 00:00:00|358.0|
# |  2|2019-12-25 00:00:00|2020-01-01 00:00:00|  7.0|
# |  3|2019-01-01 00:00:00|2019-12-25 00:00:00|358.0|
# |  3|2019-12-20 00:00:00|2020-01-01 00:00:00| 12.0|
# |  4|2019-01-01 00:00:00|2019-12-25 00:00:00|358.0|
# |  4|2019-01-01 00:00:00|2019-11-25 00:00:00|328.0|
# |  4|2019-12-20 00:00:00|2020-01-01 00:00:00| 12.0|
# +---+-------------------+-------------------+-----+

from pyspark.sql import functions as F
from pyspark.sql.window import Window

w1=Window().partitionBy("id").orderBy("begin")

# seq  : every day covered by this row's own [begin, end] range
# seq1 : all days covered by the *other* overlapping rows of the same id
# overlap = size of the intersection of the two, minus the shared boundary day
df.withColumn("seq", F.expr("""sequence(to_timestamp(begin), to_timestamp(end),interval 1 day)"""))\
  .withColumn("seq1", F.expr("""flatten(filter(collect_list(seq) over\
                                (partition by id),x-> arrays_overlap(x,seq)==True and seq!=x))"""))\
  .withColumn("overlap", F.when(F.row_number().over(w1)==1, F.lit(0))\
              .otherwise(F.size(F.array_intersect("seq","seq1"))-1)).orderBy("id","end").drop("seq","seq1").show()

# +---+-------------------+-------------------+-----+-------+
# | id|              begin|                end| days|overlap|
# +---+-------------------+-------------------+-----+-------+
# |  2|2019-01-01 00:00:00|2019-12-25 00:00:00|358.0|      0|
# |  2|2019-12-25 00:00:00|2020-01-01 00:00:00|  7.0|      0|
# |  3|2019-01-01 00:00:00|2019-12-25 00:00:00|358.0|      0|
# |  3|2019-12-20 00:00:00|2020-01-01 00:00:00| 12.0|      5|
# |  4|2019-01-01 00:00:00|2019-11-25 00:00:00|328.0|    328|
# |  4|2019-01-01 00:00:00|2019-12-25 00:00:00|358.0|      0|
# |  4|2019-12-20 00:00:00|2020-01-01 00:00:00| 12.0|      5|
# +---+-------------------+-------------------+-----+-------+
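
To illustrate the idea (this toy example is not part of the solution above; it assumes an active SparkSession named spark): each range is expanded with sequence into the list of daily timestamps it covers, and the size of the array_intersect of two such lists, minus the shared boundary day, gives the overlapping days.

# Scenario 3 from the question: 2019-12-20..2020-01-01 overlaps 2019-01-01..2019-12-25 by 5 days
spark.sql("""
  SELECT size(array_intersect(
           sequence(to_timestamp('2019-12-20 00:00:00'), to_timestamp('2020-01-01 00:00:00'), interval 1 day),
           sequence(to_timestamp('2019-01-01 00:00:00'), to_timestamp('2019-12-25 00:00:00'), interval 1 day)
         )) - 1 AS overlap_days
""").show()

# +------------+
# |overlap_days|
# +------------+
# |           5|
# +------------+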

UPDATE:
This should cover all the cases:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

w1=Window().partitionBy("id").orderBy("begin")
w2=Window().partitionBy("id","begin","end").orderBy("begin")
w3=Window().partitionBy("id","begin","end")
w4=Window().partitionBy("id","begin","end","maxrownum").orderBy("begin")

# maxrownum/rowNum detect rows that share exactly the same (begin, end) range,
# so duplicated rows are charged as overlap instead of being counted twice
(df.withColumn("seq", F.expr("""sequence(to_timestamp(begin), to_timestamp(end),interval 1 day)"""))
   .withColumn('maxrownum', F.max(F.row_number().over(w2)).over(w3))
   .withColumn('rowNum', F.row_number().over(w4))
   .withColumn("seq1", F.expr("""flatten(filter(collect_list(seq) over
                                 (partition by id order by begin),x-> arrays_overlap(x,seq)==True and seq!=x))"""))
   .withColumn("overlap", F.when(F.row_number().over(w1)==1, F.lit(0))
               .when(F.size(F.array_intersect("seq","seq1"))!=0, F.size(F.array_intersect("seq","seq1"))-1)
               .when((F.col("maxrownum")!=1)&(F.col("rowNum")<F.col("maxrownum")), F.col("days"))
               .otherwise(F.lit(0)))
   .orderBy("id","end").drop("seq","seq1","maxrownum","rowNum").show())

+-------+-------------------+-------------------+-----------------+-----------------+
|     id|              begin|                end|             days|          overlap|
+-------+-------------------+-------------------+-----------------+-----------------+
|7777777|2019-01-05 01:00:00|2019-04-04 00:00:00|88.95833333333333|              0.0|
|7777777|2019-04-04 00:00:00|2019-07-11 00:00:00|             98.0|              0.0|
|7777777|2019-07-11 00:00:00|2019-09-17 00:00:00|             68.0|              0.0|
|7777777|2019-09-17 00:00:00|2019-09-19 22:01:00|2.917361111111111|              0.0|
|7777777|2019-09-19 22:01:00|2020-01-01 00:00:00|103.0826388888889|103.0826388888889|
|7777777|2019-09-19 22:01:00|2020-01-01 00:00:00|103.0826388888889|              0.0|
+-------+-------------------+-------------------+-----------------+-----------------+

UPDATE2:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

w1=Window().partitionBy("id").orderBy("begin")
w2=Window().partitionBy("id","begin","end").orderBy("begin")
w3=Window().partitionBy("id","begin","end")
w4=Window().partitionBy("id","begin","end","maxrownum").orderBy("begin")

(df.withColumn("seq", F.expr("""sequence(to_timestamp(begin), to_timestamp(end),interval 1 day)"""))
   .withColumn('maxrownum', F.max(F.row_number().over(w2)).over(w3))
   .withColumn('rowNum', F.row_number().over(w4))
   .withColumn("seq1", F.expr("""flatten(filter(collect_list(seq) over
                                 (partition by id),x-> arrays_overlap(x,seq)==True and seq!=x))"""))
   .withColumn("overlap", F.when(F.row_number().over(w1)==1, F.lit(0))
               .when(F.size(F.array_intersect("seq","seq1"))!=0, F.size(F.array_intersect("seq","seq1"))-1)
               .when((F.col("maxrownum")!=1)&(F.col("rowNum")<F.col("maxrownum")), F.col("days"))
               .when(F.col("maxrownum")==1, F.col("days"))
               .otherwise(F.lit(0)))
   .replace(1,0)  # replace any value of 1 with 0 (drops one-day boundary "overlaps")
   .orderBy("id","end").drop("seq","seq1","rowNum","maxrownum").show())

+-------+-------------------+-------------------+------------------+------------------+
|     id|              begin|                end|              days|           overlap|
+-------+-------------------+-------------------+------------------+------------------+
|9347774|2019-01-01 00:00:00|2019-01-08 02:10:00| 7.090277777777778|               7.0|
|9347774|2019-01-08 02:10:00|2019-02-04 05:28:00|           27.1375|           27.1375|
|9347774|2019-02-04 05:28:00|2019-03-05 19:29:00|29.584027777777777|29.584027777777777|
|9347774|2019-03-05 19:29:00|2019-04-22 18:45:00| 47.96944444444444| 47.96944444444444|
|9347774|2019-04-22 18:45:00|2019-05-03 05:05:00|10.430555555555555|10.430555555555555|
|9347774|2019-05-03 05:05:00|2019-05-17 16:25:00|14.472222222222221|14.472222222222221|
|9347774|2019-05-17 16:25:00|2019-06-05 22:18:00| 19.24513888888889| 19.24513888888889|
|9347774|2019-01-01 00:00:00|2020-01-01 00:00:00|             365.0|               0.0|
|9347774|2019-06-05 22:18:00|2020-01-01 00:00:00|209.07083333333333|209.07083333333333|
+-------+-------------------+-------------------+------------------+------------------+
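
(Not part of the original answer: if a per-row overlap column like this is acceptable, the total number of distinct covered days per id, which is what the question is ultimately after, could then be obtained as sum(days) - sum(overlap). This assumes each shared day is counted as overlap exactly once, and that the result of the pipeline above is kept in a dataframe named result, a hypothetical name since the snippet above only calls .show().)

result.groupBy("id").agg(
    (F.sum("days") - F.sum("overlap")).alias("total_days")  # e.g. 365.0 for id 9347774
).show()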

c9qzyr3d2#

Check the solution below and see if it works for you.

import pyspark.sql.functions as F

df = sc.parallelize([["1","2019-01-01 00:00:00","2019-01-08 02:10:00","7.090277777777778"],
["1","2019-02-04 05:28:00","2019-03-05 19:29:00","29.584027777777777"],
["1","2019-06-05 22:18:00","2020-01-01 00:00:00","209.07083333333333"],
["1","2019-05-17 16:25:00","2019-06-05 22:18:00","19.24513888888889"],
["1","2019-05-03 05:05:00","2019-05-17 16:25:00","14.472222222222221"],
["1","2019-01-08 02:10:00","2019-02-04 05:28:00","27.1375"],
["1","2019-01-01 00:00:00","2020-01-01 00:00:00","365.0"],
["1","2019-04-22 18:45:00","2019-05-03 05:05:00","10.430555555555555"],
["1","2019-03-05 19:29:00","2019-04-22 18:45:00","47.96944444444444"]]).toDF(("id","begin","end","days"))

df.withColumn("overlap", ((F.unix_timestamp(col("end")).cast("long") - F.unix_timestamp(col("begin")).cast("long"))/(24*3600))).show()

+---+-------------------+-------------------+------------------+------------------+
| id|              begin|                end|              days|           overlap|
+---+-------------------+-------------------+------------------+------------------+
|  1|2019-01-01 00:00:00|2019-01-08 02:10:00| 7.090277777777778| 7.090277777777778|
|  1|2019-02-04 05:28:00|2019-03-05 19:29:00|29.584027777777777|29.584027777777777|
|  1|2019-06-05 22:18:00|2020-01-01 00:00:00|209.07083333333333|          209.1125|
|  1|2019-05-17 16:25:00|2019-06-05 22:18:00| 19.24513888888889| 19.24513888888889|
|  1|2019-05-03 05:05:00|2019-05-17 16:25:00|14.472222222222221|14.472222222222221|
|  1|2019-01-08 02:10:00|2019-02-04 05:28:00|           27.1375|           27.1375|
|  1|2019-01-01 00:00:00|2020-01-01 00:00:00|             365.0|             365.0|
|  1|2019-04-22 18:45:00|2019-05-03 05:05:00|10.430555555555555|10.430555555555555|
|  1|2019-03-05 19:29:00|2019-04-22 18:45:00| 47.96944444444444| 47.92777777777778|
+---+-------------------+-------------------+------------------+------------------+
