Joining two DataFrames in Scala on a column whose values do not match exactly

gmol1639 · posted 2021-06-25 · tagged Hive

I am trying to join two DataFrames on a column whose values are not exactly the same.
Given below is df1:

+--------+-----+------+
| NUM_ID | TIME|SG1_V |
+--------+-----+------+
|XXXXX01 |1001 |79.0  |
|XXXXX01 |1005 |88.0  |
|XXXXX01 |1010 |99.0  |
|XXXXX01 |1015 |null  |
|XXXXX01 |1020 |100.0 |
|XXXXX02 |1001 |81.0  |
|XXXXX02 |1010 |91.0  |
|XXXXX02 |1050 |93.0  |
|XXXXX02 |1060 |93.0  |
|XXXXX02 |1070 |93.0  |
+--------+-----+------+

Below is df2:

+---------+-----+------+
| NUM_ID  | TIME|SG2_V |
+---------+-----+------+
|XXXXX01  |1001 |  99.0|
|XXXXX01  |1003 |  22.0|
|XXXXX01  |1007 |  85.0|
|XXXXX01  |1011 |  1.0 |

|XXXXX02  |1001 |  22.0|
|XXXXX02  |1009 |  85.0|
|XXXXX02  |1048 |  1.0 |
|XXXXX02  |1052 |  99.0|
+---------+-----+------+

I have to join these two DataFrames on the NUM_ID and TIME columns; the former must match exactly, while the latter may or may not match exactly.
The TIME values in df2 may or may not contain the exact values present in df1. Where there is no exact match, the join should use the highest nearest value available, i.e. the df2 TIME value should be <= the exact value in df1.
This becomes clearer after looking at the expected output shown below.

+--------+-----+------+-----+------+
| NUM_ID | TIME|SG1_V | TIME|SG2_V |
+--------+-----+------+-----+------+
|XXXXX01 |1001 |79.0  |1001 |  99.0|
|XXXXX01 |1005 |88.0  |1003 |  22.0|
|XXXXX01 |1010 |99.0  |1007 |  85.0|
|XXXXX01 |1015 |null  |1011 |  1.0 |
|XXXXX01 |1020 |100.0 |1011 |  1.0 |

|XXXXX02 |1001 |81.0  |1001 |  22.0|
|XXXXX02 |1010 |91.0  |1009 |  85.0|
|XXXXX02 |1050 |93.0  |1048 |  1.0 |
|XXXXX02 |1060 |93.0  |1052 |  99.0|
|XXXXX02 |1070 |93.0  |1052 |  99.0|
+--------+-----+------+-----+------+

For NUM_ID XXXXX01, the TIME value 1005 from df1 is not available in df2, so the join takes the nearest value below 1005, which is 1003.
How can I join in such a way that, when there is no exact match, the nearest value is used instead?
Any pointers are appreciated. Thanks in advance.


monwx1rj (answer 1)

The simple way is to use one of Spark's window functions, row_number() or rank():

scala> spark.sql("""
     |   SELECT * FROM (
     |     SELECT *,
     |       ROW_NUMBER() OVER (PARTITION BY df1.NUM_ID, df1.TIME ORDER BY (df1.TIME - df2.TIME)) rno
     |     FROM df1 JOIN df2 
     |     ON df2.NUM_ID = df1.NUM_ID AND 
     |        df2.TIME  <= df1.TIME
     |   ) T
     | WHERE T.rno = 1
     |""").show()
+-------+----+-----+-------+----+-----+---+
| NUM_ID|TIME|SG1_V| NUM_ID|TIME|SG2_V|rno|
+-------+----+-----+-------+----+-----+---+
|XXXXX01|1001| 79.0|XXXXX01|1001| 99.0|  1|
|XXXXX01|1005| 88.0|XXXXX01|1003| 22.0|  1|
|XXXXX01|1010| 99.0|XXXXX01|1007| 85.0|  1|
|XXXXX01|1015| null|XXXXX01|1011|  1.0|  1|
|XXXXX01|1020|100.0|XXXXX01|1011|  1.0|  1|
|XXXXX02|1001| 81.0|XXXXX02|1001| 22.0|  1|
|XXXXX02|1010| 91.0|XXXXX02|1009| 85.0|  1|
+-------+----+-----+-------+----+-----+---+

scala>
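
The spark.sql query above assumes df1 and df2 are already visible as tables. If they only exist as DataFrames in the session, one way (a minimal sketch, assuming the variables are named df1 and df2) is to register them as temporary views first, without writing them out to Hive:

// Sketch: expose the DataFrames to spark.sql under the table names df1 and df2
df1.createOrReplaceTempView("df1")
df2.createOrReplaceTempView("df2")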

eqzww0vc (answer 2)

If you need to join on two fields, with a specific interval on one of them, you can do the following:

import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types.{DoubleType, IntegerType, StringType, StructField, StructType}

  val spark = SparkSession.builder().master("local[1]").getOrCreate()

  val df1 : DataFrame = spark.createDataFrame(spark.sparkContext.parallelize(Seq(
    Row("XXXXX01", 1001, 79.0),
    Row("XXXXX01", 1005, 88.0),
    Row("XXXXX01", 1010, 99.0),
    Row("XXXXX01", 1015, null),
    Row("XXXXX01", 1020, 100.0),
    Row("XXXXX02", 1001, 81.0))),
    StructType(Seq(StructField("NUM_ID", StringType, false), StructField("TIME", IntegerType, false), StructField("SG1_V", DoubleType, true))))

  val df2 : DataFrame = spark.createDataFrame(spark.sparkContext.parallelize(Seq(
    Row("XXXXX01", 1001, 79.0),
    Row("XXXXX01", 1001, 99.0),
    Row("XXXXX01", 1003, 22.0),
    Row("XXXXX01", 1007, 85.1),
    Row("XXXXX01", 1011, 1.0),
    Row("XXXXX02", 1001, 22.0))),
    StructType(Seq(StructField("NUM_ID", StringType, false), StructField("TIME", IntegerType, false), StructField("SG1_V", DoubleType, false))))

  val interval : Int = 10

  def main(args: Array[String]) : Unit = {
    // Range join: same NUM_ID, and df1.TIME more than `interval` ahead of df2.TIME
    df1.join(df2, (df1("TIME") - df2("TIME") > lit(interval)) && df1("NUM_ID") === df2("NUM_ID")).show()
  }

The result is as follows:

+-------+----+-----+-------+----+-----+
| NUM_ID|TIME|SG1_V| NUM_ID|TIME|SG1_V|
+-------+----+-----+-------+----+-----+
|XXXXX01|1015| null|XXXXX01|1001| 79.0|
|XXXXX01|1015| null|XXXXX01|1001| 99.0|
|XXXXX01|1015| null|XXXXX01|1003| 22.0|
|XXXXX01|1020|100.0|XXXXX01|1001| 79.0|
|XXXXX01|1020|100.0|XXXXX01|1001| 99.0|
|XXXXX01|1020|100.0|XXXXX01|1003| 22.0|
|XXXXX01|1020|100.0|XXXXX01|1007| 85.1|
+-------+----+-----+-------+----+-----+
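
Note that this range join on its own returns every df2 row that satisfies the interval condition, not just the closest one. A sketch (my assumption, combining it with the row_number() window from the first answer and reusing the df1/df2 defined above) of how to keep only the highest df2.TIME that is <= df1.TIME:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}

// Keep, per (NUM_ID, TIME) of df1, only the df2 row whose TIME is closest from below
val w = Window
  .partitionBy(df1("NUM_ID"), df1("TIME"))
  .orderBy(df1("TIME") - df2("TIME"))

df1.join(df2, df1("NUM_ID") === df2("NUM_ID") && df2("TIME") <= df1("TIME"))
  .withColumn("rno", row_number().over(w))
  .filter(col("rno") === 1)
  .show()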

ycggw6v2 (answer 3)

The solution above joins the DataFrames after saving them as Hive tables.
I tried to join the two DataFrames by applying the same logic without saving them to Hive tables, as shown below.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}
import spark.implicits._  // for the $"col" syntax, assuming the SparkSession is named spark

val finalSignals = finalABC.as("df1")
  .join(finalXYZ.as("df2"),
    $"df1.NUM_ID" === $"df2.NUM_ID" && $"df2.TIME" <= $"df1.TIME", "left")
  .withColumn("rno", row_number().over(
    Window.partitionBy($"df1.NUM_ID", $"df1.TIME").orderBy($"df1.TIME" - $"df2.TIME")))
  .select(
    col("df1.NUM_ID").as("NUM_ID"),
    col("df1.TIME"),
    col("df2.NUM_ID").as("NUM_ID2"),
    col("df1.TIME").as("TIME2"),
    col("rno"))
  .filter("rno == 1")

Is this equivalent to the solution provided above?

spark.sql("""
     |   SELECT * FROM (
     |     SELECT *,
     |       ROW_NUMBER() OVER (PARTITION BY df1.NUM_ID, df1.TIME ORDER BY (df1.TIME - df2.TIME)) rno
     |     FROM df1 JOIN df2 
     |     ON df2.NUM_ID = df1.NUM_ID AND 
     |        df2.TIME  <= df1.TIME
     |   ) T
     | WHERE T.rno = 1
     |""")
