Spark filter for the last 30 days, best way to improve performance

8yoxcaq7  posted on 2021-06-02 in Hadoop

I have an RDD of records that I convert into a DataFrame. I want to filter it by day on the timestamp column and compute statistics for the last 30 days, filtering columns and counting the results.
The Spark application is very fast until it reaches the for loop below, so I am wondering whether this approach is an anti-pattern and how I can get good performance. Should I use Spark's cartesian, and if so, how?

//FILTER PROJECT RECORDS
// rowkey has the format CLIENT_ID-RECORD_ID, so this keeps only the current client's rows
val clientRecordsDF = recordsDF.filter($"rowkey".contains("" + client_id))
val client_records_total = clientRecordsDF.count()

This is the content of clientRecordsDF:

root
 |-- rowkey: string (nullable = true) //CLIENT_ID-RECORD_ID
 |-- record_type: string (nullable = true)
 |-- device: string (nullable = true)
 |-- timestamp: long (nullable = false) // MILLISECOND
 |-- datestring: string (nullable = true) // yyyyMMdd

[1-575e7f80673a0,login,desktop,1465810816424,20160613]
[1-575e95fc34568,login,desktop,1465816572216,20160613]
[1-575ef88324eb7,registration,desktop,1465841795153,20160613]
[1-575efe444d2be,registration,desktop,1465843268317,20160613]
[1-575e6b6f46e26,login,desktop,1465805679292,20160613]
[1-575e960ee340f,login,desktop,1465816590932,20160613]
[1-575f1128670e7,action,mobile-phone,1465848104423,20160613]
[1-575c9a01b67fb,registration,mobile-phone,1465686529750,20160612]
[1-575dcfbb109d2,registration,mobile-phone,1465765819069,20160612]
[1-575dcbcb9021c,registration,desktop,1465764811593,20160612] 
...

The for loop with bad performance:

import java.util.{Calendar, TimeZone}

val gmt = TimeZone.getTimeZone("GMT") // time zone used for the day boundaries

for (dayCounter <- 1 to 30) {
    //LAST 30 DAYS

    // CREATE DAY TIMESTAMP (start of the day, in milliseconds)
    val cal = Calendar.getInstance(gmt)

    cal.add(Calendar.DATE, -dayCounter)
    cal.set(Calendar.HOUR_OF_DAY, 0)
    cal.set(Calendar.MINUTE, 0)
    cal.set(Calendar.SECOND, 0)
    cal.set(Calendar.MILLISECOND, 0)
    val calTime = cal.getTime()
    val dayTime = cal.getTimeInMillis()

    // end of the same day, in milliseconds
    cal.set(Calendar.HOUR_OF_DAY, 23)
    cal.set(Calendar.MINUTE, 59)
    cal.set(Calendar.SECOND, 59)
    cal.set(Calendar.MILLISECOND, 999)
    val dayTimeEnd = cal.getTimeInMillis()

    //FILTER PROJECT RECORDS
    val dailyClientRecordsDF = clientRecordsDF.filter(
      $"timestamp" >= dayTime && $"timestamp" <= dayTimeEnd
    )
    val daily_client_records = dailyClientRecordsDF.count()

    println("dayCounter " + dayCounter + " records = " + daily_client_records)

    // perform other filters on dailyClientRecordsDF
    // save daily statistics to hbase
}
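
For comparison, the same per-day counts can be produced in a single job by grouping on the datestring column already present in the schema above, instead of running 30 separate filter-and-count jobs. This is only a sketch; the helper name thirtyDaysAgoString is illustrative, and it assumes spark.implicits._ is in scope as in the code above.

import java.time.LocalDate
import java.time.format.DateTimeFormatter

// lower bound formatted as yyyyMMdd, matching the datestring column (illustrative helper)
val thirtyDaysAgoString =
  LocalDate.now().minusDays(30).format(DateTimeFormatter.ofPattern("yyyyMMdd"))

// one Spark job producing a (datestring, count) row per day
val dailyCounts = clientRecordsDF
  .filter($"datestring" >= thirtyDaysAgoString)   // yyyyMMdd strings sort chronologically
  .groupBy($"datestring")
  .count()

dailyCounts.collect().foreach(println)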

6jjcrrmo 1#

This approach follows SQL. First, register a table to query against. Then define a UDF (user-defined function) that maps a timestamp to its day bucket. Finally, filter on the desired date range and group, just as you would in SQL.

// truncate a millisecond timestamp to the start of its day
// (the timestamps in the question are in milliseconds, so blockTime is in milliseconds too)
def mk(timestamp: Long): Long = {
  val blockTime: Long = 3600L * 24 * 1000 // daily
  // val blockTime: Long = 3600L * 1000   // hourly
  timestamp - timestamp % blockTime
}

recordsDF.registerTempTable("client") // define your table
sqlContext.udf.register("makeDaily", (timestamp: Long) => mk(timestamp)) // register your function

val res = sqlContext.sql("""select makeDaily(timestamp) as date, count(*) as count
                            from client
                            where timestamp between 111111 and 222222
                            group by makeDaily(timestamp)""").collect()

Added: for example, counting all records whose record_type is registration within the 30-day window.

sqlContext.sql("""select count(*)
                  from client
                  where record_type = 'registration' and timestamp between 1111 and 2222""")
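
The 1111/2222 (and 111111/222222) values above are placeholders for the window boundaries, expressed in the same unit as the timestamp column. One way to compute them for a rolling 30-day window is sketched below; the names nowMillis, windowStart, and registrations30d are illustrative.

// millisecond boundaries for "the last 30 days", matching the millisecond timestamps above
val nowMillis = System.currentTimeMillis()
val thirtyDaysMillis = 30L * 24 * 3600 * 1000
val windowStart = nowMillis - thirtyDaysMillis

val registrations30d = sqlContext.sql(
  s"""select count(*)
      from client
      where record_type = 'registration'
        and timestamp between $windowStart and $nowMillis""")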

z0qdvdin 2#

In almost all cases you should avoid writing UDFs, because they prevent the Catalyst optimizer from handling the query properly.
Instead, use the built-in SQL functions:

(
  spark.read.table("table_1")
  .join(
    spark.read.table("table_2"), 
    "user_id"
  )
  .where("p_eventdate > current_date() - 30")
)
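
Applied to the DataFrame from the question, the same idea with only built-in functions might look like the sketch below. It assumes the millisecond timestamp column shown in the schema and that spark.implicits._ is in scope; the event_date column name is illustrative.

import org.apache.spark.sql.functions._

// convert the millisecond epoch to a date with built-in functions, keep the last 30 days,
// and count per day in a single job, without any UDF
val dailyCounts = clientRecordsDF
  .withColumn("event_date", to_date(from_unixtime($"timestamp" / 1000)))
  .filter($"event_date" >= date_sub(current_date(), 30))
  .groupBy($"event_date")
  .count()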
