Aggregation over HBase (using shc 1.1.1-2.1-s_2.11) takes too long

bybem2ql asked on 2021-06-09 in HBase

I am ingesting streaming data into HBase, and I have pre-split the HBase table by Kafka partition. The composite rowkey combines the Kafka partition with the timestamp, followed by a few other columns to make it unique. With this approach I get good write throughput, but I also have to aggregate the data daily, and that is very slow.
We have observed that the number of Spark tasks for a groupBy or count equals the total number of regions of the table.
Am I doing something wrong? How can I limit the number of regions for a table in HBase?
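
For reference, the composite rowkey could be assembled roughly like this before writing (a minimal sketch: the field names come from the read catalog further below, while the plain concatenation without padding or delimiters is an assumption about the real job):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, concat}

// Hypothetical sketch: build the composite "key" column from the Kafka
// partition, the event timestamp and the id columns, so that the key is
// unique and sorts by partition first, then by time.
def withRowkey(df: DataFrame): DataFrame =
  df.withColumn("key",
    concat(col("partition"), col("timestamp"), col("id"),
           col("parent_id"), col("resource_id")))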
HBase create statement

create 'default:test', {NAME => 'data', VERSIONS => 1, TTL => '3888000'},
  {SPLITS => ['10000000000000000000000000000000',
              '20000000000000000000000000000000',
              '30000000000000000000000000000000',
              '40000000000000000000000000000000',
              '50000000000000000000000000000000',
              '60000000000000000000000000000000',
              '70000000000000000000000000000000',
              '80000000000000000000000000000000',
              '90000000000000000000000000000000']}

Catalog used for inserts

def catalog = s"""{
  |"table":{"namespace":"default", "name": "test", "tableCoder":"Phoenix"},
  |"rowkey":"key",
  |"columns":{
  |"rowkey":{"cf":"rowkey", "col":"key", "type":"string"},
  |"resource_id":{"cf":"data", "col":"resource_id", "type":"string"},
  |"resource_name":{"cf":"data", "col":"resource_name", "type":"string"},
  |"parent_id":{"cf":"data", "col":"parent_id", "type":"string"},  
  |"parent_name":{"cf":"data", "col":"parent_name", "type":"string"},
  |"id":{"cf":"data", "col":"id", "type":"string"},
  |"name":{"cf":"data", "col":"name", "type":"string"},
  |"timestamp":{"cf":"data", "col":"timestamp", "type":"string"},
  |"readable_timestamp":{"cf":"data", "col":"readable_timestamp", 
    "type":"string"},
  |"value":{"cf":"data", "col":"value", "type":"string"},
  |"partition":{"cf":"data", "col":"partition", "type":"string"}
  |}
|}""".stripMargin

Catalog used for reads

def catalog = s"""{
    |"table":{"namespace":"default", "name": "test", "tableCoder":"Phoenix"},
    |"rowkey":"partition:timestamp:id:parent_id:resource_id",
    |"columns":{   
    |"partition":{"cf":"rowkey", "col":"partition", "type":"string"},
    |"timestamp":{"cf":"rowkey", "col":"timestamp", "type":"string"},
    |"id":{"cf":"rowkey", "col":"id", "type":"string"},
    |"parent_id":{"cf":"rowkey", "col":"parent_id", "type":"string"},
    |"resource_id":{"cf":"rowkey", "col":"resource_id", "type":"string"},
    |"resource_name":{"cf":"data", "col":"resource_name", "type":"string"}, 
    |"parent_name":{"cf":"data", "col":"parent_name", "type":"string"},
    |"name":{"cf":"data", "col":"name", "type":"string"},
    |"value":{"cf":"data", "col":"value", "type":"string"},
    |"readable_timestamp":{"cf":"data", "col":"readable_timestamp", "type":"string"}
    |}
    |}""".stripMargin

For the range scan I use all the partitions together with the time range:

val endrow = "1540080000000"
val startrow = "1539993600000"

df.filter(($"partition"==="0" && ($"timestamp" >= startrow && $"timestamp" <= endrow)) ||($"partition"==="1" && ($"timestamp" >= startrow && $"timestamp" <= endrow))||($"partition"==="2" && ($"timestamp" >= startrow && $"timestamp" <= endrow))||($"partition"==="3" && ($"timestamp" >= startrow && $"timestamp" <= endrow))||($"partition"==="4" && ($"timestamp" >= startrow && $"timestamp" <= endrow))||($"partition"==="5" && ($"timestamp" >= startrow && $"timestamp" <= endrow))||($"partition"==="6" && ($"timestamp" >= startrow && $"timestamp" <= endrow))||($"partition"==="7" && ($"timestamp" >= startrow && $"timestamp" <= endrow))||($"partition"==="8" && ($"timestamp" >= startrow && $"timestamp" <= endrow))||($"partition"==="9" && ($"timestamp" >= startrow && $"timestamp" <= endrow))).count

The filter above produces 830 tasks, which equals the number of regions, and it takes too much time. How can I improve this?
