HBase distributed scan issue with Spark 2

k5ifujac · posted 2021-06-09 in HBase

We perform HBase operations programmatically from a Spark/Scala job. We recently migrated from Spark 1.6 to Spark 2.3; the HBase version (1.2) is the same in both cases. Since the migration, distributed HBase scan operations fail with the following error:

Exception in thread "main" org.apache.hadoop.hbase.DoNotRetryIOException: /0.0.0.0:60020 is unable to read call parameter from client ; java.lang.UnsupportedOperationException: GetRegionLoad
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
    at java.lang.reflect.Constructor.newInstance(Unknown Source)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:359)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:336)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getRegionMetrics(HBaseAdmin.java:2129)
    at org.apache.hadoop.hbase.mapreduce.RegionSizeCalculator.init(RegionSizeCalculator.java:82)
    at org.apache.hadoop.hbase.mapreduce.RegionSizeCalculator.<init>(RegionSizeCalculator.java:61)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.oneInputSplitPerRegion(TableInputFormatBase.java:294)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:257)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:254)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:127)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1162)
    at com.amobee.spark.dps.cdp.util.CopyHbaseTableUtil$.main(CopyHbaseTableUtil.scala:53)
    at com.amobee.spark.dps.cdp.util.CopyHbaseTableUtil.main(CopyHbaseTableUtil.scala)

Any help would be appreciated.
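For context, the trace shows the failure originating from an RDD.count over a newAPIHadoopRDD read with TableInputFormat (CopyHbaseTableUtil.scala:53). The actual code is not shown above, but a minimal sketch of that kind of read looks like this (the object name, table name, and session setup are assumptions for illustration):

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.sql.SparkSession

object DistributedScanSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("hbase-distributed-scan").getOrCreate()

    val conf = HBaseConfiguration.create()
    conf.set(TableInputFormat.INPUT_TABLE, "some_table") // hypothetical table name

    // getSplits() runs on the driver here; with HBase 2.x client jars it builds a
    // RegionSizeCalculator, which calls Admin.getRegionMetrics and thereby issues
    // the GetRegionLoad RPC seen in the stack trace.
    val rdd = spark.sparkContext.newAPIHadoopRDD(
      conf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

    println(rdd.count()) // fails as above when the region servers are HBase 1.2
    spark.stop()
  }
}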

yvt65v4c · Answer #1

In my experience, this operation is not supported by HBase versions below 2.x. The HBase 2.x client jars pulled in with your Spark 2.3 build compute input splits via Admin.getRegionMetrics, which issues a GetRegionLoad RPC that a 1.2 region server does not implement — hence the UnsupportedOperationException in your trace.
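If upgrading the cluster (or pinning the client jars back to a 1.x line) is not an option, one workaround sketch: RegionSizeCalculator checks the hbase.regionsizecalculator.enable flag before doing any work, so disabling it should make TableInputFormat.getSplits skip the getRegionMetrics/GetRegionLoad call entirely. This assumes the hbase-mapreduce version on your classpath honors that flag; the table name below is again a placeholder.

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

val conf = HBaseConfiguration.create()
// Skip RegionSizeCalculator.init(), so getSplits never issues the
// GetRegionLoad RPC that the 1.2 region server rejects.
conf.setBoolean("hbase.regionsizecalculator.enable", false)
conf.set(TableInputFormat.INPUT_TABLE, "some_table") // hypothetical table name

You still get one input split per region; the trade-off is that Spark loses the region-size hints normally used to weight the splits.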
