Unable to access an HBase database (running on a secured cluster) from Eclipse?

v1uwarro · posted 2021-06-10 in HBase

I am trying to connect to an HBase database from an Eclipse Scala program on Windows.
The cluster is secured with Kerberos authentication, so the program fails to connect to HBase.
Each time we have to build a jar file and run it on the cluster, but that is no use for development and debugging.
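For what it's worth, a Kerberos-secured cluster requires the client JVM itself to authenticate before any HBase call can succeed. A minimal sketch of a keytab login from the driver, assuming a principal and keytab exported from your cluster (both are placeholders here):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.security.UserGroupInformation

    // Build an HBase configuration and tell the Hadoop security layer
    // that the cluster uses Kerberos.
    val conf: Configuration = HBaseConfiguration.create()
    conf.set("hadoop.security.authentication", "kerberos")
    conf.set("hbase.security.authentication", "kerberos")
    UserGroupInformation.setConfiguration(conf)

    // Log in with a keytab; the principal and the local keytab path are
    // placeholders for your own environment.
    UserGroupInformation.loginUserFromKeytab(
      "devuser@EXAMPLE.COM", "C:/keytabs/devuser.keytab")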
How do I get hbase-site.xml onto the classpath?
I downloaded the *site.xml files and tried adding hbase-site.xml, core-site.xml and hdfs-site.xml as a source folder, and also tried adding the folder as an external class folder on the project build path, but neither worked. How should I do this?
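One workaround that sidesteps the Eclipse classpath entirely is to load the downloaded files explicitly when the configuration is created; a sketch, with the local paths as placeholders:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.hbase.HBaseConfiguration

    val conf: Configuration = HBaseConfiguration.create()
    // Load the cluster's *-site.xml files from explicit local paths
    // instead of relying on classpath discovery (paths are placeholders).
    conf.addResource(new Path("C:/cluster-conf/core-site.xml"))
    conf.addResource(new Path("C:/cluster-conf/hdfs-site.xml"))
    conf.addResource(new Path("C:/cluster-conf/hbase-site.xml"))

This way the settings travel with the Configuration object rather than depending on how Eclipse assembles the runtime classpath.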
Also, is there anything from hbase-site.xml that we can set on the SQLContext? I am using a SQLContext to read the HBase table through the Hortonworks connector.
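The stack trace below shows the shc datasource (org.apache.spark.sql.execution.datasources.hbase), so for reference, a sketch of a read through that connector on Spark 1.x; the catalog's table and column names are placeholders for your own schema:

    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

    // Catalog describing the HBase table layout (names are placeholders).
    val catalog =
      """{
        |  "table": {"namespace": "default", "name": "my_table"},
        |  "rowkey": "key",
        |  "columns": {
        |    "rowkey": {"cf": "rowkey", "col": "key",   "type": "string"},
        |    "value":  {"cf": "cf1",    "col": "value", "type": "string"}
        |  }
        |}""".stripMargin

    def readTable(sqlContext: SQLContext) =
      sqlContext.read
        .options(Map(HBaseTableCatalog.tableCatalog -> catalog))
        .format("org.apache.spark.sql.execution.datasources.hbase")
        .load()

Note that the connector builds its own HBaseConfiguration from whatever is on the classpath, so the *-site.xml files generally have to be visible there too; depending on the shc release there may also be an HBaseRelation.HBASE_CONFIGFILE option for pointing at hbase-site.xml directly, but check whether your version supports it.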
The error log is:

Exception in thread "main" java.io.IOException: java.lang.reflect.InvocationTargetException
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.init(HBaseResources.scala:93)
       at org.apache.spark.sql.execution.datasources.hbase.ReferencedResource$class.liftedTree1$1(HBaseResources.scala:57)
       at org.apache.spark.sql.execution.datasources.hbase.ReferencedResource$class.acquire(HBaseResources.scala:54)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.acquire(HBaseResources.scala:88)
       at org.apache.spark.sql.execution.datasources.hbase.ReferencedResource$class.releaseOnException(HBaseResources.scala:74)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.releaseOnException(HBaseResources.scala:88)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.<init>(HBaseResources.scala:108)
       at org.apache.spark.sql.execution.datasources.hbase.HBaseTableScanRDD.getPartitions(HBaseTableScan.scala:60)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
       at scala.Option.getOrElse(Option.scala:120)
       at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
       at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
       at scala.Option.getOrElse(Option.scala:120)
       at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
       at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
       at scala.Option.getOrElse(Option.scala:120)
       at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
       at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:190)
       at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
       at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
       at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
       at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
       at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
       at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
       at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
       at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
       at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
       at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
       at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
       at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
       at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
       at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
       at org.apache.spark.sql.DataFrame.show(DataFrame.scala:350)
       at org.apache.spark.sql.DataFrame.show(DataFrame.scala:311)
       at org.apache.spark.sql.DataFrame.show(DataFrame.scala:319)
       at scb.HBaseBroadcast$.main(HBaseBroadcast.scala:106)
       at scb.HBaseBroadcast.main(HBaseBroadcast.scala)
Caused by: java.lang.reflect.InvocationTargetException
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
       ... 44 more
Caused by: java.lang.AbstractMethodError: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
       at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
       at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
       at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
       at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:147)
       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
       at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
       at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
       at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
       at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
       at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:241)
       at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
       at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
       at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
       at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879)
       at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
       ... 49 more

idfiyjo8 · 1#

You have a Hadoop HDFS version conflict: the AbstractMethodError on ConfiguredFailoverProxyProvider.getProxy() means the hadoop-hdfs classes on your development classpath were built against a different Hadoop release than the one the server runs (the return type of getProxy() changed between Hadoop 2.x releases, so mismatched jars fail only at runtime). Check the version on the server against the version on your development classpath and align them.
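In practice that means pinning the client jars in your build to exactly what the cluster runs; a sketch of the relevant sbt fragment, with placeholder version numbers:

    // build.sbt -- the version numbers are placeholders; use the exact
    // versions the cluster reports (e.g. via `hadoop version`).
    val hadoopVersion = "2.7.3"
    val hbaseVersion  = "1.1.2"

    libraryDependencies ++= Seq(
      "org.apache.hadoop" % "hadoop-client" % hadoopVersion,
      "org.apache.hadoop" % "hadoop-hdfs"   % hadoopVersion,
      "org.apache.hbase"  % "hbase-client"  % hbaseVersion
    )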
