HBase connection problem: cannot create tables

ukdjmx9f posted on 2021-05-30 in Hadoop

I am running a multi-node cluster with hadoop-1.0.3 (on both nodes), hbase-0.94.2 (on both nodes), and zookeeper-3.4.6 (on the master only).
master: 192.168.0.1, slave: 192.168.0.2
HBase does not run properly: I hit errors whenever I try to create a table in HBase, and of course I cannot reach the HBase status UI at http://master:60010 either. Please help!
Here are all my configuration files:
(hadoop conf) core-site.xml (identical on master and slave):

<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
 </property>
</configuration>

(hbase conf) hbase-site.xml:

<configuration>

<property>
      <name>hbase.rootdir</name>
      <value>hdfs://master:54310/hbase</value>
</property>

<property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
</property>

<property>
      <name>hbase.zookeeper.quorum</name>
      <value>master,slave</value>
</property>

<property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2222</value>
</property>

<property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/usr/local/hadoop/zookeeper</value>
</property>

</configuration>
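One detail worth noting in this file: it sets hbase.zookeeper.property.clientPort to 2222, yet the region-server log further down shows HBase dialing master/192.168.0.1:2181, ZooKeeper's default client port. If the standalone zookeeper-3.4.6 on the master is actually listening on 2181 (check clientPort in its zoo.cfg), one possible fix is to align the HBase setting with it, along these lines (the value 2181 is an assumption that must be verified against zoo.cfg):

```xml
<!-- Hypothetical fix: match the port ZooKeeper actually listens on;
     confirm clientPort in zoo.cfg on the master before changing this. -->
<property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
</property>
```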

/etc/hosts:

192.168.0.1 master
192.168.0.2 slave

regionservers:

master
slave

Here is the log file, hbase-hduser-regionserver-master.log:

2014-12-24 02:12:13,190 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
2014-12-24 02:12:14,002 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server master/192.168.0.1:2181
2014-12-24 02:12:14,003 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2014-12-24 02:12:14,004 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to master/192.168.0.1:2181, initiating session
2014-12-24 02:12:14,005 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2014-12-24 02:12:14,675 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
2014-12-24 02:12:14,676 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server master,60020,1419415915643: Initialization of RS failed.  Hence aborting RS.
java.io.IOException: Received the shutdown message while waiting.
    at org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669)
    at java.lang.Thread.run(Thread.java:745)
2014-12-24 02:12:14,676 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2014-12-24 02:12:14,676 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization of RS failed.  Hence aborting RS.
2014-12-24 02:12:14,683 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer MXBean
2014-12-24 02:12:14,689 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main]
2014-12-24 02:12:14,689 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook
2014-12-24 02:12:14,690 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown hook thread.
2014-12-24 02:12:14,691 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished.
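The java.net.NoRouteToHostException in this log usually points at a host firewall (e.g. iptables) dropping packets between the nodes, rather than at an HBase misconfiguration. A minimal sketch of a TCP reachability probe is below; the host/port pairs are assumptions taken from the configs in the question (ZooKeeper client port, the NameNode port from core-site.xml, and the HBase master UI), and any firewall fix itself has to happen at the OS level.

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers "no route to host", "connection refused", DNS failures,
        # and timeouts alike.
        return False

# Ports this cluster is expected to expose (assumed from the question):
for host, port in [("master", 2181), ("master", 54310), ("master", 60010)]:
    status = "open" if can_connect(host, port) else "unreachable"
    print(f"{host}:{port} {status}")
```

If a port shows unreachable from the other node but works locally, the firewall rules on the target host are the first thing to inspect.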
lymnna71 1#

I think that instead of localhost, the core-site.xml file should use master. Also, add the slave node to the slaves file in the hadoop conf directory. The core-site.xml file on both the master and slave nodes should then look like this:

<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
 </property>
</configuration>

And since the ZooKeeper quorum is set to both hosts, both master and slave should be present in the regionservers file on both nodes.
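After changing core-site.xml, the daemons need a restart for the new NameNode address to take effect. A rough sequence for hadoop-1.0.x with hbase-0.94 might look like the following; the $HADOOP_HOME/$HBASE_HOME paths assume the standard tarball layout, and HBase is stopped first because its data lives on HDFS:

```shell
# Assumed standard layout; adjust $HADOOP_HOME / $HBASE_HOME as needed.
$HBASE_HOME/bin/stop-hbase.sh      # stop HBase first (it depends on HDFS)
$HADOOP_HOME/bin/stop-all.sh       # then stop HDFS/MapReduce
$HADOOP_HOME/bin/start-all.sh      # bring HDFS back up
$HBASE_HOME/bin/start-hbase.sh     # then start HBase
$HBASE_HOME/bin/hbase shell        # verify, e.g.: create 't1', 'cf'
```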
