hdfs dfsadmin -report shows nothing

ecbunoof posted on 2021-05-29 in Hadoop
[root@master ~]# jps
10197 SecondaryNameNode
10805 Jps
10358 ResourceManager
9998 NameNode

[root@slave1 ~]# jps
5872 NodeManager
5767 DataNode
6186 Jps

[root@slave2 ~]# jps
5859 Jps
5421 DataNode 
5534 NodeManager

As you can see, when I run the jps command on the namenode and on the corresponding worker nodes "slave1" and "slave2", all the services are running.
However, when I run the hdfs dfsadmin -report command, this is what I get:

[root@master ~]# hdfs dfsadmin -report
17/09/01 12:11:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  Configured Capacity: 0 (0 B)
  Present Capacity: 0 (0 B)
  DFS Remaining: 0 (0 B)
  DFS Used: 0 (0 B)
  DFS Used%: NaN%
  Under replicated blocks: 0
  Blocks with corrupt replicas: 0
  Missing blocks: 0
  Missing blocks (with replication factor 1): 0

  -------------------------------------------------

That is the problem. I know there are many posts on this particular topic; I have gone through them and have already disabled the firewall, reformatted the cluster to fix the datanode clusterID, and resolved an IP issue in VirtualBox where I was getting duplicate packets when pinging the slaves from the master.
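One thing worth re-verifying after a reformat is that the clusterID the NameNode recorded actually matches the one each DataNode has on disk: with a mismatch, the DataNode process shows up in jps but silently refuses to register with the NameNode, which matches the symptoms above. A minimal sketch for checking this (the storage paths in the comments are assumptions; substitute whatever dfs.namenode.name.dir and dfs.datanode.data.dir point to in your hdfs-site.xml):

```shell
# Print the clusterID recorded in a Hadoop storage directory's VERSION file.
# A DataNode whose clusterID differs from the NameNode's will not register.
cluster_id() {
    grep '^clusterID=' "$1/current/VERSION"
}

# Hypothetical paths -- replace with your configured storage directories:
#   cluster_id /hadoop/dfs/name    # run on the NameNode
#   cluster_id /hadoop/dfs/data    # run on each DataNode
```

If the two IDs differ, either delete the DataNode's data directory and restart it (it will re-register with the current ID), or copy the NameNode's clusterID into the DataNode's VERSION file.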
The datanodes do not seem to start. On the one occasion when, luckily, they did, I got the following error while copying a file into HDFS.

[root@master ~]# hdfs dfs -moveFromLocal /home/master/Downloads/citibike.tar /user/citibike
17/09/01 12:17:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/01 12:17:34 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/citibike._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1628)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3121)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3045)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:493)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)

at org.apache.hadoop.ipc.Client.call(Client.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1588)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1373)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
moveFromLocal: File /user/citibike._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

The fsck command runs fine, but it is of no use, since it reports the same thing as dfsadmin -report.
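Given the duplicate-packet symptom in VirtualBox, another common culprit worth ruling out is /etc/hosts mapping the master's hostname to a loopback address: the NameNode's RPC port then listens only on 127.x.x.x, so DataNodes on slave1 and slave2 can never register, which would explain jps showing live DataNode processes while dfsadmin -report counts zero. A hedged check (the hostname "master" is taken from the prompts above; adjust for your cluster):

```shell
# Report whether HOSTNAME is mapped to a loopback address in HOSTSFILE
# (defaults to /etc/hosts). A loopback mapping on the master means remote
# DataNodes cannot reach the NameNode's RPC port.
loopback_mapping() {
    hostname=$1
    hostsfile=${2:-/etc/hosts}
    if grep -E "^(127\.|::1).*[[:space:]]$hostname([[:space:]]|\$)" "$hostsfile"; then
        echo "WARNING: $hostname resolves to loopback -- slaves cannot reach the NameNode"
    else
        echo "OK: no loopback mapping for $hostname"
    fi
}

# Run on the master node:
#   loopback_mapping master
```

If the warning fires, bind the hostname to the machine's LAN IP in /etc/hosts on all nodes and restart HDFS; the DataNodes should then appear in dfsadmin -report.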
