SSH tunnel for accessing an EC2 Hadoop cluster

gopyfrb3  posted 2021-06-03 in Hadoop

Background:
I have set up a 3-node Cloudera Hadoop cluster on EC2 instances, and it works as expected.
I have a client program on my Windows machine that is used to load data from my machine into HDFS.
Details:
My client program is developed in Java; it reads data from the local Windows disk and writes it to HDFS.
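For reference, the write path is just the standard Hadoop FileSystem API, roughly along these lines (the local path and the fs.defaultFS value below are placeholders, not my actual code or configuration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsLoader {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // NameNode address as seen from the client; with a tunnel this points at the forwarded local port
            conf.set("fs.defaultFS", "hdfs://localhost:8020");   // placeholder, not my real setting
            FileSystem fs = FileSystem.get(conf);
            // copy a file from the local Windows disk into HDFS
            fs.copyFromLocalFile(new Path("C:/data/features.json"),       // placeholder local path
                                 new Path("/user/ubuntu/features.json"));
            fs.close();
        }
    }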
To load the data this way, I tried to create an SSH tunnel with PuTTY and then log in to the remote EC2 instance with my Windows username, which did not work. I can log in with the Unix username. Is that the expected behavior?
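What I am trying to build amounts to forwarding the NameNode RPC port (8020, the port shown in the log below) to my local machine. In OpenSSH terms, the equivalent of my PuTTY tunnel settings would be something like this (the key file and host names are placeholders):

    # forward local port 8020 to the NameNode's RPC port on its private address
    ssh -i my-key.pem -L 8020:<namenode-private-ip>:8020 ubuntu@<ec2-public-dns>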
I am not sure whether I created the tunnel correctly, but when I try to run my client program it gives me the following error:

PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are excluded in this operation.

6:32:45.711 PM     INFO     org.apache.hadoop.ipc.Server     

IPC Server handler 13 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 108.161.91.186:54097: error: java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
java.io.IOException: File /user/ubuntu/features.json could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1331)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2198)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:480)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)

Any ideas?

atmip9wb  1#

You can run hdfs fsck / -delete and then rebalance the datanodes.
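Concretely, that would look something like the following on one of the cluster nodes (the balancer threshold is only an example value, and depending on your setup you may need to run these as the hdfs superuser):

    hdfs fsck / -delete            # remove files whose blocks are missing or corrupt
    hdfs balancer -threshold 10    # redistribute blocks across the datanodes (10% is just an example)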
