Error writing to a Hadoop Docker cluster from Java via HDFS: there are 3 datanode(s) running and 3 node(s) are excluded in this operation

nsc4cvqm · posted 2021-07-15 in Hadoop

I am running a Hadoop cluster on Docker, and when I try to write to HDFS from Java I get the following error. I don't know what is causing it:

    Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/javadeveloperzone/javareadwriteexample/read_write_hdfs_example.txt could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2219)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2789)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)

This is the code I use to write to HDFS. When it runs, the directory and file are created (I can see them at http://localhost:9870/explorer.html#/), but the file size is 0:

    public static void writeFileToHDFS() throws IOException {
        Configuration configuration = new Configuration();
        configuration.set("fs.defaultFS", "hdfs://localhost:9000");
        FileSystem fileSystem = FileSystem.get(configuration);
        // Create a path
        String fileName = "read_write_hdfs_example.txt";
        Path hdfsWritePath = new Path("/user/javadeveloperzone/javareadwriteexample/" + fileName);
        FSDataOutputStream fsDataOutputStream = fileSystem.create(hdfsWritePath, true);
        BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(fsDataOutputStream, StandardCharsets.UTF_8));
        bufferedWriter.write("Java API to write data in HDFS");
        bufferedWriter.newLine();
        bufferedWriter.close();
        fileSystem.close();
    }
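A note on what I suspect (this is my assumption, not something I have confirmed): when the Java client runs on the Docker host, the NameNode hands back the DataNodes' container-internal hostnames/IPs, which the host cannot reach, so the client excludes all three DataNodes and the file is created with size 0. A minimal sketch of the client-side setting often used in this situation, mirroring the configuration object in the method above:

```java
import org.apache.hadoop.conf.Configuration;

// Sketch only: extra client-side setting for a Dockerized cluster.
Configuration configuration = new Configuration();
configuration.set("fs.defaultFS", "hdfs://localhost:9000");
// Ask the client to contact DataNodes by hostname instead of their
// container-internal IPs. The host then also needs to resolve
// datanode1/datanode2/datanode3 (e.g. via /etc/hosts) and reach their
// transfer port 9864.
configuration.set("dfs.client.use.datanode.hostname", "true");
```

`dfs.client.use.datanode.hostname` is a standard HDFS client property; whether it alone fixes this setup depends on the host being able to resolve and reach the datanode containers.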

Repository: https://github.com/nsquare-jdzone/hadoop-examples/tree/master/readwritehdfsexample, from this tutorial: https://javadeveloperzone.com/hadoop/java-read-write-files-hdfs-example/
The Docker cluster was set up following this tutorial: https://clubhouse.io/developer-how-to/how-to-set-up-a-hadoop-cluster-in-docker/
In short, that tutorial uses the Big Data Europe repository (https://github.com/big-data-europe/docker-hadoop) and modifies docker-compose.yml to run multiple datanodes instead of a single one. The tutorial's version lags behind the current Big Data Europe repository, so I changed the docker-compose.yml file to:

    version: "3"
    services:
      namenode:
        image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
        container_name: namenode
        restart: always
        ports:
          - 9870:9870
          - 9000:9000
        volumes:
          - hadoop_namenode:/hadoop/dfs/name
        environment:
          - CLUSTER_NAME=test
        env_file:
          - ./hadoop.env
      datanode1:
        image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
        container_name: datanode1
        restart: always
        volumes:
          - hadoop_datanode1:/hadoop/dfs/data
        environment:
          SERVICE_PRECONDITION: "namenode:9870"
        env_file:
          - ./hadoop.env
      datanode2:
        image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
        container_name: datanode2
        restart: always
        volumes:
          - hadoop_datanode2:/hadoop/dfs/data
        environment:
          SERVICE_PRECONDITION: "namenode:9870"
        env_file:
          - ./hadoop.env
      datanode3:
        image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
        container_name: datanode3
        restart: always
        volumes:
          - hadoop_datanode3:/hadoop/dfs/data
        environment:
          SERVICE_PRECONDITION: "namenode:9870"
        env_file:
          - ./hadoop.env
      resourcemanager:
        image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8
        container_name: resourcemanager
        restart: always
        environment:
          SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864"
        env_file:
          - ./hadoop.env
      nodemanager1:
        image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8
        container_name: nodemanager
        restart: always
        environment:
          SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864 resourcemanager:8088"
        env_file:
          - ./hadoop.env
      historyserver:
        image: bde2020/hadoop-historyserver:2.0.0-hadoop3.2.1-java8
        container_name: historyserver
        restart: always
        environment:
          SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864 resourcemanager:8088"
        volumes:
          - hadoop_historyserver:/hadoop/yarn/timeline
        env_file:
          - ./hadoop.env
    volumes:
      hadoop_namenode:
      hadoop_datanode1:
      hadoop_datanode2:
      hadoop_datanode3:
      hadoop_historyserver:
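One detail I notice in the file above (an observation, not a confirmed fix): only the NameNode's ports (9870 and 9000) are published to the host, so a client running outside Docker cannot reach any DataNode's transfer port 9864, which would match the "3 node(s) are excluded" symptom. A sketch of how the `datanode1` service could publish its port; the `9864:9864` host-port mapping is my assumption, and each datanode would need a distinct host port:

```yaml
  datanode1:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode1
    restart: always
    ports:
      - 9864:9864   # publish the DataNode transfer port to the host
    volumes:
      - hadoop_datanode1:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env
```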

Any help would be greatly appreciated.
