I am getting an error when appending to a file on HDFS (Cloudera 2.0.0-cdh4.2.0). The use case that causes the error is:
1. Create a file on the file system (DistributedFileSystem). OK.
2. Append to the previously created file. ERROR.

    OutputStream stream = FileSystem.append(filePath);
    stream.write(fileContents);

This then throws:

    Exception in thread "main" java.io.IOException: Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[host1:50010, host2:50010], original=[host1:50010, host2:50010])
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:792)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:852)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:958)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:469)
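For context, here is a minimal, self-contained sketch of the create-then-append sequence described above. The class name, path, and file contents are made up for illustration and are not the asker's actual code; the failure occurs on the append() call.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendRepro {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();          // reads core-site.xml / hdfs-site.xml from the classpath
            FileSystem fs = FileSystem.get(conf);              // a DistributedFileSystem when fs.defaultFS is an hdfs:// URI
            Path filePath = new Path("/tmp/append-test.txt");  // hypothetical path

            // Step 1: create the file -- this succeeds.
            FSDataOutputStream out = fs.create(filePath, true);
            out.write("first line\n".getBytes("UTF-8"));
            out.close();

            // Step 2: append to the same file -- this is where the
            // "Failed to add a datanode" IOException is raised.
            FSDataOutputStream appendStream = fs.append(filePath);
            appendStream.write("second line\n".getBytes("UTF-8"));
            appendStream.close();
        }
    }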
Some relevant HDFS configuration:

    dfs.replication set to 2
    dfs.client.block.write.replace-datanode-on-failure.policy set to true
    dfs.client.block.write.replace-datanode-on-failure set to DEFAULT
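As an aside, the exception text itself points at the client-side datanode-replacement settings. In Hadoop 2.x the boolean switch is dfs.client.block.write.replace-datanode-on-failure.enable and the policy string is dfs.client.block.write.replace-datanode-on-failure.policy. A sketch of how a client could set them (drop-in replacement for the first two lines of main() in the sketch above) is shown below; disabling the replacement is only a workaround for clusters with very few datanodes, not a fix for the underlying replication mismatch:

    Configuration conf = new Configuration();
    // Keep the datanode-replacement feature enabled...
    conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
    // ...but never try to add a replacement datanode to the write pipeline.
    // NEVER is only sensible on clusters with very few datanodes.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    FileSystem fs = FileSystem.get(conf);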
Any ideas? Thanks!
1 answer
This was solved by running a command on the file system to change the replication factor of the files that were already there.

The old files on the file system had their replication factor set to 3. Setting dfs.replication to 2 in hdfs-site.xml does not solve this, because that setting is not applied to files that already exist. So if you remove machines from the cluster, you had better check the replication factor of the files and of the file system.
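The answer does not show the exact command it ran. The usual shell way to change the replication factor of existing files is hadoop fs -setrep -R 2 <path>; as a sketch, the same change can also be made through the FileSystem API. The root path "/" and the target replication of 2 below are assumptions based on the question's dfs.replication value:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    public class LowerReplication {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path root = new Path("/");  // or just the directory that holds the affected files

            // dfs.replication only applies to files created after the change,
            // so walk the existing files and lower their replication factor explicitly.
            RemoteIterator<LocatedFileStatus> files = fs.listFiles(root, true);
            while (files.hasNext()) {
                LocatedFileStatus status = files.next();
                if (status.getReplication() > 2) {
                    fs.setReplication(status.getPath(), (short) 2);
                }
            }
        }
    }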