DFS Used% shows 100.00% in Hadoop after slave virtual machine shutdown

pu3pd22g  asked on 2021-06-02  in Hadoop
Follow (0) | Answers (1) | Views (639)

My slave virtual machine went down, and I suspect it is because DFS Used is at 100%. Can you suggest a systematic way to troubleshoot this? Is it a firewall problem, a capacity problem, or something else, and how do I fix it?

  ubuntu@anmol-vm1-new:~$ hadoop dfsadmin -report
  DEPRECATED: Use of this script to execute hdfs command is deprecated.
  Instead use the hdfs command for it.
  15/12/13 22:25:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  Configured Capacity: 845446217728 (787.38 GB)
  Present Capacity: 797579996211 (742.80 GB)
  DFS Remaining: 794296401920 (739.75 GB)
  DFS Used: 3283594291 (3.06 GB)
  DFS Used%: 0.41%
  Under replicated blocks: 1564
  Blocks with corrupt replicas: 0
  Missing blocks: 0
  -------------------------------------------------
  Datanodes available: 2 (4 total, 2 dead)

  Live datanodes:

  Name: 10.0.1.190:50010 (anmol-vm1-new)
  Hostname: anmol-vm1-new
  Decommission Status : Normal
  Configured Capacity: 422723108864 (393.69 GB)
  DFS Used: 1641142625 (1.53 GB)
  Non DFS Used: 25955075743 (24.17 GB)
  DFS Remaining: 395126890496 (367.99 GB)
  DFS Used%: 0.39%
  DFS Remaining%: 93.47%
  Configured Cache Capacity: 0 (0 B)
  Cache Used: 0 (0 B)
  Cache Remaining: 0 (0 B)
  Cache Used%: 100.00%
  Cache Remaining%: 0.00%
  Last contact: Sun Dec 13 22:25:51 UTC 2015

  Name: 10.0.1.193:50010 (anmol-vm4-new)
  Hostname: anmol-vm4-new
  Decommission Status : Normal
  Configured Capacity: 422723108864 (393.69 GB)
  DFS Used: 1642451666 (1.53 GB)
  Non DFS Used: 21911145774 (20.41 GB)
  DFS Remaining: 399169511424 (371.76 GB)
  DFS Used%: 0.39%
  DFS Remaining%: 94.43%
  Configured Cache Capacity: 0 (0 B)
  Cache Used: 0 (0 B)
  Cache Remaining: 0 (0 B)
  Cache Used%: 100.00%
  Cache Remaining%: 0.00%
  Last contact: Sun Dec 13 22:25:51 UTC 2015

  Dead datanodes:

  Name: 10.0.1.191:50010 (anmol-vm2-new)
  Hostname: anmol-vm2-new
  Decommission Status : Normal
  Configured Capacity: 0 (0 B)
  DFS Used: 0 (0 B)
  Non DFS Used: 0 (0 B)
  DFS Remaining: 0 (0 B)
  DFS Used%: 100.00%
  DFS Remaining%: 0.00%
  Configured Cache Capacity: 0 (0 B)
  Cache Used: 0 (0 B)
  Cache Remaining: 0 (0 B)
  Cache Used%: 100.00%
  Cache Remaining%: 0.00%
  Last contact: Sun Dec 13 21:20:12 UTC 2015

  Name: 10.0.1.192:50010 (anmol-vm3-new)
  Hostname: anmol-vm3-new
  Decommission Status : Normal
  Configured Capacity: 0 (0 B)
  DFS Used: 0 (0 B)
  Non DFS Used: 0 (0 B)
  DFS Remaining: 0 (0 B)
  DFS Used%: 100.00%
  DFS Remaining%: 0.00%
  Configured Cache Capacity: 0 (0 B)
  Cache Used: 0 (0 B)
  Cache Remaining: 0 (0 B)
  Cache Used%: 100.00%
  Cache Remaining%: 0.00%
  Last contact: Sun Dec 13 22:09:27 UTC 2015
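Note that in the report above the cluster-level DFS Used% is only 0.41%; the 100.00% figures appear under the two dead datanodes, whose configured capacity reads as 0 B. A quick way to pull the live/dead summary line out of the report (a sketch, assuming hadoop is on the PATH):

```shell
# Extract the dead-node count from the "Datanodes available: 2 (4 total, 2 dead)"
# summary line of a dfsadmin report.
hadoop dfsadmin -report 2>/dev/null |
  awk -F'[(,)]' '/^Datanodes available:/ { gsub(/^ +/, "", $3); print $3 }'
```

With the report shown above, this prints "2 dead", pointing straight at the two datanodes that need attention.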

rsl1atfo1#

There is only one filesystem in the virtual machine. Log in as root and run df -h (one of the mount points will show ~100% usage), then du -sh /* (which lists the size of each top-level directory).
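The two checks above can be combined into a one-liner that flags only the mount points above a usage threshold; this is a sketch, and the 90% threshold is an arbitrary choice:

```shell
# Flag any mount point at or above the threshold. df -P gives stable
# POSIX output; awk strips the '%' from the Use% column and compares.
THRESHOLD=90
df -P | awk -v t="$THRESHOLD" 'NR > 1 { gsub("%", "", $5); if ($5 + 0 >= t) print $6, $5 "%" }'
```

Whichever mount point this prints is where to focus the du -sh investigation.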
If any directory other than the namenode and datanode directories is taking up too much space, you can start cleaning there.
You can also run hadoop fs -du -s -h /user/hadoop to check the usage of that directory.
Identify all unnecessary directories and delete them by running hadoop fs -rm -R /user/hadoop/raw_data (-rm deletes, -R deletes recursively; be careful when using -R).
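Since -rm -R is destructive, it can help to wrap it so the command is printed before anything is executed. A sketch of such a wrapper; hdfs_rm and DRY_RUN are hypothetical names, not part of Hadoop:

```shell
# Hypothetical safety wrapper: with DRY_RUN=1 it only prints the delete
# command; set DRY_RUN=0 to actually execute it on a real cluster.
DRY_RUN=1
hdfs_rm() {
  target="$1"
  # -R recurses; omitting -skipTrash means deleted files go to .Trash first
  cmd="hadoop fs -rm -R $target"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}
hdfs_rm /user/hadoop/raw_data
```

Reviewing the printed command before flipping DRY_RUN to 0 avoids recursively deleting the wrong path.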
Run hadoop fs -expunge to empty the trash immediately; sometimes it needs to be run more than once.
Finally, hadoop fs -du -s -h / gives the HDFS usage of the entire filesystem, or you can run hadoop dfsadmin -report again to confirm that the storage was reclaimed.
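To confirm the reclaimed space without reading the whole report, the cluster-wide DFS Used% line can be pulled out directly; it is the first "DFS Used%" occurrence, before the per-datanode sections. A sketch, assuming hadoop is on the PATH:

```shell
# Print only the cluster summary's DFS Used% value; "exit" stops awk
# before the per-datanode "DFS Used%" lines are reached.
hadoop dfsadmin -report 2>/dev/null | awk -F': ' '/^DFS Used%/ { print $2; exit }'
```

On the report shown in the question this would print 0.41%, so a value near that after cleanup confirms the deletes and the expunge took effect.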
