Hadoop components not starting

rjee0c15 · posted 2021-06-03 in Hadoop

I am new to Hadoop and have been following Michael Noll's single-node installation tutorial (http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/).
When I run /usr/local/hadoop/bin/start-all.sh, it should normally start the NameNode, DataNode, JobTracker, and TaskTracker on the machine.
However, only the TaskTracker comes up, as this trace shows:

  hduser@srv591 ~ $ /usr/local/hadoop/bin/start-all.sh
  starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-srv591.out
  localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-srv591.out
  localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-srv591.out
  localhost: Exception in thread "main" org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/namesecondary is in an inconsistent state: checkpoint directory does not exist or is not accessible.
  localhost:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.recoverCreate(SecondaryNameNode.java:729)
  localhost:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:208)
  localhost:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:150)
  localhost:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:676)
  starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-jobtracker-srv591.out
  localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-tasktracker-srv591.out
  hduser@srv591 ~ $ /usr/local/java/bin/jps
  19469 TaskTracker
  19544 Jps

Tariq's solution helps, but the JobTracker and NameNode still fail to start. Here is the content of the NameNode log:

  hduser@srv591 ~ $ cat /usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-srv591.log
  2013-09-21 00:30:13,765 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
  /************************************************************
  STARTUP_MSG: Starting NameNode
  STARTUP_MSG: host = srv591.sd-france.net/46.21.207.111
  STARTUP_MSG: args = []
  STARTUP_MSG: version = 1.2.1
  STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
  STARTUP_MSG: java = 1.7.0_40
  ************************************************************/
  2013-09-21 00:30:13,904 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
  2013-09-21 00:30:13,913 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
  2013-09-21 00:30:13,914 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
  2013-09-21 00:30:13,914 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
  2013-09-21 00:30:14,140 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
  2013-09-21 00:30:14,144 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
  2013-09-21 00:30:14,148 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
  2013-09-21 00:30:14,149 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
  2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
  2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
  2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 932184064
  2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
  2013-09-21 00:30:14,164 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
  2013-09-21 00:30:14,178 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
  2013-09-21 00:30:14,178 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
  2013-09-21 00:30:14,178 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
  2013-09-21 00:30:14,185 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
  2013-09-21 00:30:14,185 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
  2013-09-21 00:30:14,335 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
  2013-09-21 00:30:14,370 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
  2013-09-21 00:30:14,370 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
  2013-09-21 00:30:14,373 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot access storage directory /app/hadoop/tmp/dfs/name
  2013-09-21 00:30:14,374 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
  org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
      at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
  2013-09-21 00:30:14,404 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
      at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
  2013-09-21 00:30:14,405 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
  /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at srv591.sd-france.net/46.21.207.111
  ************************************************************/
  2013-09-21 00:31:08,869 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
  /************************************************************
  STARTUP_MSG: Starting NameNode
  STARTUP_MSG: host = srv591.sd-france.net/46.21.207.111
  STARTUP_MSG: args = []
  STARTUP_MSG: version = 1.2.1
  STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
  STARTUP_MSG: java = 1.7.0_40
  ************************************************************/
  2013-09-21 00:31:09,012 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
  2013-09-21 00:31:09,021 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
  2013-09-21 00:31:09,022 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
  2013-09-21 00:31:09,022 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
  2013-09-21 00:31:09,240 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
  2013-09-21 00:31:09,244 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
  2013-09-21 00:31:09,248 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
  2013-09-21 00:31:09,249 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
  2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
  2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
  2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 932184064
  2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
  2013-09-21 00:31:09,264 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
  2013-09-21 00:31:09,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
  2013-09-21 00:31:09,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
  2013-09-21 00:31:09,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
  2013-09-21 00:31:09,286 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
  2013-09-21 00:31:09,286 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
  2013-09-21 00:31:09,457 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
  2013-09-21 00:31:09,496 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
  2013-09-21 00:31:09,496 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
  2013-09-21 00:31:09,501 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage /app/hadoop/tmp/dfs/name. The directory is already locked.
  2013-09-21 00:31:09,501 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
  java.io.IOException: Cannot lock storage /app/hadoop/tmp/dfs/name. The directory is already locked.
      at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:599)
      at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:452)
      at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:299)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
  2013-09-21 00:31:09,508 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Cannot lock storage /app/hadoop/tmp/dfs/name. The directory is already locked.
      at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:599)
      at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:452)
      at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:299)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
      at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
  2013-09-21 00:31:09,509 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
  /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at srv591.sd-france.net/46.21.207.111
  ************************************************************/

And here is the DataNode log:

  ************************************************************/
  2013-09-21 01:01:24,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
  /************************************************************
  STARTUP_MSG: Starting DataNode
  STARTUP_MSG: host = srv591.sd-france.net/46.21.207.111
  STARTUP_MSG: args = []
  STARTUP_MSG: version = 1.2.1
  STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
  STARTUP_MSG: java = 1.7.0_40
  ************************************************************/
  2013-09-21 01:01:24,855 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
  2013-09-21 01:01:24,870 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
  2013-09-21 01:01:24,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
  2013-09-21 01:01:24,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
  2013-09-21 01:01:25,204 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
  2013-09-21 01:01:25,224 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
  2013-09-21 01:01:25,499 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1590050521; datanode namespaceID = 1863017904
      at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
      at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
  2013-09-21 01:01:25,500 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
Answer 1 (fdbelqdn):

A default Hadoop installation keeps the DataNode and NameNode data directories under /tmp on the local disk. The correct approach is to create the data directories for the DataNode and NameNode manually on the local filesystem and to put their paths in hdfs-site.xml. Add the following properties to /etc/hadoop/hdfs-site.xml, reformat the NameNode, and restart it; that should fix the problem (a sketch of the concrete commands follows the snippet below).

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/1/dfs/nn,file:///data/2/dfs/nn</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/1/dfs/dn,file:///data/2/dfs/dn</value>
  </property>
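
A hedged sketch of those steps, using the example paths from the snippet above; the hduser:hadoop owner is an assumption taken from the tutorial the question follows. Note that on Hadoop 1.x (the logs above show version 1.2.1) these settings are named dfs.name.dir and dfs.data.dir; dfs.namenode.name.dir and dfs.datanode.data.dir are their Hadoop 2.x names.

  # create the directories referenced in hdfs-site.xml (example paths from the snippet)
  sudo mkdir -p /data/1/dfs/nn /data/2/dfs/nn /data/1/dfs/dn /data/2/dfs/dn
  # owner/group assumed from the tutorial setup; adjust to your environment
  sudo chown -R hduser:hadoop /data/1/dfs /data/2/dfs

  # reformat the NameNode (WARNING: this erases existing HDFS metadata), then restart
  /usr/local/hadoop/bin/hadoop namenode -format
  /usr/local/hadoop/bin/stop-all.sh
  /usr/local/hadoop/bin/start-all.sh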
Answer 2 (r9f1avp5):

Make sure you have created the /app/hadoop/tmp/dfs/namesecondary directory and that it has the proper permissions. Checking the logs (NN, DN, SNN, JT) will also help. If you still face the problem, show us the logs along with your configuration files.
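
A minimal sketch of that step, assuming the hduser:hadoop owner from the tutorial the question follows:

  # create the SecondaryNameNode checkpoint directory with suitable ownership
  sudo mkdir -p /app/hadoop/tmp/dfs/namesecondary
  sudo chown -R hduser:hadoop /app/hadoop/tmp/dfs/namesecondary
  sudo chmod 750 /app/hadoop/tmp/dfs/namesecondary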
In response to your comment:

  org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

It looks like you have not created the directories that your configuration files point to; the exception states this clearly. Make sure you create, with the proper permissions, every directory that you use as the value of a configuration property.
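
For example, a quick check and fix, assuming the /app/hadoop/tmp base directory shown in the logs above:

  # verify the NameNode storage directory exists and is accessible to the Hadoop user
  ls -ld /app/hadoop/tmp/dfs/name

  # if it is missing, create the base directory with the right owner, then format it
  # (WARNING: formatting erases any existing HDFS metadata)
  sudo mkdir -p /app/hadoop/tmp
  sudo chown -R hduser:hadoop /app/hadoop/tmp
  /usr/local/hadoop/bin/hadoop namenode -format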
