Error starting the Hadoop namenode

qpgpyjmq · posted 2021-06-03 in Hadoop

I am trying to set up a pseudo-distributed Hadoop system on my Ubuntu machine, but I cannot start the namenode (the other daemons, such as the jobtracker, start fine). My start commands are:

```shell
./hadoop namenode -format
./start-all.sh
```

I checked the namenode log in logs/hadoop-mongodb-namenode-mongodb.log:

```
2013-12-25 13:44:39,796 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-12-25 13:44:39,796 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-12-25 13:44:39,797 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-12-25 13:44:39,797 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-12-25 13:44:39,797 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-12-25 13:44:39,799 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-12-25 13:44:39,809 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-12-25 13:44:39,810 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
2013-12-25 13:44:39,810 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.
2013-12-25 13:44:39,812 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:9000
2013-12-25 13:44:39,847 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-12-25 13:44:39,878 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-12-25 13:44:39,884 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-12-25 13:44:39,888 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-12-25 13:44:39,889 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mongodb cause:java.net.BindException: Address already in use
2013-12-25 13:44:39,889 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedExceptionjava.lang.InterruptedException: sleep interrupted
2013-12-25 13:44:39,890 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
        at java.lang.Thread.run(Thread.java:701)
2013-12-25 13:44:39,890 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2013-12-25 13:44:39,905 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/var/hadoop/hadoop-1.2.1/dfs.name.dir/current/edits
2013-12-25 13:44:39,905 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/var/hadoop/hadoop-1.2.1/dfs.name.dir/current/edits
2013-12-25 13:44:39,909 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2013-12-25 13:44:39,909 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2013-12-25 13:44:39,909 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:174)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:602)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:517)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:395)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:395)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:337)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

2013-12-25 13:44:39,910 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at mongodb/192.168.10.2
************************************************************/
```

That is the error message. Clearly something is wrong with a port! Below are my conf files. core-site.xml:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

hdfs-site.xml:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/var/hadoop/hadoop-1.2.1/dfs.name.dir</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/var/hadoop/hadoop-1.2.1/dfs.data.dir</value>
  </property>
</configuration>
```

No matter which other port I change the configuration to before restarting Hadoop, the error persists! Can anyone help me?
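Since the log points at `java.net.BindException: Address already in use` on the web UI port (50070), it can help to confirm which of the namenode's ports are actually occupied before restarting. The following is a small illustrative sketch (not part of Hadoop; the port list is an assumption based on the defaults seen in the log above):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if binding to (host, port) fails, i.e. some process already holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR lets us ignore sockets lingering in TIME_WAIT; an
        # actively listening process still makes bind() fail with EADDRINUSE.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return False
        except OSError:  # the Python analogue of Java's BindException
            return True

if __name__ == "__main__":
    # Default Hadoop 1.x namenode ports: 9000 (RPC, per core-site.xml), 50070 (web UI)
    for port in (9000, 50070):
        print(port, "in use" if port_in_use(port) else "free")
```

A port reported as "in use" here would produce exactly the `BindException` in the log; `netstat -tlnp | grep 50070` (or `lsof -i :50070`) then shows which process holds it.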

6pp0gazn #1

Try deleting the HDFS data directory. Instead of formatting the namenode before starting HDFS, start HDFS first and check the `jps` output. If everything looks fine, then try formatting the namenode and check again. If the problem persists, post the detailed log.
P.S.: don't kill the processes directly. Use `stop-all.sh`, or whatever the proper way to stop Hadoop is in your setup.

pengsaosao #2

A datanode on one of the slave machines in my cluster threw a similar port-binding exception:

```
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException: Address already in use
```

I noticed that the datanode's default web interface port (50075) was already bound by another application:

```shell
[ap2]-> netstat -an | grep -i 50075
tcp        0      0 10.0.1.1:45674      10.0.1.1:50075      ESTABLISHED
tcp        0      0 10.0.1.1:50075      10.0.1.1:45674      ESTABLISHED
[ap2]->
```

I changed the datanode web interface port in conf/hdfs-site.xml:

```xml
<property>
  <name>dfs.datanode.http.address</name>
  <value>10.0.1.1:50080</value>
  <description>Datanode http port</description>
</property>
```

That resolved the problem. Similarly, you can move the namenode web UI by setting `dfs.http.address` in `conf/hadoop-site.xml`, e.g. to localhost:9090, but make sure the port is actually available.
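Written out as a config fragment, that suggestion might look like the following sketch (the value localhost:9090 is just an example from the answer; any free port works):

```xml
<!-- conf/hadoop-site.xml: move the namenode web UI off the default 50070 -->
<property>
  <name>dfs.http.address</name>
  <value>localhost:9090</value>
  <description>Namenode HTTP/web UI address; must be a port no other process holds</description>
</property>
```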

