Hadoop NameNode port in use

Asked by mrzz3bfm on 2021-06-04 in Hadoop (3 answers)

This is actually a standby HA NameNode. It is set up with the same configuration as the primary server, and hdfs namenode -bootstrapStandby ran successfully. It starts to come up on the standard HTTP port 50070 defined in the config file:

<property>
  <name>dfs.namenode.http-address.ha-hadoop.namenode2</name>
  <value>namenode2:50070</value>
</property>

Startup begins normally, then hits this:

15/02/02 08:06:17 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://hadoop1:50070
15/02/02 08:06:17 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
15/02/02 08:06:17 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
15/02/02 08:06:17 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
15/02/02 08:06:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
15/02/02 08:06:17 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
15/02/02 08:06:17 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
15/02/02 08:06:17 INFO http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: hadoop1:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:890)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:695)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:885)
        ... 8 more
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
15/02/02 08:06:17 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
15/02/02 08:06:17 FATAL namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: hadoop1:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:890)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:695)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:885)
        ... 8 more
15/02/02 08:06:17 INFO util.ExitUtil: Exiting with status 1
15/02/02 08:06:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1.marketstudies.com/192.168.1.125

************************************************************/

I tried changing the HTTP address port by setting:

<property>
  <name>dfs.namenode.http-address.local1-hadoop.hadoop1</name>
  <value>hadoop1:10070</value>
</property>
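To rule out a naming mismatch (the suffix of this key must match the values in dfs.nameservices and dfs.ha.namenodes.&lt;nameservice&gt; exactly, and my two snippets above use different IDs), hdfs getconf can print what the daemon will actually resolve. A sketch using the names from my first snippet:

# Print the values the NameNode will actually resolve; the key suffix
# must match dfs.nameservices and dfs.ha.namenodes.<nameservice>
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.namenode.http-address.ha-hadoop.namenode2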

But then I get the same error as above, just with the new port:

15/02/02 08:16:51 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://hadoop1:10070
...
java.net.BindException: Port in use: hadoop1:10070
...
java.net.BindException: Port in use: hadoop1:10070

This is the same configuration that works on the primary NameNode.

This question seems similar to mine, but the answers there did not help. I tried setting dfs.http.address, but that did not change anything; I believe it is a non-HA config option, replaced in HA setups by dfs.namenode.http-address.ha-name.namenodename. And as you can see here, nothing is actually listening on the HTTP port:


# netstat -anp |grep LIST

tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      946/sshd
tcp        0      0 0.0.0.0:46712           0.0.0.0:*               LISTEN      2066/java
tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      28892/java
tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN      28892/java
tcp        0      0 0.0.0.0:8480            0.0.0.0:*               LISTEN      1471/java
tcp        0      0 0.0.0.0:10050           0.0.0.0:*               LISTEN      2358/zabbix_agentd
tcp        0      0 0.0.0.0:50020           0.0.0.0:*               LISTEN      28892/java
tcp        0      0 0.0.0.0:8485            0.0.0.0:*               LISTEN      1471/java
tcp        0      0 0.0.0.0:8040            0.0.0.0:*               LISTEN      2066/java
tcp        0      0 0.0.0.0:8042            0.0.0.0:*               LISTEN      2066/java
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1020/mysqld
tcp6       0      0 :::22                   :::*                    LISTEN      946/sshd

I also tried starting it as root to see whether there was some kind of permissions problem on the listening port, but that produces the same error.
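Since the underlying exception is "Cannot assign requested address" rather than "Address already in use", it may be a name-resolution problem rather than a busy port. A quick sketch (standard Linux tools) to compare what the hostname resolves to against the addresses the host actually owns:

# Compare the resolved address for hadoop1 with the local interfaces
getent hosts hadoop1
ip -4 addr show | grep inet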

vmdwslir 1#

Download osquery from https://code.facebook.com/projects/658950180885092 and install it.
Run the osqueryi command.
At the prompt, use this SQL to list all running Java processes and find their PIDs:

select name, path, pid from processes where name = "java";

On a Mac you will see something like this:

+------+--------------------------------------------------------------------------+-------+
| name | path                                                                     | pid   |
+------+--------------------------------------------------------------------------+-------+
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59446 |
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59584 |
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59676 |
| java | /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java | 59790 |
+------+--------------------------------------------------------------------------+-------+

Issue sudo kill -9 <pid> for each of those processes to make sure whatever is holding 0.0.0.0:50070 is terminated.
Then retry sbin/start-dfs.sh; the NameNode should come up this time.
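As an alternative to installing osquery, lsof can identify a listener on the port directly (a sketch; these flags work on both Linux and macOS, and <pid> stands for whatever PID it reports):

# Show any process listening on TCP port 50070
sudo lsof -nP -iTCP:50070 -sTCP:LISTEN
sudo kill -9 <pid>   # replace <pid> with the PID from the lsof output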

q9yhzks0 2#

Check which processes are running and using Java with:

ps aux | grep java

Then kill all the Hadoop-related processes using the PIDs from the command above, like this:

sudo kill -9 <pid>
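Roughly equivalent one-liners, assuming every process matching "hadoop" really is one you want gone (list first, then kill):

pgrep -af java            # list Java PIDs with full command lines (GNU pgrep)
sudo pkill -9 -f hadoop   # kill anything whose command line mentions hadoop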

bxgwgixi 3#

Found the problem. It goes back to this server's short history: its IP address was changed at some point, but /etc/hosts was only appended to with the new entry instead of having the old one replaced. I believe this confused Hadoop's startup, because it was trying to open 50070 on an interface that does not exist. The "Port in use" error made that a little misleading.
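To illustrate, a hypothetical /etc/hosts in that state (the old IP here is invented for the example): the resolver takes the first matching line, so the NameNode tries to bind an address the host no longer owns, which surfaces as the bind error above. Deleting the stale line fixes it:

192.168.1.50    hadoop1 hadoop1.marketstudies.com   # stale entry from before the IP change - remove
192.168.1.125   hadoop1 hadoop1.marketstudies.com   # current address - keep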
