Hadoop: binding multiple IP addresses to a cluster NameNode

wn9m85ua · published 2021-06-04 in Hadoop
Follow (0) | Answers (2) | Views (625)

I have a four-node Hadoop cluster on SoftLayer. The master (NameNode) has a public IP address for external access and a private IP address for cluster access. The slave nodes (DataNodes) have private IP addresses, and I am trying to connect them to the master without having to assign a public IP address to every slave node.
I have realized that setting fs.defaultFS to the NameNode's public address allows external access, except that the NameNode then listens only on that address for incoming connections, not on the private address. So I get ConnectionRefused exceptions in the DataNode logs as they try to connect to the NameNode's private IP address.
I thought the solution might be to bind both the public and private IP addresses to the NameNode, so that external access is preserved and the slave nodes can still connect.
Is there a way to bind both of these addresses to the NameNode so that it listens on both?
Edit: Hadoop version 2.4.1.
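For reference, the setup the question describes presumably looks something like the following sketch; the hostname master-public.example.com is a hypothetical placeholder, not taken from the question:

```xml
<!-- core-site.xml (sketch): fs.defaultFS pointing at the NameNode's
     public address, which is what makes the NameNode listen only on
     that interface. "master-public.example.com" is a hypothetical
     hostname used for illustration only. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-public.example.com:8020</value>
</property>
```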


zour9fqk1#

HDFS support for multihomed networks is covered in Cloudera's "HDFS Support for Multihomed Networks" documentation and in the Hortonworks multihoming parameters:

<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the RPC server will bind to. If this optional address is
    set, it overrides only the hostname portion of dfs.namenode.rpc-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>

In addition, it is recommended to change dfs.namenode.rpc-bind-host, dfs.namenode.servicerpc-bind-host, dfs.namenode.http-bind-host and dfs.namenode.https-bind-host. By default, HDFS endpoints are specified as either hostnames or IP addresses; in either case, the HDFS daemons bind to a single IP address, making the daemons unreachable from other networks.
The solution is to use separate settings for the server endpoints that force binding to the wildcard IP address INADDR_ANY, i.e. 0.0.0.0. Do not provide a port number with these settings.
Note: hostnames are preferred over IP addresses in the master/slave configuration files.

<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the service RPC server will bind to. If this optional address is
    set, it overrides only the hostname portion of dfs.namenode.servicerpc-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>

<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the HTTP server will bind to. If this optional address
    is set, it overrides only the hostname portion of dfs.namenode.http-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node HTTP server listen on all
    interfaces by setting it to 0.0.0.0.
  </description>
</property>

<property>
  <name>dfs.namenode.https-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the HTTPS server will bind to. If this optional address
    is set, it overrides only the hostname portion of dfs.namenode.https-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node HTTPS server listen on all
    interfaces by setting it to 0.0.0.0.
  </description>
</property>

Note: before starting the modification, stop the agent and server as follows:

    service cloudera-scm-agent stop
    service cloudera-scm-server stop

If the cluster is configured with primary and secondary NameNodes, the modification needs to be made on both nodes. Make the modification with the server and agent stopped, save the hdfs-site.xml file, then start the server and agent on the NameNodes, as well as the agent on the DataNodes (doing this across the whole cluster does no harm either), using:

    service cloudera-scm-agent start
    service cloudera-scm-server start
The same solution can be implemented for IBM BigInsights:

To configure HDFS to bind to all the interfaces, add the following configuration variable using Ambari under the section HDFS
-> Configs -> Advanced -> Custom hdfs-site

    dfs.namenode.rpc-bind-host = 0.0.0.0

    Restart HDFS to apply the configuration change . 

    Verify if port 8020 is bound and listening to requests from all the interfaces using the following command. 

    netstat -anp|grep 8020
    tcp 0 0 0.0.0.0:8020 0.0.0.0:* LISTEN 15826/java

IBM BigInsights: How do I configure the Hadoop client port 8020 to bind to all network interfaces?
In Cloudera, the HDFS configuration has a property called Bind NameNode to Wildcard Address. Simply check that box and it will bind the service to 0.0.0.0; then restart the HDFS service:

    On the Home > Status tab, click to the right of the service
    name and select Restart. Click Start on the next screen to confirm.
    When you see a Finished status, the service has restarted.

References: Starting, Stopping, Refreshing, and Restarting a Cluster; Starting, Stopping, and Restarting Services

xzv2uavs2#

The asker edited this into their question as the answer:
In hdfs-site.xml, set dfs.namenode.rpc-bind-host to 0.0.0.0 and Hadoop will listen on both the private and public network interfaces, allowing remote access as well as DataNode access.
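Put together, a minimal hdfs-site.xml fragment for this fix might look like the following sketch; note that no port is given with the bind-host setting, as the Hortonworks guidance advises:

```xml
<!-- hdfs-site.xml (sketch): bind the NameNode RPC server to all
     interfaces. Clients and DataNodes still locate the NameNode via
     fs.defaultFS / dfs.namenode.rpc-address; only the bind address
     becomes the wildcard. -->
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
```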
