start-dfs.sh throws "port 22: Connection timed out" error

Asked by krcsximq on 2021-05-29, in Hadoop
Follow (0) | Answers (1) | Views (596)

I am trying to install Hadoop on Ubuntu in pseudo-distributed mode. start-dfs.sh gives me an error:

  Starting namenodes on [10.1.37.12]
  10.1.37.00: ssh: connect to host 10.1.37.12 port 22: Connection timed out
  localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-superuser-datanode-superuser-Satellite-E45W-C.out
  Starting secondary namenodes [0.0.0.0]
  0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-superuser-secondarynamenode-superuser-Satellite-E45W-C.out

I have already added port 22 to the firewall.
jps output (note that no NameNode process is listed):

  2562 DataNode
  3846 Jps
  2743 SecondaryNameNode

Can someone help me understand what is going wrong here?
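A "Connection timed out" at the TCP level usually means no host answered at that address at all. A minimal diagnostic sketch, assuming a Linux host where `hostname -I` is available (10.1.37.12 is the address taken from the error message above):

```shell
#!/bin/sh
# Sketch: check whether a given address is actually assigned to this machine.
# If the namenode address that start-dfs.sh resolved is not local and no other
# reachable host owns it, SSH will time out exactly as shown in the question.
is_local_addr() {
    # Compare against all interface addresses, plus loopback as a fallback.
    for a in $(hostname -I 2>/dev/null) 127.0.0.1; do
        [ "$a" = "$1" ] && return 0
    done
    return 1
}

if is_local_addr 10.1.37.12; then   # address from the error message
    echo "10.1.37.12 is local -- check instead that sshd is listening on port 22"
else
    echo "10.1.37.12 is not a local address -- fix /etc/hosts and the Hadoop config"
fi
```

If the address is not local, no amount of ufw configuration on this machine will help, since the SSH packets are leaving for the LAN and nothing is answering.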

What I have tried so far:

  1. export HADOOP_SSH_OPTS="-p 22" -- done
  2. Added port 22 to the firewall ("sudo ufw allow 22")
  3. Tried stopping the firewall entirely ("sudo ufw disable")
  4. Ran ssh -vvv 10.1.37.12; the output is:

  OpenSSH_7.9p1 Ubuntu-10, OpenSSL 1.1.1b 26 Feb 2019
  debug1: Reading configuration data /etc/ssh/ssh_config
  debug1: /etc/ssh/ssh_config line 19: Applying options for *
  debug2: resolve_canonicalize: hostname 10.1.37.12 is address
  debug2: ssh_connect_direct
  debug1: Connecting to 10.1.37.12 [10.1.37.12] port 22.
  debug1: connect to address 10.1.37.12 port 22: Connection timed out
  ssh: connect to host 10.1.37.12 port 22: Connection timed out

Answer 1 (from 92dk7w1h):

Please check your /etc/hosts file: it needs to map this host's name to its actual private IP address (within your subnet range). The same address must also be updated in the workers file.
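The answer above can be sketched concretely. A hypothetical example, assuming the hostname `hadoop-master` and that 10.1.37.12 is the address really assigned to the machine's network interface (Hadoop 3.x uses a `workers` file; in Hadoop 2.x the same file is named `slaves`):

```text
# /etc/hosts -- the IP must match the machine's real interface address
127.0.0.1       localhost
10.1.37.12      hadoop-master

# $HADOOP_HOME/etc/hadoop/workers -- one worker hostname per line
hadoop-master
```

If the address in /etc/hosts is stale (for example, the machine got a new DHCP lease), start-dfs.sh will try to SSH to the old address and time out, which matches the error in the question.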
