Hadoop connection error on put/copyFromLocal

ql3eal8s  posted 2023-08-03  in  Hadoop

I am following the tutorial to install hadoop. I am now stuck at the
"Copying local example data to HDFS"
step.
I get the following connection error:

  12/10/26 17:29:16 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
  12/10/26 17:29:17 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
  12/10/26 17:29:18 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
  12/10/26 17:29:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
  12/10/26 17:29:20 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
  12/10/26 17:29:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
  12/10/26 17:29:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
  12/10/26 17:29:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
  12/10/26 17:29:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
  12/10/26 17:29:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).
  Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused

This is basically the same problem as in Errors while running hadoop.
The thing is, I have already disabled IPv6 as described in that tutorial, but it did not help. Is there anything I have missed?
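For reference, disabling IPv6 is usually done through /etc/sysctl.conf; the keys below are the commonly used ones and are an assumption about what the tutorial prescribes:

  # Disable IPv6 system-wide by appending to /etc/sysctl.conf
  # (assumed tutorial settings; apply with `sudo sysctl -p` or a reboot):
  echo "net.ipv6.conf.all.disable_ipv6 = 1"     | sudo tee -a /etc/sysctl.conf
  echo "net.ipv6.conf.default.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
  echo "net.ipv6.conf.lo.disable_ipv6 = 1"      | sudo tee -a /etc/sysctl.conf
  cat /proc/sys/net/ipv6/conf/all/disable_ipv6  # prints 1 once IPv6 is disabled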
Edit: I repeated the tutorial on a second machine with a freshly installed Ubuntu and compared the two setups step by step. It turned out there was a bug in the hduser .bashrc configuration; after fixing it, everything worked fine...
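For anyone hitting the same wall: the Hadoop-related entries in hduser's ~/.bashrc typically look like the sketch below. The paths are assumptions that vary per install; the point is that a typo in any of them can keep the daemons from starting, which produces exactly the "Connection refused" retries above.

  # Sketch of typical Hadoop 1.x entries in hduser's ~/.bashrc
  # (paths are assumptions - adjust to your actual install locations):
  export HADOOP_HOME=/usr/local/hadoop
  export JAVA_HOME=/usr/lib/jvm/java-6-openjdk   # your JDK path
  export PATH=$PATH:$HADOOP_HOME/bin             # makes hadoop, start-all.sh findable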


rbl8hiat1#

I get the exact same error message when I try a hadoop fs <anything> while the DataNode/NameNode are not running, so I would guess the same is happening for you.
Type jps in your terminal. If everything is running, the output should look like this:

  16022 DataNode
  16524 Jps
  15434 TaskTracker
  15223 JobTracker
  15810 NameNode
  16229 SecondaryNameNode

I would bet that your DataNode or NameNode is not running. If anything is missing from the jps printout, restart it; a minimal restart sequence is sketched below.
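A minimal sketch, assuming the classic Hadoop 1.x layout from the tutorial where the control scripts are on the PATH:

  jps            # list running Java processes
  stop-all.sh    # stop any half-started daemons first
  start-all.sh   # start NameNode, DataNode, JobTracker, TaskTracker, ...
  jps            # all six processes above should now appear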


bz4sfanl2#

After finishing the whole configuration, run this command:
hadoop namenode -format
and start all the services with this command:
start-all.sh
That will solve your problem.
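A combined sketch of the above, with one caveat the answer omits: formatting the NameNode erases everything already stored in HDFS, so only run it on a fresh setup.

  stop-all.sh                # make sure no daemons are running
  hadoop namenode -format    # WARNING: erases all existing HDFS data
  start-all.sh               # start all HDFS and MapReduce daemons
  jps                        # verify NameNode, DataNode, etc. are up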


disho6za3#

1. Go to etc/hadoop/core-site.xml and check the value of fs.default.name. It should look like this:

     <property>
       <name>fs.default.name</name>
       <value>hdfs://localhost:54310</value>
     </property>
2. After finishing the whole configuration, run this command:
   hadoop namenode -format
3. Start all the services with this command:
   start-all.sh
That will solve your problem.
Your NameNode may also be stuck in safe mode; run bin/hdfs dfsadmin -safemode leave or bin/hadoop dfsadmin -safemode leave, then perform steps 2 and 3.
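Safe mode is a different failure from a dead daemon: the NameNode shows up in jps but rejects writes such as put/copyFromLocal. A short sketch, using the Hadoop 1.x command names from the answers above (local.txt and /input are hypothetical paths):

  hadoop dfsadmin -safemode get      # report whether safe mode is ON or OFF
  hadoop dfsadmin -safemode leave    # force the NameNode out of safe mode
  hadoop fs -put local.txt /input    # retry the copy that was failing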
