I had everything set up and could run Hadoop (1.1.2) on a single node. However, after changing the relevant files (/etc/hosts, *-site.xml), I cannot add a datanode to the cluster, and I keep getting the following error on the slave.
Does anyone know how to fix this?
2013-05-13 15:36:10,135 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-05-13 15:36:11,137 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-05-13 15:36:12,140 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2 Answers

xpszyzbs1#
Check the value of fs.default.name in the core-site.xml conf file (on every node in the cluster). It needs to be the network name of the namenode; I suspect you still have it set to hdfs://localhost:54310. Failing that, check that there is no mention of localhost anywhere in the Hadoop configuration files on any node in the cluster.
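A minimal core-site.xml sketch, assuming a hypothetical namenode hostname of master (replace it with your namenode's actual network name):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- "master" is a hypothetical hostname; it must resolve to the
         namenode's real address on every node, not to 127.0.0.1 -->
    <value>hdfs://master:54310</value>
  </property>
</configuration>

After editing the file on all nodes, restart the cluster so the datanodes pick up the new namenode address.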
jv4diomz2#
Try replacing localhost with the namenode's IP address or network name.
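For example, assuming a hypothetical namenode at 192.168.1.10 named master and one slave at 192.168.1.11, the /etc/hosts file on every node might contain:

192.168.1.10    master
192.168.1.11    slave1

Make sure master is not also mapped to a loopback address such as 127.0.0.1 or 127.0.1.1 (a common Ubuntu default); otherwise the namenode binds only to the loopback interface and the datanodes cannot reach it.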