My test environment
I am trying to deploy a Hadoop cluster in my test environment, based on 3 nodes:
1 namenode (master: 172.30.10.64)
2 datanodes (slave1: 172.30.10.72 and slave2: 172.30.10.62)
I configured the master's property files on the namenode and the slaves' property files on the datanodes.
Master's files
hosts:
127.0.0.1 localhost
172.30.10.64 master
172.30.10.62 slave2
172.30.10.72 slave1
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
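As a quick sanity check (my addition, not in the original post), hostname resolution can be verified on every node before starting anything; each name should map to the LAN address, not to 127.0.0.1:
# Run on each node; master/slave1/slave2 should resolve to the 172.30.10.x addresses
getent hosts master slave1 slave2
# The node's own hostname should also be consistent with the hosts file
hostname -f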
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
</property>
</configuration>
core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
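Side note: fs.default.name is the deprecated Hadoop 1.x name for fs.defaultFS; it still works in 2.7.5 but triggers a deprecation warning. The effective value can be confirmed with getconf (a hedged check, not from the original post):
# Should print hdfs://master:9000 on every node
hdfs getconf -confKey fs.defaultFS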
yarn-site.xml:
<configuration>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8050</value>
</property>
</configuration>
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
</configuration>
I have a slaves file:
slave1
slave2
masters file:
master
Slaves' files:
I only list the files that differ from the master's.
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
</property>
</configuration>
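Since dfs.datanode.data.dir points at a local path, it is also worth confirming (my suggestion, not from the post) that the directory exists on each slave and is owned by the user running the datanode:
# On each slave: the directory must exist and be writable by hduser
ls -ld /usr/local/hadoop_tmp/hdfs/datanode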
My issue
I launch the cluster from /usr/local/hadoop/sbin:
./start-dfs.sh && ./start-yarn.sh
And this is what I get:
hduser@master:/usr/local/hadoop/sbin$ ./start-dfs.sh && ./start-yarn.sh
18/03/14 10:45:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
hduser@master's password:
master: starting namenode, logging to /usr/local/hadoop-2.7.5/logs/hadoop-hduser-namenode-master.out
hduser@slave2's password: hduser@slave1's password:
slave2: starting datanode, logging to /usr/local/hadoop-2.7.5/logs/hadoop-hduser-datanode-slave2.out
So I opened slave2's log file:
2018-03-14 10:46:05,494 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-03-14 10:46:06,495 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-03-14 10:46:07,496 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/172.30.10.64:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
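This retry loop means the datanode cannot reach port 9000 on the master. A useful check (my addition, not from the original post) is to confirm on the master which interface the NameNode actually bound to, and then test reachability from a slave; if the hosts file resolves master to 127.0.0.1 first, the NameNode binds to loopback and slaves see exactly this error:
# On master: expect the listener on 172.30.10.64:9000 or 0.0.0.0:9000, not 127.0.0.1:9000
netstat -tlnp | grep 9000
# On slave1/slave2: test TCP reachability of the NameNode RPC port
nc -zv master 9000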
What I did
I tried a few things, but so far nothing has worked:
ssh from master to the slaves and between the slaves works fine
hdfs namenode -format on my master node
recreated the namenode and datanode folders
opened port 9000 on my master VM
firewall is disabled: sudo ufw status --> disabled
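One more routine check worth adding to that list (my suggestion, not from the original post): verify with jps which daemons actually came up on each node:
# On master: expect NameNode, SecondaryNameNode, ResourceManager
# On each slave: expect DataNode, NodeManager
jps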
I am a bit lost, because everything seems fine and I don't understand why I can't get my Hadoop cluster to start.
1 Answer
I may have found the answer:
I regenerated the ssh keys from the master node and copied them to the slave nodes. It seems to work now.
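For reference, a minimal sketch of that key regeneration (my reconstruction; the post does not show the exact commands):
# On master, as hduser: generate a new passwordless key pair
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# Authorize it locally and on both slaves
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh-copy-id hduser@slave1
ssh-copy-id hduser@slave2
This also matches the password prompts visible in the start-dfs.sh output above, which indicate passwordless ssh was not working when the cluster was started.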