Hadoop 2.7.2 - datanodes start and then stop

cigdeys3 · posted 2021-05-29 in Hadoop
Follow (0) | Answers (1) | Views (422)

Environment details:
I have a Hadoop 2.7.2 multi-node cluster on AWS (plain Apache Hadoop, not a vendor distribution): 1 namenode, 1 secondary namenode, and 3 datanodes, all running Ubuntu 14.04.
The cluster was built following this tutorial (http://mfaizmzaki.com/2015/12/17/how-to-install-hadoop-2-7-1-multi-node-cluster-on-amazon-aws-ec2-instance-improved-part-1/), which means the first (master) installation is cloned to the other machines and then customized.
The problem:
If I configure the cluster with only one datanode (specifically excluding the other two), each of the three datanodes works fine on its own.
As soon as I add a second datanode, the datanode that starts first logs a fatal error (see the log extract and the snapshot of the VERSION file below) and stops. The datanode that starts second then works fine...
Any suggestions? Am I doing something wrong by cloning the master's AMI onto the other machines? Thanks, everyone!
Log file

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Unsuccessfully sent block report 0x1858458671b, containing 1 storage report(s), of which we sent 0. The reports had 0 total blocks and used 0 RPC(s). This took 5 msec to generate and 35 msecs for RPC and NN processing. Got back no commands.

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1251070591-172.Y.Y.Y-1454167071207 (Datanode Uuid 54bc8b80-b84f-4893-8b96-36568acc5d4b) service to master/172.Y.Y.Y:9000 is shutting down org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.UnregisteredNodeException): Data node DatanodeRegistration(172.X.X.X:50010, datanodeUuid=54bc8b80-b84f-4893-8b96-36568acc5d4b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-8e09ff25-80fb-4834-878b-f23b3deb62d0;nsid=278157295;c=0) is attempting to report storage ID 54bc8b80-b84f-4893-8b96-36568acc5d4b. Node 172.Z.Z.Z:50010 is expected to serve this storage.

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-1251070591-172.31.34.94-1454167071207 (Datanode Uuid 54bc8b80-b84f-4893-8b96-36568acc5d4b) service to master/172.Y.Y.Y:9000

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-1251070591-172.Y.Y.Y-1454167071207 (Datanode Uuid 54bc8b80-b84f-4893-8b96-36568acc5d4b) 

INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Removing block pool BP-1251070591-172.31.34.94-1454167071207

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode

INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at HNDATA2/172.X.X.x
************************************************************/
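For context, the `UnregisteredNodeException` in the log above says two datanodes are reporting the same storage ID (`54bc8b80-...`) while the namenode expects a different node to serve it. That is the classic symptom of cloning an AMI whose datanode storage directory was already initialized: the clone carries the original node's `datanodeUuid` in its `VERSION` file. A sketch of how to check and reset this on each cloned datanode (the data directory path is an assumption; use whatever `dfs.datanode.data.dir` is set to in your `hdfs-site.xml`):

```shell
# Path is hypothetical -- substitute your dfs.datanode.data.dir value
DATA_DIR=/usr/local/hadoop/hdfs/datanode

# Inspect the cloned storage metadata: datanodeUuid, storageID, clusterID
cat "$DATA_DIR/current/VERSION"

# If two nodes show the same datanodeUuid: stop the datanode, wipe the
# cloned storage, and restart so a fresh UUID is generated on startup
/usr/local/hadoop/sbin/hadoop-daemon.sh stop datanode
rm -rf "$DATA_DIR/current"
/usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
```

Wiping `current/` is safe only on an empty, freshly cloned datanode; on a node already holding blocks it would destroy data.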
rjee0c15 (answer 1):

You must add the IP addresses of all three datanodes to the slaves file on the namenode, then restart the cluster. That will fix the problem.
slaves

<IPaddress of datanode1>
<IPaddress of datanode2>
<IPaddress of datanode3>
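The steps above can be sketched as shell commands on the namenode (the install path is an assumption based on a typical tarball install; both `stop-dfs.sh`/`start-dfs.sh` ship in Hadoop 2.7.2's `sbin/`):

```shell
# Install prefix is hypothetical -- adjust to your HADOOP_HOME
HADOOP_HOME=/usr/local/hadoop

# List every datanode, one IP per line, in the slaves file
cat > "$HADOOP_HOME/etc/hadoop/slaves" <<'EOF'
<IPaddress of datanode1>
<IPaddress of datanode2>
<IPaddress of datanode3>
EOF

# Restart HDFS so the namenode picks up the full datanode list
"$HADOOP_HOME/sbin/stop-dfs.sh"
"$HADOOP_HOME/sbin/start-dfs.sh"
```

After the restart, `hdfs dfsadmin -report` should show all three datanodes as live.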
