hadoop: datanode process gets killed

olmpazwi  posted on 2021-06-03 in Hadoop

I am currently using hadoop-2.0.3-alpha, and after I could work with HDFS perfectly (copying files into HDFS, successful access from an external framework, using the web frontend), the datanode process stops after a while whenever my VM is restarted. The namenode process and all YARN processes keep working without a problem. I installed Hadoop in an additional user's folder, since I also still have Hadoop 0.2 installed, which worked fine as well. Looking at the log files of all datanode processes, I get the following information:

2013-04-11 16:23:50,475 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-04-11 16:24:17,451 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-04-11 16:24:23,276 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-04-11 16:24:23,279 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-04-11 16:24:23,480 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is user-VirtualBox
2013-04-11 16:24:28,896 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2013-04-11 16:24:29,239 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2013-04-11 16:24:38,348 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-04-11 16:24:44,627 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-04-11 16:24:45,163 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2013-04-11 16:24:45,164 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2013-04-11 16:24:45,164 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2013-04-11 16:24:45,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 0.0.0.0:50075
2013-04-11 16:24:45,508 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2013-04-11 16:24:45,536 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2013-04-11 16:24:45,576 INFO org.mortbay.log: jetty-6.1.26
2013-04-11 16:25:18,416 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2013-04-11 16:25:42,670 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2013-04-11 16:25:44,955 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2013-04-11 16:25:45,483 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2013-04-11 16:25:47,079 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2013-04-11 16:25:47,660 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (storage id unknown) service to localhost/127.0.0.1:8020 starting to offer service
2013-04-11 16:25:50,515 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-04-11 16:25:50,631 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2013-04-11 16:26:15,068 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hadoop/workspace/hadoop_space/hadoop23/dfs/data/in_use.lock acquired by nodename 3099@user-VirtualBox
2013-04-11 16:26:15,720 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-474150866-127.0.1.1-1365686732002 (storage id DS-317990214-127.0.1.1-50010-1365505141363) service to localhost/127.0.0.1:8020
java.io.IOException: Incompatible clusterIDs in /home/hadoop/workspace/hadoop_space/hadoop23/dfs/data: namenode clusterID = CID-1745a89c-fb08-40f0-a14d-d37d01f199c3; datanode clusterID = CID-bb3547b0-03e4-4588-ac25-f0299ff81e4f
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:850)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:821)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
at java.lang.Thread.run(Thread.java:722)
2013-04-11 16:26:16,212 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-474150866-127.0.1.1-1365686732002 (storage id DS-317990214-127.0.1.1-50010-1365505141363) service to localhost/127.0.0.1:8020
2013-04-11 16:26:16,276 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-474150866-127.0.1.1-1365686732002 (storage id DS-317990214-127.0.1.1-50010-1365505141363)
2013-04-11 16:26:18,396 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2013-04-11 16:26:18,940 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2013-04-11 16:26:19,668 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at user-VirtualBox/127.0.1.1
************************************************************/

Any ideas? Maybe I made a mistake during the installation? But what is strange is that it worked once. I should also mention that when I am logged in as my additional user and run the command
./hadoop-daemon.sh start namenode (and the same for the datanode), I have to prepend sudo.
I used this installation guide: http://jugnu-life.blogspot.ie/2012/0...rial-023x.html
By the way, I am using the Oracle Java 7 version.


6pp0gazn 1#

You need to delete both the
c:\hadoop\data\dfs\datanode and
c:\hadoop\data\dfs\namenode folders.
If you don't have those folders, open your c:\hadoop\etc\hadoop\hdfs-site.xml file and take the paths of those folders from it, so you can delete them next. For me it says:

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop/data/dfs/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop/data/dfs/datanode</value>
</property>

Run the format namenode command from c:\hadoop\bin: hdfs namenode -format. It should work now!
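
For the asker's Linux VM, the equivalent full reset would look roughly like the following. This is a minimal sketch: the data directory is taken from the log above, while the name directory is an assumption, so check dfs.namenode.name.dir in your own hdfs-site.xml first. Be aware that reformatting destroys everything stored in HDFS:

    # stop HDFS before touching the storage directories
    stop-dfs.sh
    rm -rf /home/hadoop/workspace/hadoop_space/hadoop23/dfs/data   # datanode dir, from the log above
    rm -rf /home/hadoop/workspace/hadoop_space/hadoop23/dfs/name   # assumed namenode dir; verify in hdfs-site.xml
    hdfs namenode -format   # generates a fresh clusterID that the datanode will adopt
    start-dfs.sh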


ippsafx7 2#

The datanode is dying because of incompatible clusterIDs. To fix this problem, if you are using Hadoop 2.x, you have to delete everything inside the folder specified in hdfs-site.xml by dfs.datanode.data.dir (but not the folder itself).
The clusterID is kept in that folder. Delete the contents and restart with start-dfs.sh. This should work!!!
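
In the asker's setup that folder is visible in the log, so concretely this would be something like the following sketch (it assumes that path, and it deletes all block data stored on this node):

    stop-dfs.sh
    # empty the directory contents (including current/VERSION, which stores the
    # old clusterID), but keep the directory itself
    rm -rf /home/hadoop/workspace/hadoop_space/hadoop23/dfs/data/*
    start-dfs.sh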


7bsow1i6 3#

I think the recommended way, which does not require deleting the data directory, is to simply change the clusterID variable in the datanode's VERSION file.
If you look in your daemons directory, you will see the datanode directory, for example:

data/hadoop/daemons/datanode

The VERSION file should look something like this:

cat current/VERSION
    # Tue Oct 14 17:31:58 CDT 2014
    storageID=DS-23bf7f3a-085c-4531-808f-801ff6d52d14
    clusterID=CID-bb3547b0-03e4-4588-ac25-f0299ff81e4f
    cTime=0
    datanodeUuid=63154929-ae68-4149-9f75-9a6558545041
    storageType=DATA_NODE
    layoutVersion=-55

You need to change the clusterID to the first value in the error message, so in your case that would be CID-1745a89c-fb08-40f0-a14d-d37d01f199c3 instead of CID-bb3547b0-03e4-4588-ac25-f0299ff81e4f.
The updated file should then look like this, with the modified clusterID:

cat current/VERSION 
    #Tue Oct 14 17:31:58 CDT 2014
    storageID=DS-23bf7f3a-085c-4531-808f-801ff6d52d14
    clusterID=CID-1745a89c-fb08-40f0-a14d-d37d01f199c3
    cTime=0
    datanodeUuid=63154929-ae68-4149-9f75-9a6558545041
    storageType=DATA_NODE
    layoutVersion=-55

Restart Hadoop and the datanode should start up fine.
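
The edit can also be scripted. A minimal sketch, assuming the VERSION path shown above (sed keeps a .bak backup of the original file):

    # replace the datanode's clusterID with the namenode's, taken from the error message
    sed -i.bak 's/^clusterID=.*/clusterID=CID-1745a89c-fb08-40f0-a14d-d37d01f199c3/' \
        data/hadoop/daemons/datanode/current/VERSION
    hadoop-daemon.sh stop datanode
    hadoop-daemon.sh start datanode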


owfi6suc 4#

The problem is probably that the namenode was formatted after the cluster was set up while the datanodes were not, so the slaves still reference the old namenode.
We have to delete and recreate the folder /home/hadoop/dfs/data on the datanode's local filesystem:
Check your hdfs-site.xml file to see where dfs.data.dir points,
then delete that folder,
and then restart the datanode daemon on the machine.
The steps above should recreate the folder and resolve the problem; put together they look like the sketch below.
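
A minimal sketch, assuming dfs.data.dir resolves to /home/hadoop/dfs/data as stated above:

    hadoop-daemon.sh stop datanode
    rm -rf /home/hadoop/dfs/data     # removes the stale storage and its old clusterID
    mkdir -p /home/hadoop/dfs/data   # recreate the empty folder
    hadoop-daemon.sh start datanode  # re-registers with the current namenode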
If the above instructions do not work, please share your configuration information.
