Hadoop DataNode runs only once and then won't start again on Windows 10

axr492tv asked on 2021-06-26 in Java

I am trying to install Hadoop and run a simple example program.
The datanode starts only once; after that, every attempt to start it again fails with this error:

2021-01-06 23:48:25,610 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/C:/hadoop/sbin/datanode
2021-01-06 23:48:25,666 WARN checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/C:/hadoop/sbin/datanode
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:608)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:823)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:737)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:705)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)              
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-01-06 23:48:25,671 ERROR datanode.DataNode: Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:233)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2841)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2754)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2798)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2942)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2966)
2021-01-06 23:48:25,675 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0

I have consulted many different articles, but to no avail. I also tried another version of Hadoop, but the problem persists. Since I am just getting started, I can't fully understand this problem, so I need help.
These are my configurations:

- For core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

- For mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

- For yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

- For hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>C:\hadoop\data\namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>datanode</value>
  </property>
</configuration>

jckbn6z7 1#

After each successful run of the datanode, a directory named "datanode" is created inside the sbin directory, and it has to be deleted before the datanode can be run again.
I don't know the logic or the reason behind it, but it seems to work.
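
The likely reason is visible in the log line "[DISK]file:/C:/hadoop/sbin/datanode": dfs.datanode.data.dir is set to the relative path "datanode", which the daemon resolves against the directory it was started from (C:\hadoop\sbin). A minimal sketch of the fix, replacing just the dfs.datanode.data.dir property in the hdfs-site.xml shown above; C:\hadoop\data\datanode is an assumed location, chosen to sit next to the namenode directory already configured:

<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <!-- absolute path (assumed location), so nothing is created under sbin -->
    <value>C:\hadoop\data\datanode</value>
  </property>
</configuration>

The datanode initializes an empty storage directory as a fresh volume on startup, so after this change the old sbin\datanode directory can simply be deleted.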


n53p2ov0 2#

If you don't stop the datanode and namenode gracefully (e.g. you just shut down the computer), corruption can occur, and that will prevent them from starting again without a reformat.
It is also generally not recommended to run HDFS on the Windows filesystem or on NTFS-formatted drives.
