Cannot start Hadoop (3.1.0) in pseudo-distributed mode on Ubuntu (16.04)

czfnxgou · published 2021-05-31 in Hadoop
Follow (0) | Answers (1) | Views (479)

I am trying to follow the Getting Started guide on the Apache Hadoop site, specifically the pseudo-distributed configuration guide for Apache Hadoop 3.1.0.

However, I cannot get the Hadoop NameNode and DataNode to start. Can anyone suggest anything, even just something I could run to debug or investigate further?

At the end of the log I see an error message (I am not sure whether it is significant or a red herring).

```
2018-04-18 14:15:40,003 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2018-04-18 14:15:40,006 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 0
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2018-04-18 14:15:40,014 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 11 msec
2018-04-18 14:15:40,028 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-04-18 14:15:40,028 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2018-04-18 14:15:40,029 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000
2018-04-18 14:15:40,031 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2018-04-18 14:15:40,031 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Initializing quota with 4 thread(s)
2018-04-18 14:15:40,033 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Quota initialization completed in 2 milliseconds name space=1 storage space=0 storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
2018-04-18 14:15:40,037 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2018-04-18 14:15:40,232 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
2018-04-18 14:15:40,236 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 1: SIGHUP
2018-04-18 14:15:40,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at c0315/127.0.1.1
```
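The interesting part is the tail end: the NameNode comes up cleanly (RPC on port 9000; zero blocks is normal right after a format) and is then killed from outside by SIGTERM/SIGHUP about 200 ms later, so the question becomes what sends the signal. The full daemon logs are the first place to look; a minimal sketch, assuming the default log location under the install directory (the user and hostname parts of the file names will differ per machine):

```bash
# Daemon logs default to $HADOOP_HOME/logs, named hadoop-<user>-<daemon>-<host>.log
ls -l "$HADOOP_HOME/logs/"

# The SHUTDOWN_MSG above comes from the NameNode log; the DataNode log may
# show why it never registered ("0 racks and 0 datanodes" in the log above)
tail -n 100 "$HADOOP_HOME"/logs/hadoop-*-namenode-*.log
tail -n 100 "$HADOOP_HOME"/logs/hadoop-*-datanode-*.log
```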

I have confirmed that I can `ssh localhost` without a password prompt. I have also run the following steps from the Apache Getting Started guide above:

```
$ bin/hdfs namenode -format
$ sbin/start-dfs.sh
```
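One caveat when repeating these two steps: re-formatting the NameNode while old storage directories still exist can leave the DataNode with a mismatched clusterID, so it starts and then drops out. With the guide's default configuration the storage lives under `/tmp/hadoop-<user>` (the default `hadoop.tmp.dir`), so a clean retry looks roughly like this sketch (assumes the defaults have not been overridden, and that there is no data worth keeping):

```bash
sbin/stop-dfs.sh              # stop any half-started daemons first
rm -rf /tmp/hadoop-"$USER"    # default hadoop.tmp.dir; wipes NameNode and DataNode storage
bin/hdfs namenode -format     # re-format against the now-empty storage
sbin/start-dfs.sh             # starts NameNode, DataNode, SecondaryNameNode
jps                           # verify the three daemons stay up
```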
but I cannot get past step 3, browsing to http://localhost:9870/. When I run `jps` from the terminal prompt, all I get is:

```
14900 Jps
```

I was expecting to see my nodes listed. I would be happy to attach the full logs.

Can anyone help me debug this?
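For comparison, on a working pseudo-distributed setup `jps` should show the three HDFS daemons started by `start-dfs.sh` alongside `Jps` itself, something like the following (PIDs are of course illustrative):

```
14321 NameNode
14457 DataNode
14642 SecondaryNameNode
14900 Jps
```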
Java version, `$ java --version`:

```
java 9.0.4
Java(TM) SE Runtime Environment (build 9.0.4+11)
Java HotSpot(TM) 64-Bit Server VM (build 9.0.4+11, mixed mode)
```

EDIT 1: I repeated the steps with Java 8 as well and got the same error message.
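Worth noting: Apache's compatibility matrix lists Java 8 as the supported runtime for Hadoop 3.1, so the Java 9 attempt was never going to be reliable, and the switch to Java 8 is worth pinning explicitly rather than relying on whatever `java` is first on the `PATH`. A minimal sketch in `etc/hadoop/hadoop-env.sh`, assuming the stock Ubuntu OpenJDK 8 package path:

```bash
# etc/hadoop/hadoop-env.sh
# Point the Hadoop daemons at a Java 8 JDK explicitly (this is the usual
# Ubuntu location for the openjdk-8-jdk package; adjust for your install)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```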
EDIT 2: Following the comment suggestion below, I have double-checked that I am now definitely pointing at Java 8, and I have also commented out the localhost 127.0.0.0 setting in the /etc/hosts file.
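For context: Debian/Ubuntu-family systems normally map the machine's own hostname to 127.0.1.1 in /etc/hosts (which matches the `c0315/127.0.1.1` in the shutdown message above), and the Hadoop ConnectionRefused wiki page linked in the error below specifically recommends removing that mapping on single-node setups. A sketch of the kind of edit being described (contents illustrative):

```
# /etc/hosts
127.0.0.1   localhost
#127.0.1.1  c0315       # hostname alias commented out while debugging HDFS
```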



Ubuntu version, `$ lsb_release -a`:

```
No LSB modules are available.
Distributor ID: neon
Description:    KDE neon User Edition 5.12
Release:        16.04
Codename:       xenial
```

I tried running the command `bin/hdfs version`:

```
Hadoop 3.1.0
Source code repository https://github.com/apache/hadoop -r 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
Compiled by centos on 2018-03-30T00:00Z
Compiled with protoc 2.5.0
From source with checksum 14182d20c972b3e2105580a1ad6990
This command was run using /home/steelydan.com/roycecollige/Apps/hadoop-3.1.0/share/hadoop/common/hadoop-common-3.1.0.jar
```

When I try `bin/hdfs groups`, it does not return, but gives me:

```
2018-04-18 15:33:34,590 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
```

When I try `$ bin/hdfs lsSnapshottableDir`:

```
lsSnapshottableDir: Call From c0315/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
```
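Both client failures point the same way: `Retrying connect` and `Connection refused` mean nothing is listening on localhost:9000 by the time the client runs, which is consistent with the NameNode having already been killed rather than with a client-side problem. That is quick to confirm with standard Linux tools, independent of Hadoop:

```bash
# Is anything bound to the NameNode RPC port?
ss -tln | grep ':9000' || echo "nothing listening on port 9000"

# Are any HDFS daemons alive at all?
jps
```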

When I try `$ bin/hdfs classpath`:

```
/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/etc/hadoop:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/common/lib/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/common/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/hdfs:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/hdfs/lib/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/hdfs/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/mapreduce/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/yarn:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/yarn/lib/*:/home/steelydan.com/roycecoolige/Apps/hadoop-3.1.0/share/hadoop/yarn/*
```

core-site.xml:
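For reference, the pseudo-distributed configuration in the Apache guide being followed sets `fs.defaultFS` to `hdfs://localhost:9000`, which matches the RPC port in the NameNode log above; the guide's version of the file looks like this (not necessarily byte-identical to the poster's copy):

```xml
<!-- etc/hadoop/core-site.xml, as given in the Hadoop 3.1.0 single-node guide -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
```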

I still haven't figured it out (I just tried it again, because I miss KDE neon so much), but even when :9000 is not in use, the OS sends a SIGTERM in my case as well.
Sad to say, the only way around it I have found is to reinstall Ubuntu.
