Hadoop says it is starting the DataNode, but it never appears in jps afterwards

rsaldnfx · asked 2021-06-03 in Hadoop
Follow (0) | Answers (2) | Views (285)

DataNode log:
`ulimit -a` for user sumitkhanna:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63202
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63202
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)

Essentially, it fails with an error when copying local files to HDFS.

vojdkbi0 · 1#

The output of jps is inconsistent; it depends on several factors and can usually be fixed, but I stopped trying. Use `ps` as your source of truth:

ps ax | grep DataNode

Hopefully you will see it there.
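One small refinement of the command above (a common shell idiom, not something stated in the answer): wrapping the first letter of the pattern in brackets keeps `grep` from matching its own process line in the `ps` output.

```shell
# The pattern '[D]ataNode' still matches the literal text "DataNode",
# but the grep process's own command line contains "[D]ataNode",
# so grep does not match itself -- you see only a real DataNode
# process, if one is running.
ps ax | grep '[D]ataNode'
```

If this prints nothing, the DataNode JVM is not running at all, regardless of what jps reports.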

c9qzyr3d · 2#

This means your DataNode has not started: an exception was thrown during DataNode startup. Look at the DataNode log for the stack trace.
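To locate that stack trace, one way (assuming a default Hadoop layout where logs live under `$HADOOP_HOME/logs`; adjust the path for your install) is:

```shell
# DataNode log files are usually named hadoop-<user>-datanode-<host>.log.
# Print every ERROR/FATAL/Exception line with five lines of context,
# which is normally enough to see the start of the stack trace.
grep -n -A 5 -E 'ERROR|FATAL|Exception' "$HADOOP_HOME"/logs/hadoop-*-datanode-*.log
```

The first exception in the log is usually the root cause; later errors tend to be follow-on failures.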
