NameNode and DataNode are not listed by jps

fjnneemd asked on 2021-06-02 in Hadoop
Follow (0) | Answers (6) | Views (496)

Environment: Ubuntu 14.04, Hadoop 2.6
When I run start-all.sh and then jps, the DataNode is not listed in the terminal:

>jps
9529 ResourceManager
9652 NodeManager
9060 NameNode
10108 Jps
9384 SecondaryNameNode

Following this answer: DataNode process is not running in Hadoop
I tried what it gives as the best solution:

bin/stop-all.sh (or stop-dfs.sh and stop-yarn.sh in the 2.x series)
rm -Rf /app/tmp/hadoop-your-username/*
bin/hadoop namenode -format (or hdfs namenode -format in the 2.x series)

However, now I get this:

>jps
20369 ResourceManager
26032 Jps
20204 SecondaryNameNode
20710 NodeManager

As you can see, now even the NameNode is gone. Please help.
DataNode logs: https://gist.github.com/fifiteen82726/b561bbd9cdcb9bf36032
NameNode logs: https://gist.github.com/fifiteen82726/02dcf095b5a23c1570b0
mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

UPDATE

coda@ubuntu:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/30 01:07:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out.1’: Permission denied
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-namenode-ubuntu.out: Permission denied
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.5’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.4’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.3’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.2’: Permission denied
localhost: mv: cannot move ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out’ to ‘/usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out.1’: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: ulimit -a for user coda
localhost: core file size          (blocks, -c) 0
localhost: data seg size           (kbytes, -d) unlimited
localhost: scheduling priority             (-e) 0
localhost: file size               (blocks, -f) unlimited
localhost: pending signals                 (-i) 3877
localhost: max locked memory       (kbytes, -l) 64
localhost: max memory size         (kbytes, -m) unlimited
localhost: open files                      (-n) 1024
localhost: pipe size            (512 bytes, -p) 8
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop/logs/hadoop-coda-datanode-ubuntu.out: Permission denied
Starting secondary namenodes [0.0.0.0]
coda@0.0.0.0's password: 
0.0.0.0: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
0.0.0.0: secondarynamenode running as process 20204. Stop it first.
15/04/30 01:07:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
resourcemanager running as process 20369. Stop it first.
coda@localhost's password: 
localhost: chown: changing ownership of ‘/usr/local/hadoop/logs’: Operation not permitted
localhost: nodemanager running as process 20710. Stop it first.
coda@ubuntu:/usr/local/hadoop/sbin$ jps
20369 ResourceManager
2934 Jps
20204 SecondaryNameNode
20710 NodeManager
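The `Operation not permitted` and `Permission denied` lines above mean that /usr/local/hadoop/logs is not owned by the user running the scripts (coda, in this transcript). A minimal sketch of the fix, demonstrated on a throwaway directory since the real path would need root; the real commands are shown in the comments and assume the username from the log:

```shell
# On the real machine the fix would be (user "coda" taken from the log above):
#   sudo chown -R coda:coda /usr/local/hadoop/logs
#   sudo chmod -R 755 /usr/local/hadoop/logs
# Below, the same mode change demonstrated on a temp directory.
logs=$(mktemp -d)          # stand-in for /usr/local/hadoop/logs
chmod 755 "$logs"          # owner rwx: hadoop-daemon.sh can now create .out files
touch "$logs/hadoop-coda-namenode-ubuntu.out"   # log creation/rotation now succeeds
mode=$(stat -c %a "$logs")
echo "$mode"
```

Once the logs directory is owned and writable by the user that launches start-all.sh, the mv/chown errors disappear.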

UPDATE

hadoop@ubuntu:/usr/local/hadoop/sbin$ $HADOOP_HOME ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/05/03 09:32:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
hadoop@localhost's password: 
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-ubuntu.out
hadoop@localhost's password: 
localhost: datanode running as process 28584. Stop it first.
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-ubuntu.out
15/05/03 09:32:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-ubuntu.out
hadoop@localhost's password: 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-ubuntu.out
hadoop@ubuntu:/usr/local/hadoop/sbin$ jps
6842 Jps
28584 DataNode
tcbh2hod

tcbh2hod1#

I faced the same problem: jps did not show the DataNode.
Deleting the contents of the hdfs folder and changing the folder's permissions solved it for me:

sudo rm -r /usr/local/hadoop_store/hdfs/*
sudo chmod -R 755 /usr/local/hadoop_store/hdfs
hadoop namenode -format
start-all.sh
jps
zlhcx6iw

zlhcx6iw2#

FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: SecureMain java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"
This error may be due to wrong permissions on the /usr/local/hadoop_store/hdfs/datanode/ folder.
FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop_store/hdfs/namenode is in an inconsistent state: storage directory does not exist or is not accessible.
This error may be due to wrong permissions on the /usr/local/hadoop_store/hdfs/namenode folder, or because it does not exist. To fix it, follow one of these options:
Option 1:
If you do not have the folder /usr/local/hadoop_store/hdfs, create it and grant permissions as follows:

sudo mkdir /usr/local/hadoop_store/hdfs
sudo chown -R hadoopuser:hadoopgroup /usr/local/hadoop_store/hdfs
sudo chmod -R 755 /usr/local/hadoop_store/hdfs

Replace hadoopuser and hadoopgroup with your Hadoop username and Hadoop group name, respectively. Now try to start the Hadoop processes. If the problem persists, try Option 2.
Option 2:
Delete the contents of the /usr/local/hadoop_store/hdfs folder:

sudo rm -r /usr/local/hadoop_store/hdfs/*

Change the folder's permissions:

sudo chmod -R 755 /usr/local/hadoop_store/hdfs

Now start the Hadoop processes. It should work.
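Before restarting the daemons, it can help to confirm that the storage directories are actually usable by the current user, since that is what the two FATAL errors above boil down to. A small hypothetical helper (`preflight` is not part of Hadoop, just a sketch):

```shell
# preflight: report whether the given directory exists and is writable by
# the current user -- the two conditions the DataNode/NameNode errors check.
preflight() {
  if [ -d "$1" ] && [ -w "$1" ]; then
    echo "ok: $1"
  else
    echo "fix ownership/permissions on $1"
  fi
}
# On the cluster you would run, for example:
#   preflight /usr/local/hadoop_store/hdfs/namenode
#   preflight /usr/local/hadoop_store/hdfs/datanode
```

If either check fails, apply the chown/chmod commands from the options above before starting the daemons again.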
Note: if the error persists, post the new logs.
UPDATE:
If you have not created the Hadoop user and group yet, do it as follows:

sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop

Now change the ownership of /usr/local/hadoop and /usr/local/hadoop_store:

sudo chown -R hadoop:hadoop /usr/local/hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop_store

Switch to the hadoop user:

su - hadoop

Enter the hadoop user's password. The prompt should now look like hadoop@ubuntu:$. Now run: $HADOOP_HOME/bin/start-all.sh or sh /usr/local/hadoop/bin/start-all.sh
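After starting everything, a quick way to see which expected daemons are missing from the jps listing is to grep the output for each daemon name. A hypothetical helper, not part of Hadoop:

```shell
# check_daemons: given the text of `jps` output as $1, print each expected
# daemon that is absent. grep -w prevents "NameNode" from matching inside
# "SecondaryNameNode".
check_daemons() {
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    printf '%s\n' "$1" | grep -qw "$d" || echo "$d missing"
  done
}
# On a live cluster: check_daemons "$(jps)"
```

Run against the jps output quoted in the question, it would report DataNode (and later NameNode) as missing, which is exactly the symptom here.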

zy1mlcev

zy1mlcev3#

The solution is to first stop the NameNode: go to your /usr/local/hadoop directory and run bin/hdfs namenode -format. Then delete the hdfs and tmp directories from your home directory and recreate them:

mkdir ~/tmp
mkdir ~/hdfs
chmod 750 ~/hdfs

Go to the Hadoop directory and start Hadoop:

`sbin/start-dfs.sh`

It will then show the DataNode.

csbfibhn

csbfibhn4#

For this you need to grant permissions on the hdfs folder. Then run the following commands:
Create a group: sudo addgroup hadoop
Add your user to it: sudo usermod -a -G hadoop "ur_user" (you can see the current user with the who command)
Now change the owner of hadoop_store directly: sudo chown -R "ur_user":"ur_group" /usr/local/hadoop_store
Then format the NameNode again: hdfs namenode -format
Then start all the services and you can see the result. Now type jps (it works).

carvr3hs

carvr3hs5#

Faced the same problem: the NameNode service did not show up in the jps output.
Solution: it was a permissions problem with the directory /usr/local/hadoop_store/hdfs. Just change the permissions, format the NameNode, and restart Hadoop:
$ sudo chmod -R 755 /usr/local/hadoop_store/hdfs
$ hadoop namenode -format
$ start-all.sh
$ jps

deyfvvtc

deyfvvtc6#

One thing to remember when setting up permissions: ssh-keygen -t rsa -P "" should be run on the NameNode only. Then add the generated public key to all the DataNodes with ssh-copy-id -i ~/.ssh/id_rsa.pub, and the SSH permissions will be set. After that, no password is needed when starting DFS.
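The steps above can be sketched as follows. The DataNode hostname is a placeholder, and the key is generated in a temp directory here rather than ~/.ssh so the sketch does not touch real keys:

```shell
# Generate a passphrase-less RSA key (on the NameNode only).
keydir=$(mktemp -d)                               # stand-in for ~/.ssh
ssh-keygen -t rsa -P "" -f "$keydir/id_rsa" -q    # -q: no interactive output
# On the real cluster, copy the public key to every DataNode, e.g.:
#   ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@<datanode-host>
# After that, start-dfs.sh no longer prompts for a password for each daemon.
ls "$keydir"
```

This removes the repeated `coda@localhost's password:` prompts seen in the transcripts above.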
