all.sh

xxe27gdn posted on 2021-06-03 in Hadoop
Follow (0) | Answers (4) | Views (416)

I have set up a single-node Hadoop installation on my laptop. Environment: Ubuntu 12.10, Oracle JDK 1.7, Hadoop installed from a .deb file, located in /etc/hadoop and /usr/share/hadoop.
My configuration is in /usr/share/hadoop/templates/conf/core-site.xml, where I added two properties:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

In hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

In mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

When I run the start command:

hduser@sepdau:~$ start-all.sh

starting namenode, logging to /var/log/hadoop/hduser/hadoop-hduser-namenode-sepdau.com.out
localhost: starting datanode, logging to /var/log/hadoop/hduser/hadoop-hduser-datanode-sepdau.com.out
localhost: starting secondarynamenode, logging to /var/log/hadoop/hduser/hadoop-hduser-secondarynamenode-sepdau.com.out
starting jobtracker, logging to /var/log/hadoop/hduser/hadoop-hduser-jobtracker-sepdau.com.out
localhost: starting tasktracker, logging to /var/log/hadoop/hduser/hadoop-hduser-tasktracker-sepdau.com.out

But when I check the running processes with jps:

hduser@sepdau:~$ jps
13725 Jps

More details:

root@sepdau:/home/sepdau# netstat -plten | grep java
tcp6       0      0 :::8080                 :::*                    LISTEN      117        9953        1316/java       
tcp6       0      0 :::53976                :::*                    LISTEN      117        16755       1316/java       
tcp6       0      0 127.0.0.1:8700          :::*                    LISTEN      1000       786271      8323/java       
tcp6       0      0 :::59012                :::*                    LISTEN      117        16756       1316/java

When I stop everything:

hduser@sepdau:~$ stop-all.sh
no jobtracker to stop
localhost: no tasktracker to stop
no namenode to stop
localhost: no datanode to stop
localhost: no secondarynamenode to stop

In my hosts file:

hduser@sepdau:~$ cat /etc/hosts

127.0.0.1       localhost
127.0.1.1   sepdau.com

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Slaves file: localhost. Masters file: localhost.
Here are some logs:

hduser@sepdau:/home/sepdau$ start-all.sh
mkdir: cannot create directory `/var/run/hadoop': Permission denied
starting namenode, logging to /var/log/hadoop/hduser/hadoop-hduser-namenode-sepdau.com.out
/usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-namenode.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting datanode, logging to /var/log/hadoop/hduser/hadoop-hduser-datanode-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-datanode.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting secondarynamenode, logging to /var/log/hadoop/hduser/hadoop-hduser-secondarynamenode-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-secondarynamenode.pid: No such file or directory
mkdir: cannot create directory `/var/run/hadoop': Permission denied
starting jobtracker, logging to /var/log/hadoop/hduser/hadoop-hduser-jobtracker-sepdau.com.out
/usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-jobtracker.pid: No such file or directory
localhost: mkdir: cannot create directory `/var/run/hadoop': Permission denied
localhost: starting tasktracker, logging to /var/log/hadoop/hduser/hadoop-hduser-tasktracker-sepdau.com.out
localhost: /usr/sbin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-hduser-tasktracker.pid: No such file or directory

I also tried as the root user, but I get the same problem.
Where am I going wrong here? Also, how can I connect to Eclipse using the Hadoop plugin? Thanks in advance.


fdbelqdn1#

Restart the terminal and format the namenode first.
In some rare cases, someone has changed the start-all.sh file in Hadoop's bin folder; check it.
Also check whether your .bashrc configuration is correct.


z9ju0rcb2#

Try adding

<property>
  <name>dfs.name.dir</name>
   <value>/home/abhinav/hdfs</value>
 </property>

to hdfs-site.xml, and make sure the directory exists.
I have written a small tutorial about this; see if it helps: http://blog.abhinavmathur.net/2013/01/experience-with-setting-multinode.html


hi3rlvi23#

Modify hdfs-site.xml:

<property>
  <name>dfs.name.dir</name>
  <value>/home/user_to_run_hadoop/hdfs/name</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/home/user_to_run_hadoop/hdfs/data</value>
</property>

Make sure to create the directory hdfs under /home/user_to_run_hadoop, then create the two directories name and data inside hdfs. After that, run chmod -R 755 ./hdfs/ followed by path_to_hadoop_home/bin/hadoop namenode -format.
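The steps above can be sketched as a short script. A scratch base directory stands in for /home/user_to_run_hadoop so the commands can be tried as any user; the final namenode -format step is left as a comment since it needs a real Hadoop install:

```shell
# Stand-in for /home/user_to_run_hadoop so this runs without root;
# on the actual machine, use the hadoop user's home directory instead.
BASE="${TMPDIR:-/tmp}/user_to_run_hadoop"

# Create hdfs with its name and data subdirectories
# (the paths dfs.name.dir and dfs.data.dir point at).
mkdir -p "$BASE/hdfs/name" "$BASE/hdfs/data"

# Owner gets full access; everyone else read/execute.
chmod -R 755 "$BASE/hdfs"

# Finally, on the real box:
#   path_to_hadoop_home/bin/hadoop namenode -format
ls -ld "$BASE/hdfs" "$BASE/hdfs/name" "$BASE/hdfs/data"
```

The important part is that the user who runs start-all.sh owns these directories; otherwise the daemons fail at startup just as in the question's logs.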


clj7thdc4#

You can set the paths where the PID files and logs are created by editing the hadoop-env.sh file, which is stored in the conf folder:

export HADOOP_LOG_DIR=/home/username/hadoop-1x/logs

export HADOOP_PID_DIR=/home/username/pids
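This also sidesteps the "cannot create directory /var/run/hadoop: Permission denied" errors in the question's logs, since the PID files move to a path the hadoop user owns. Both directories must exist and be writable before the daemons start. A minimal sketch, using a scratch base in place of /home/username (which is the answer's placeholder, not a real account):

```shell
# Scratch stand-in for /home/username from the exports above; on the real
# machine, create these under the hadoop user's actual home directory.
BASE="${TMPDIR:-/tmp}/username"

# Directories that HADOOP_LOG_DIR and HADOOP_PID_DIR point at.
mkdir -p "$BASE/hadoop-1x/logs" "$BASE/pids"
chmod 755 "$BASE/hadoop-1x/logs" "$BASE/pids"
ls -ld "$BASE/hadoop-1x/logs" "$BASE/pids"
```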
