Can't run the Hadoop WordCount example?

zqry0prt · posted 2021-06-03 in Hadoop
Follow (0) | Answers (4) | Views (353)

I am running the Hadoop wordcount example in a single-node environment on Ubuntu 12.04 in VMware. I run the example like this:

hadoop@master:~/hadoop$ hadoop jar hadoop-examples-1.0.4.jar wordcount /home/hadoop/gutenberg/ /home/hadoop/gutenberg-output

I have the input files at:

/home/hadoop/gutenberg

and the output location is:

/home/hadoop/gutenberg-output

When I run the wordcount program, I get the following errors:

13/04/18 06:02:10 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost:54310/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201304180554_0001
13/04/18 06:02:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /home/hadoop/gutenberg-output already exists
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /home/hadoop/gutenberg-output already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:887)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

hadoop@master:~/hadoop$ bin/stop-all.sh
Warning: $HADOOP_HOME is deprecated.
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode
hadoop@master:~/hadoop$

46scxncf1#

As Dave (and the exception) says, your output directory already exists. You either need to output to a different directory or remove the existing one first, using:

hadoop fs -rmr /home/hadoop/gutenberg-output
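If you rerun the job often, you can also clear a stale output directory from the driver before submitting, using Hadoop's FileSystem API. This is only a minimal sketch, not part of the original answer: the class name CleanOutputDir is illustrative, and the path is simply the one from the question.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanOutputDir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Output path from the question; replace it with your own.
        Path output = new Path("/home/hadoop/gutenberg-output");

        // Delete the directory recursively if a previous run left it behind,
        // so FileOutputFormat's existence check does not abort the next job.
        if (fs.exists(output)) {
            fs.delete(output, true);
        }
    }
}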

ippsafx72#

If you have created your own .jar and are trying to run it, take note:
To run your job, you have to write something like this:

hadoop jar <jar-path> <package-path> <input-in-hdfs-path> <output-in-hdfs-path>

But if you look closely at your driver code, you will see that you have set arg[0] as your input and arg[1] as your output... I'll show you:

FileInputFormat.addInputPath(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));

However, Hadoop is taking arg[0] as <package-path> instead of <input-in-hdfs-path>, and arg[1] as <input-in-hdfs-path> instead of <output-in-hdfs-path>. So, to make it work, you should use:

FileInputFormat.addInputPath(conf, new Path(args[1]));
FileOutputFormat.setOutputPath(conf, new Path(args[2]));

that is, arg[1] and arg[2], so it picks up the right things! :) Hope that helps. Cheers.
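For completeness, here is a minimal, self-contained driver of the kind being discussed, written against the old mapred API that the snippets above imply. The class name MyWordCount is only illustrative, and the argument indices follow the usual args[0]/args[1] convention; shift them as suggested above if your invocation also passes a package/class name.

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class MyWordCount {

    // Emits (word, 1) for every token in the input line.
    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, ONE);
            }
        }
    }

    // Sums the counts for each word.
    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(MyWordCount.class);
        conf.setJobName("mywordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        // Paths taken straight from the command line; adjust the indices as
        // described above if your invocation also passes a package/class name.
        FileInputFormat.addInputPath(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}

Package it into a jar and run it with the template shown above: hadoop jar <jar-path> MyWordCount <input-in-hdfs-path> <output-in-hdfs-path>.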


kxkpmulp3#

Check whether there is a "tmp" folder:

hadoop fs -ls /

If you see the output folder or a "tmp" folder, delete both (assuming no job is currently running):

hadoop fs -rmr /tmp
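If you prefer to inspect HDFS from code rather than the shell, the following sketch does roughly what hadoop fs -ls / does, so leftover output or "tmp" directories are easy to spot (the class name ListRoot is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListRoot {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // Roughly equivalent to `hadoop fs -ls /`: print what sits under the
        // HDFS root so stale output or staging directories stand out.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
    }
}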


xuo3flqw4#

Delete the output that already exists, or output to a different location.
(I am somewhat curious what other interpretations of the error message you considered.)
