MapReduce on Hadoop says "output file already exists"

4dc9hkyq · asked 2021-06-02 · in Hadoop
Answers (3) | Views (495)

I ran a WordCount example with MapReduce for the first time and it worked. Then I stopped the cluster, restarted it a while later, and followed the same procedure.
This error is shown:

10P:/$  hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/test/tester /user/output
15/08/05 00:16:04 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/08/05 00:16:04 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://localhost:54310/user/output already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:562)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
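(Editor's note: the exception comes from FileOutputFormat.checkOutputSpecs, visible at the top of the stack trace. MapReduce deliberately refuses to submit a job whose output directory already exists, so results from a previous run are never silently overwritten. The check behaves roughly like the sketch below; the class name is hypothetical, and plain java.io.File stands in for the HDFS FileSystem the real code uses.)

```java
import java.io.File;
import java.io.IOException;

// Sketch of the fail-fast check that FileOutputFormat.checkOutputSpecs
// performs (hypothetical class name; java.io.File stands in for HDFS):
// if the output directory already exists, refuse to submit the job.
public class OutputSpecCheckSketch {
    public static void checkOutputSpec(File outputDir) throws IOException {
        if (outputDir.exists()) {
            throw new IOException(
                "Output directory " + outputDir + " already exists");
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File("job-output");
        dir.mkdirs();                 // simulate leftovers from a previous run
        try {
            checkOutputSpec(dir);
            System.out.println("ok to submit");
        } catch (IOException e) {
            System.out.println("refused: " + e.getMessage());
            // prints: refused: Output directory job-output already exists
        } finally {
            dir.delete();
        }
    }
}
```

This is why the second run fails even though nothing else changed: the first run created /user/output, and the submission check finds it.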
icnyk63a1#

Just write your driver code like this:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class TestDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration conf = getConf();
        Job j = Job.getInstance(conf);
        j.setJarByClass(TestDriver.class);
        j.setMapOutputKeyClass(CustKey.class);
        j.setMapOutputValueClass(Text.class);
        j.setReducerClass(JoinReducer.class);
        j.setOutputKeyClass(CustKey.class);
        j.setOutputValueClass(Text.class);
        // Each input path gets its own InputFormat and Mapper via MultipleInputs.
        MultipleInputs.addInputPath(j, new Path(args[0]), CustInputFormat.class, CustMapper.class);
        MultipleInputs.addInputPath(j, new Path(args[1]), ShopIpFormat.class, TxnMapper.class);
        j.setOutputFormatClass(CustTxOutFormat.class);

        // FOCUS ON THE LINES BELOW
        Path op = new Path(args[2]);
        FileOutputFormat.setOutputPath(j, op);
        // Delete the output directory (if it exists) BEFORE submitting the job,
        // so a leftover directory from a previous run cannot fail the submission.
        op.getFileSystem(conf).delete(op, true);

        return j.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new TestDriver(), args);
        System.exit(res);
    }
}
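One caveat with auto-deleting the output path: if the path argument is mistyped, a recursive delete will remove whatever the argument points at. A defensive variant (hypothetical helper; plain java.io.File is used here for illustration, while on HDFS the same guard would wrap FileSystem.delete) refuses to delete anything outside an agreed output area:

```java
import java.io.File;

// Hypothetical guard: only auto-delete paths under an agreed output prefix,
// so a mistyped command-line argument cannot wipe unrelated data.
public class SafeOutputDelete {

    /** Returns true if the path was deleted, false if deletion was refused. */
    public static boolean safeDelete(File path, String expectedPrefix) {
        if (!path.getPath().startsWith(expectedPrefix)) {
            return false;              // outside the allowed area: leave it alone
        }
        deleteRecursively(path);
        return true;
    }

    private static void deleteRecursively(File f) {
        File[] children = f.listFiles();   // null for plain files
        if (children != null) {
            for (File c : children) {
                deleteRecursively(c);
            }
        }
        f.delete();
    }

    public static void main(String[] args) {
        // A path outside the expected output area is refused, not deleted:
        System.out.println(safeDelete(new File("/etc"), "/user/output"));
        // prints: false
    }
}
```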

Hope this clears things up.
Thanks :)

yk9xbfzb2#

Add the following snippet to your driver class before submitting the job.

// Delete the output directory if it already exists
FileSystem hdfs = FileSystem.get(conf);
if (hdfs.exists(outputDir))
    hdfs.delete(outputDir, true);

// Execute the job
int code = job.waitForCompletion(true) ? 0 : 1;
System.exit(code);

sr4lhrrt3#

hdfs://localhost:54310/user/output
Delete the output directory before running the job, i.e. execute the following command:

hadoop fs -rm -r /user/output

