KeyValueTextInputFormat.class exception in Hadoop

6fe3ivhb asked on 2021-06-02 in Hadoop

I am using Hadoop in Eclipse via the Maven plugin and am trying to run chained MapReduce jobs. In the second job I use KeyValueTextInputFormat.class as the input format instead of TextInputFormat.class, because I want the mapper's input key to be Text rather than LongWritable. When I do this I get the exception below, and I have already tried every solution I could find on Stack Overflow.

Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.JobContext, but interface was expected
    at org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat.isSplitable(KeyValueTextInputFormat.java:52)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:246)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
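
For context, with KeyValueTextInputFormat the mapper's input key and value are both Text (each input line is split at its first tab). A minimal sketch, assuming the map output types configured in the driver code below, of what the second mapper's declaration would look like; the body is placeholder logic, not the actual Mapper2:

    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Sketch only: assumes WordDocumentWritable has a no-arg constructor,
    // as every Hadoop Writable must.
    public class Mapper2 extends Mapper<Text, Text, WordDocumentWritable, Text> {
        @Override
        protected void map(Text key, Text value, Context context)
                throws IOException, InterruptedException {
            // key   = text before the first tab of the input line
            // value = text after the first tab
            // Placeholder body: emit the value under a new composite key.
            context.write(new WordDocumentWritable(), value);
        }
    }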

Driver class:

Configuration conf = new Configuration(true);

    // Create job1
    Job job1 = new Job(conf, "job1");
    job1.setJarByClass(Mapper1.class);
    job1.setMapperClass(Mapper1.class);
    job1.setMapOutputKeyClass(ByteWritable.class);
    job1.setMapOutputValueClass(RowNumberWritable.class);
    job1.setReducerClass(Reducer1.class);
    job1.setNumReduceTasks(1);
    job1.setOutputKeyClass(Text.class);
    job1.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job1, inputPath);
    job1.setInputFormatClass(TextInputFormat.class);
    Path path1 = new Path(out1);
    FileOutputFormat.setOutputPath(job1, path1);
    job1.setOutputFormatClass(TextOutputFormat.class);
    // Delete output if exists
    FileSystem hdfs = FileSystem.get(conf);
    if (hdfs.exists(countedPath))
        hdfs.delete(countedPath, true);

    // Execute the job1
    int code = job1.waitForCompletion(true) ? 0 : 1;

    // Create job2
    Job job2 = new Job(conf, "WordPerDocument");
    job2.setJarByClass(Mapper2.class);
    job2.setMapperClass(Mapper2.class);
    job2.setMapOutputKeyClass(WordDocumentWritable.class);
    job2.setMapOutputValueClass(Text.class);
    job2.setReducerClass(Reducer2.class);
    job2.setOutputKeyClass(Text.class);
    job2.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job2, path1);
    job2.setInputFormatClass(KeyValueTextInputFormat.class);
    Path path2 = new Path(ou2);
    FileOutputFormat.setOutputPath(job2, path2);
    job2.setOutputFormatClass(TextOutputFormat.class);

    code = job2.waitForCompletion(true) ? 0 : 1;
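
As an aside, the `new Job(conf, ...)` constructor used above is deprecated in the Hadoop 2.x mapreduce API; a minimal sketch, assuming Hadoop 2.x, of building job2 with the `Job.getInstance` factory instead (names reused from the snippet above):

    // Sketch: Job.getInstance replaces the deprecated new Job(conf, ...)
    // constructor; the rest of the configuration is unchanged.
    Job job2 = Job.getInstance(conf, "WordPerDocument");
    job2.setJarByClass(Mapper2.class);
    job2.setInputFormatClass(KeyValueTextInputFormat.class);
    // ... remaining setters as in the snippet above ...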

My imports are:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.ByteWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

Please, if anyone has any idea about this crazy exception :( Thanks, everyone.


Answer 1 (ffx8fchx):

I suspect the error is caused by this line:

FileOutputFormat.setOutputPath(job2, path2);
`job2` is a `Job` and does not belong there... you should use `TextOutputFormat.class` or something similar.
