TotalOrderPartitioner throws "wrong key class" error

dwthyt8l asked on 2021-06-02 in Hadoop

I am trying out Hadoop's TotalOrderPartitioner. While doing so, I am getting the following error, which states "wrong key class".
Driver code:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

public class WordCountJobTotalSort {

    public static void main (String args[]) throws Exception
    {
        if (args.length < 2 ) 
        {
            System.out.println("Plz provide I/p and O/p directory ");
            System.exit(-1);
        }

        Job job = new Job();

        job.setJarByClass(WordCountJobTotalSort.class);
        job.setJobName("WordCountJobTotalSort");            
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setMapperClass(WordMapper.class);
        job.setPartitionerClass(TotalOrderPartitioner.class);
        job.setReducerClass(WordReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(2);

        TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), new Path("/tmp/partition.lst"));

        InputSampler.writePartitionFile(job, new InputSampler.RandomSampler<IntWritable, Text>(1,2,2));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Mapper code:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper <LongWritable,Text,Text, IntWritable >  
{

    public void map(IntWritable mkey, Text value,Context context)
            throws IOException, InterruptedException {

        String s = value.toString();

        for (String word : s.split(" "))
        {
            if (word.length() > 0 ){
                context.write(new Text(word), new IntWritable(1));

            }
        }
    }
}

Reducer code:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordReducer extends  Reducer <Text, IntWritable, Text, IntWritable> {

    public void reduce(Text rkey, Iterable<IntWritable> values ,Context context )
            throws IOException, InterruptedException {

        int count=0;

        for (IntWritable value : values){

            count = count + value.get();
        }

        context.write(rkey, new IntWritable(count));    
    }
}

Error:

[cloudera@localhost workspace]$ hadoop jar WordCountJobTotalSort.jar WordCountJobTotalSort file_seq/part-m-00000 file_out
15/05/18 00:45:13 INFO input.FileInputFormat: Total input paths to process : 1
15/05/18 00:45:13 INFO partition.InputSampler: Using 2 samples
15/05/18 00:45:13 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
15/05/18 00:45:13 INFO compress.CodecPool: Got brand-new compressor [.deflate]
Exception in thread "main" java.io.IOException: wrong key class: org.apache.hadoop.io.LongWritable is not class org.apache.hadoop.io.Text
    at org.apache.hadoop.io.SequenceFile$RecordCompressWriter.append(SequenceFile.java:1340)
    at org.apache.hadoop.mapreduce.lib.partition.InputSampler.writePartitionFile(InputSampler.java:336)
    at WordCountJobTotalSort.main(WordCountJobTotalSort.java:47)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:208)

Input file:

[cloudera@localhost workspace]$ hadoop fs -text file_seq/part-m-00000
0    hello hello
12   how
20   is
26   your
36   job

sxpgvts3 #1

Comment out these two lines and run the Hadoop job:

job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);

If that does not work, then after commenting out those two lines you must also set the input and output format classes:

job.setInputFormatClass(SequenceFileInputFormat.class);
job.setOutputFormatClass(SequenceFileOutputFormat.class);
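
For context, a minimal sketch of how these lines slot into the question's driver; note that SequenceFileOutputFormat needs an import the original driver does not have:

import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

// inside main(), alongside the existing job setup:
job.setInputFormatClass(SequenceFileInputFormat.class);    // read a SequenceFile
job.setOutputFormatClass(SequenceFileOutputFormat.class);  // write a SequenceFile too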
nkkqxpd9 #2

In my case, I got the same "wrong key class" error because I was using a combiner with a custom Writable. When I commented out the combiner, it worked fine.
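
To illustrate the constraint this answer points at: a combiner runs as a mini-reducer between map and reduce, so its output key/value classes must match the mapper's output classes. A minimal sketch (the WordCombiner name is hypothetical, not from the question):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Emits the same (Text, IntWritable) pair types that WordMapper emits;
// emitting a custom Writable here instead is what triggers the
// "wrong key class" / "wrong value class" check at runtime.
public class WordCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

// Registered in the driver with:
// job.setCombinerClass(WordCombiner.class);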

8nuwlpux #3

The InputSampler does its sampling during the map phase (before shuffle and reduce), and the sampling is performed on the mapper's input keys. We need to make sure the mapper's input and output key classes are the same; otherwise the MR framework cannot find an appropriate bucket in the sampled key space to place the output key-value pairs into.
In this case the input key is LongWritable, so the InputSampler builds the partition file from a subset of all LongWritable keys. But the output key is Text, so the MR framework cannot find an appropriate bucket within those partitions.
We can work around this by introducing a preparation stage.
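
As a sketch of such a preparation stage (the PrepMapper name and wiring are illustrative, not from the answer): a first job rewrites the raw input into a SequenceFile keyed by Text, so that the total-sort job's mapper has the same input and output key class, and InputSampler samples Text keys.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Preparation job's mapper: turns (LongWritable offset, Text line) into
// (Text word, IntWritable 1). Written out through SequenceFileOutputFormat,
// the result is a SequenceFile keyed by Text. The follow-up total-sort job
// reads it with SequenceFileInputFormat and an identity mapper
// (job.setMapperClass(Mapper.class)), so its map input and output key
// classes are both Text, and InputSampler.writePartitionFile succeeds.
public class PrepMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String word : value.toString().split(" ")) {
            if (!word.isEmpty()) {
                context.write(new Text(word), new IntWritable(1));
            }
        }
    }
}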
