Conflicting APIs when trying to run an MRUnit example

643ylb08  posted 2021-06-02  in Hadoop

I've been playing around with MRUnit and tried running it on a Hadoop WordCount example, following the WordCount and unit-testing tutorial. Although I'm not a fan of it, I've been using Eclipse to run the code, and I keep getting an error on the setMapper function.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.apache.hadoop.mrunit.mapreduce.MapReduceDriver;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

import org.junit.Before;
import org.junit.Test;

public class TestWordCount {
  MapReduceDriver<LongWritable, Text, Text, IntWritable, Text, IntWritable> mapReduceDriver;
  MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;
  ReduceDriver<Text, IntWritable, Text, IntWritable> reduceDriver;

  @Before
  public void setUp() throws IOException
  {
      WordCountMapper mapper = new WordCountMapper();
      mapDriver = new MapDriver<LongWritable, Text, Text, IntWritable>();
      mapDriver.setMapper(mapper);  //<--Issue here

      WordCountReducer reducer = new WordCountReducer();
      reduceDriver = new ReduceDriver<Text, IntWritable, Text, IntWritable>();
      reduceDriver.setReducer(reducer);

      mapReduceDriver = new MapReduceDriver<LongWritable, Text, Text, IntWritable, Text, IntWritable>();
      mapReduceDriver.setMapper(mapper); //<--Issue here
      mapReduceDriver.setReducer(reducer);
  }
}

Error message:

java.lang.Error: Unresolved compilation problems: 
    The method setMapper(Mapper<LongWritable,Text,Text,IntWritable>) in the type MapDriver<LongWritable,Text,Text,IntWritable> is not applicable for the arguments (WordCountMapper)
    The method setMapper(Mapper<LongWritable,Text,Text,IntWritable>) in the type MapReduceDriver<LongWritable,Text,Text,IntWritable,Text,IntWritable> is not applicable for the arguments (WordCountMapper)

Looking this problem up, I think it might be an API conflict, but I'm not sure where to look for it. Has anyone had this problem before?
EDIT: I'm using a user-defined library with the Hadoop 2 jars and the latest JUnit (4.10) jar.
EDIT 2: Here is the code for WordCountMapper:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> 
{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)throws IOException, InterruptedException 
    {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) 
        {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}

FINAL EDIT / SUCCESS

Turns out I needed

Mapper mapper = new WordCountMapper();

instead of

WordCountMapper mapper = new WordCountMapper();

because there was an issue with generics. I also needed to import the Mockito library into my user-defined library.
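The raw-type declaration works because writing the variable as the raw Mapper drops the generic type arguments, so the compiler no longer checks the key type against the driver's. The effect can be reproduced without Hadoop at all; this is a minimal sketch where Box, ObjectBox, and Holder are hypothetical stand-ins for Mapper, WordCountMapper, and MapDriver, not the real classes:

```java
public class Main {
    // Hypothetical stand-in for Hadoop's generic Mapper (illustration only).
    static class Box<K> {}
    // Plays the role of WordCountMapper extends Mapper<Object, ...>.
    static class ObjectBox extends Box<Object> {}
    // Hypothetical stand-in for MRUnit's MapDriver.
    static class Holder<K> {
        Box<K> held;
        void set(Box<K> b) { held = b; }
    }

    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        Holder<Long> holder = new Holder<>();
        // holder.set(new ObjectBox()); // does NOT compile: Box<Object> is not a Box<Long>
        Box raw = new ObjectBox();      // raw type: generic type arguments dropped
        holder.set(raw);                // now compiles, with an unchecked-conversion warning
        System.out.println(holder.held != null); // → true
    }
}
```

Note that this only silences the compile-time check; fixing the mapper's declared key type (as the accepted answer suggests) is the cleaner solution.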


pkwftd7m 1#

This is your problem:

public class WordCountMapper extends Mapper<Object, Text, Text, IntWritable>
....
MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;

Your WordCountMapper input key type ( Object ) is incompatible with the MapDriver input key type ( LongWritable ). Change your Mapper definition to

class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>

You probably want to change your map method parameter from Object key to LongWritable key as well.
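The reason the original code fails is that Java generics are invariant: a Mapper<Object, ...> is not a subtype of Mapper<LongWritable, ...> even though Object is a supertype of LongWritable, so the driver's declared key type must match exactly. A self-contained sketch of the point, using hypothetical stand-in classes rather than the real Hadoop/MRUnit types:

```java
public class Main {
    // Hypothetical stand-ins for Mapper and MapDriver (illustration only).
    static class Mapper<KEYIN> {}
    static class MapDriver<KEYIN> {
        Mapper<KEYIN> mapper;
        void setMapper(Mapper<KEYIN> m) { mapper = m; }
    }
    // Like the original code: WordCountMapper extends Mapper<Object, ...>.
    static class BadMapper extends Mapper<Object> {}
    // The fix: declare the key type the driver expects.
    static class GoodMapper extends Mapper<Long> {}

    public static void main(String[] args) {
        MapDriver<Long> driver = new MapDriver<>();
        // driver.setMapper(new BadMapper()); // compile error, analogous to the question:
        //   setMapper(Mapper<Long>) is not applicable for the arguments (BadMapper)
        driver.setMapper(new GoodMapper());   // key types match, so this compiles
        System.out.println(driver.mapper != null); // → true
    }
}
```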


elcex8rz 2#

Make sure you have imported the correct classes. I ran into a different situation from the one above: my program had the correct parameters in both the reducer and reduce_test, but I still got the same error message reported above because I had imported the wrong class.

Wrongly imported class --

import org.apache.hadoop.mrunit.ReduceDriver;

Correct class ---

import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;

If you are sure the parameters in mapper_class and mapper_test are the same, the same solution applies to mapper_test.
