I am compiling a Java file for a Hadoop word count job, but compilation fails with this error:
CountBook.java:33: error: <identifier> expected
public void reduce(Text_key,Iterator<IntWritable> values,OutputCollector<text,intWritable> output,Reporter reporter)throws IOException
Here is my code:
public class CountBook
{
    public static class EMapper extends MapReducebase implements
            Mapper<LongWritable,Text,Text,IntWritable>
    {
        private final static Intwritable one = new Intwritable(1);
        public void map(LongWritable key,Text value,OutputCollector<Text,IntWritable> output,Reporter reporter)throws IOException
        {
            String line = value.toString();
            String[] Data = line.split("\";\"");
            output.collect(new text(Data[0]),one);
        }
    }
    public static class EReduce extends MapReduceBase implements
            Reducer<Text,IntWritable,Text,IntWritable>
    {
        public void reduce(Text_key,Iterator<IntWritable> values,OutputCollector<text,intWritable> output,Reporter reporter)throws IOException
        {
            Text key=_key;
            int authid=0;
            while(values.hasNext())
            {
                IntWritable value = (IntWritable)values.next();
                authid+=value.get();
            }
            output.collect(key,new intWritable(authid));
        }
    }
    public static void main(String args[])throws Exception
    {
        JobConf conf = new JbConf(CountBook.class);
        conf.setjobName("CountBookByAuthor");
        conf.setOutputkeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(EMapper.class);
        conf.setCombinerClass(EReduce.class);
        conf.setReducerClass(EReducer.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf,new path(args[0]));
        FileOutputFormat.setOutputPath(conf,new Path(args[1]));
        JobCLient.runJob(conf);
    }
}
I am using hadoop-core-1.2.1.jar as the classpath library and running on CentOS 7.
1 Answer
You currently have:

public void reduce(Text_key,Iterator<IntWritable> values,OutputCollector<text,intWritable> output,Reporter reporter)throws IOException

It should be:

public void reduce(Text _key, Iterator<IntWritable> values, OutputCollector<Text,IntWritable> output, Reporter reporter) throws IOException

The main differences are that _key needs a space between it and its Text type, and the type arguments inside OutputCollector<> need to be capitalized (Text, IntWritable).
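Once that parse error is fixed, javac will get past the syntax phase and will very likely start flagging the other misspelled identifiers in the posted code (MapReducebase, Intwritable, new text(...), new intWritable(...), JbConf, setjobName, setOutputkeyClass, EReducer, new path(...), JobCLient). Below is a minimal corrected sketch against the old org.apache.hadoop.mapred API shipped in hadoop-core-1.2.1, assuming all of those are meant to be the standard Hadoop class and method names; treat it as a starting point rather than a verbatim drop-in.

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextOutputFormat;

public class CountBook
{
    public static class EMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable>
    {
        private static final IntWritable one = new IntWritable(1);

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException
        {
            // As in the posted code, records are assumed to be quoted, semicolon-separated
            // fields; emit a count of 1 keyed by the first field.
            String[] data = value.toString().split("\";\"");
            output.collect(new Text(data[0]), one);
        }
    }

    public static class EReduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable>
    {
        public void reduce(Text _key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException
        {
            // Sum the counts emitted for this key by the mappers (and the combiner).
            int count = 0;
            while (values.hasNext()) {
                count += values.next().get();
            }
            output.collect(_key, new IntWritable(count));
        }
    }

    public static void main(String[] args) throws Exception
    {
        JobConf conf = new JobConf(CountBook.class);
        conf.setJobName("CountBookByAuthor");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(EMapper.class);
        conf.setCombinerClass(EReduce.class);   // the reducer doubles as a combiner here
        conf.setReducerClass(EReduce.class);    // the posted code referenced a non-existent EReducer
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}

Compiled with hadoop-core-1.2.1.jar on the classpath (for example javac -classpath hadoop-core-1.2.1.jar CountBook.java), the resulting classes can then be packaged into a jar and submitted with hadoop jar as usual.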