I want my first reduce task to produce something like <sum, count> per course; in a second reduce task I would then compute sum/count for each course. The first reducer acts as combiner, summer, and counter; the second reducer finds the average per course and outputs it. I just can't figure out the best type to store the output value as a pair so that I can later retrieve both parts and compute with them. A HashMap doesn't work.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.Mapper.Context;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
public class AvgGrading {

    public static void main(String[] args) throws IllegalArgumentException, IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "avg grading");
        job.setJarByClass(AvgGrading.class);
        job.setMapperClass(MapForAverage.class);
        job.setCombinerClass(ReduceForAverage.class);
        job.setNumReduceTasks(2);
        job.setReducerClass(ReduceForFinal.class);
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(Object.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(FloatWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    public static class MapForAverage extends Mapper<LongWritable, Text, LongWritable, Object> {
        public void map(LongWritable key, Text value, Context con) throws IOException, InterruptedException {
            String[] word = value.toString().split(", ");
            float grade = Integer.parseInt(word[1]);
            int course = Integer.parseInt(word[0]);
            Map<Float, Long> m = new HashMap<Float, Long>();
            m.put(grade, (long) 1);
            con.write(new LongWritable(course), m);
        }
    }

    public static class ReduceForAverage extends Reducer<LongWritable, Object, LongWritable, Object> {
        private FloatWritable result = new FloatWritable();

        public void reduce(LongWritable course, Map<Float, Long> values, Context con)
                throws IOException, InterruptedException {
            Map<Float, Long> m = new HashMap<Float, Long>();
            float sum = 0;
            long count = 0;
            for (Map.Entry<Float, Long> entry : values.entrySet()) {
                sum += entry.getKey();
                count++;
            }
            m.put(sum, count);
            con.write(course, m);
        }
    }

    public static class ReduceForFinal extends Reducer<LongWritable, Object, LongWritable, FloatWritable> {
        private FloatWritable result = new FloatWritable();

        public void reduce(LongWritable course, Map<Long, Float> values, Context con)
                throws IOException, InterruptedException {
            long key = 0;
            float value = 0;
            for (Map.Entry<Long, Float> entry : values.entrySet()) {
                key = entry.getKey();
                value = entry.getValue();
            }
            float res = key / value;
            con.write(course, new FloatWritable(res));
        }
    }
}
Note that I cannot iterate over an Iterable<Map<Float, Int>>, so in the reduce tasks I am receiving a plain Map instead, which is probably incorrect.
The error output is:

Unable to initialize MapOutputCollector org.apache.hadoop.mapred.MapTask$MapOutputBuffer
java.lang.NullPointerException
(failure in the second reducer)
1 Answer
A Map does not implement Writable. You declared the input value class of your combiner and reducer as Object, yet you emit a Map. You just need to create a custom class for this. Remember: anything you want to emit in Hadoop must be a class that implements Writable. You can do it like this:
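The answer's original snippet was not preserved on this page; a minimal sketch of such a custom Writable follows. The class name SumCountWritable and its field names are my assumption, not from the original answer:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical value class: a running sum of grades plus how many grades it covers.
public class SumCountWritable implements Writable {
    private float sum;
    private long count;

    // Hadoop creates Writables by reflection during deserialization,
    // so a public no-arg constructor is required.
    public SumCountWritable() {}

    public SumCountWritable(float sum, long count) {
        this.sum = sum;
        this.count = count;
    }

    public float getSum()  { return sum; }
    public long getCount() { return count; }

    @Override
    public void write(DataOutput out) throws IOException {
        // Serialize the fields in a fixed order...
        out.writeFloat(sum);
        out.writeLong(count);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // ...and deserialize them in exactly the same order.
        sum = in.readFloat();
        count = in.readLong();
    }
}
```

The only contract is that readFields reads the fields in the same order write wrote them; if the two methods disagree, the shuffle will hand the reducer garbage.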
So, in your first mapper, you can create and emit the counters like this:
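Again, the original snippet is missing here; assuming the hypothetical SumCountWritable sketched above and the `course, grade` line format inferred from the question's code, the mapper could look like:

```java
public static class MapForAverage
        extends Mapper<LongWritable, Text, LongWritable, SumCountWritable> {

    @Override
    public void map(LongWritable key, Text value, Context con)
            throws IOException, InterruptedException {
        // Input line format assumed from the question: "course, grade"
        String[] word = value.toString().split(", ");
        int course  = Integer.parseInt(word[0]);
        float grade = Float.parseFloat(word[1]);

        // Emit <course, (grade, 1)>: one grade, counted once.
        con.write(new LongWritable(course), new SumCountWritable(grade, 1));
    }
}
```

In the driver, job.setMapOutputValueClass would then be set to SumCountWritable.class instead of Object.class.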
At this point, in your first reducer you will have a key representing the course and an Iterable over all the counters emitted for it; with that Iterable you can compute the average. Remember to update the Mapper and Reducer class type parameters so they are consistent with the new types.
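A corresponding reducer, still a sketch under the same assumptions, would fold the Iterable of counters into a total sum and count and emit the average:

```java
public static class ReduceForAverage
        extends Reducer<LongWritable, SumCountWritable, LongWritable, FloatWritable> {

    @Override
    public void reduce(LongWritable course, Iterable<SumCountWritable> values, Context con)
            throws IOException, InterruptedException {
        float sum = 0;
        long count = 0;
        // Hadoop hands the reducer an Iterable of values per key, not a Map.
        for (SumCountWritable v : values) {
            sum   += v.getSum();
            count += v.getCount();
        }
        con.write(course, new FloatWritable(sum / count));
    }
}
```

One caveat: if the same class is also registered with job.setCombinerClass, a combiner's output value type must match the mapper's output value type, so a combiner variant would have to emit a merged SumCountWritable rather than the final FloatWritable average.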