Hadoop MapReduce: nested loop over Iterable<Text> values in reduce ignores the Text result when writing to the context

hwazgwia · posted 2021-05-29 · in Hadoop

I'm new to Hadoop, and I'm trying to run a MapReduce job on a simple input file (see the example below). I use two nested for loops to build a kind of cartesian product of the list of values per key, but for some reason the value I get in the result is always empty. I experimented with it, and it only worked when I set the result Text inside the loop while iterating (which I know sounds strange too). I'd appreciate help understanding what I'm doing wrong.
This is my input file:

A 1
B 2
C 1
D 2
C 2
E 1

I expect the following output:

1 A-C, A-E, C-E
2 B-C, B-D, C-D
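For reference, the pairing logic alone, stripped of all Hadoop machinery, can be sketched like this (PairBuilder and pairs are hypothetical names used only for illustration):

```java
import java.util.List;

public class PairBuilder {
    // Build every unordered pair "X-Y" from the list, joined by ", ".
    static String pairs(List<String> letters) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < letters.size(); i++) {
            for (int j = i + 1; j < letters.size(); j++) {
                if (sb.length() > 0) sb.append(", ");
                sb.append(letters.get(i)).append('-').append(letters.get(j));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(pairs(List.of("A", "C", "E"))); // A-C, A-E, C-E
        System.out.println(pairs(List.of("B", "C", "D"))); // B-C, B-D, C-D
    }
}
```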

So I implemented the following MapReduce classes:

public class DigitToPairOfLetters {

public static class TokenizerMapper
            extends Mapper<Object, Text, Text, Text> {

        private Text digit = new Text();
        private Text letter = new Text();

        public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                letter.set(itr.nextToken());
                digit.set(itr.nextToken());
                context.write(digit, letter);
            }
        }
    }

    public static class DigitToLetterReducer
            extends Reducer<Text, Text, Text, Text> {
        private Text result = new Text();

        public void reduce(Text key, Iterable<Text> values,
                Context context
                ) throws IOException, InterruptedException {
            List<String> valuesList = new ArrayList<>();
            for (Text value : values) {
                valuesList.add(value.toString());
            }
            StringBuilder builder = new StringBuilder();
            for (int i=0; i<valuesList.size(); i++) {
                for (int j=i+1; j<valuesList.size(); j++) {
                    builder.append(valuesList.get(i)).append(" ").append(valuesList.get(j)).append(",");
                }
            }
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "digit to letter");
        job.setJarByClass(DigitToPairOfLetters.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(DigitToLetterReducer.class);
        job.setReducerClass(DigitToLetterReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

But this code gives me the following output, with empty values:

1
2

When I add a set on result inside the for loop, it seems to work:

public class DigitToPairOfLetters {

public static class TokenizerMapper
            extends Mapper<Object, Text, Text, Text> {

        private Text digit = new Text();
        private Text letter = new Text();

        public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                letter.set(itr.nextToken());
                digit.set(itr.nextToken());
                context.write(digit, letter);
            }
        }
    }

    public static class DigitToLetterReducer
            extends Reducer<Text, Text, Text, Text> {
        private Text result = new Text();

        public void reduce(Text key, Iterable<Text> values,
                Context context
                ) throws IOException, InterruptedException {
            List<String> valuesList = new ArrayList<>();
            for (Text value : values) {
                valuesList.add(value.toString());
                // TODO: We set the valuesList in the result since otherwise the
                // hadoop process will ignore the values in it.
                result.set(valuesList.toString());
            }
            StringBuilder builder = new StringBuilder();
            for (int i=0; i<valuesList.size(); i++) {
                for (int j=i+1; j<valuesList.size(); j++) {
                    builder.append(valuesList.get(i)).append(" ").append(valuesList.get(j)).append(",");
                    // TODO: We set the builder every iteration in the loop since otherwise the hadoop process will
                    // ignore the values
                    result.set(builder.toString());
                }
            }
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "digit to letter");
        job.setJarByClass(DigitToPairOfLetters.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(DigitToLetterReducer.class);
        job.setReducerClass(DigitToLetterReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

That gives me the following result:

1   [A C,A E,C E]
2   [B C,B D,C D]

I'd appreciate your help.

yqkkidmi · answer #1

Your first approach looks fine; you only need to add the line:

result.set(builder.toString());

before

context.write(key, result);

just as you did in your second version.
context.write emits the key/value pair, and since result is still an empty Text object, nothing is passed as the value, so only the key appears in the output. You therefore need to set the value (A-E, etc.) into result before writing it.
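The fix can be checked outside Hadoop as well. This is a minimal sketch that mirrors the reducer's nested loops (buildValue is a hypothetical helper name, not part of your code); the returned string is exactly what result must hold before context.write:

```java
import java.util.List;

public class ReducerFix {
    // Mirrors the reducer's nested loops: append every unordered pair,
    // separated by a space, with a trailing comma after each pair.
    static String buildValue(List<String> valuesList) {
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < valuesList.size(); i++) {
            for (int j = i + 1; j < valuesList.size(); j++) {
                builder.append(valuesList.get(i)).append(" ")
                       .append(valuesList.get(j)).append(",");
            }
        }
        // The missing step in the first version: this string must be copied
        // into the output value (result.set(builder.toString())) before
        // context.write(key, result) is called.
        return builder.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildValue(List.of("A", "C", "E"))); // A C,A E,C E,
    }
}
```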
