MapReduce Hadoop error

7cjasjjr · asked 2021-06-04 · in Hadoop
Follow (0) | Answers (2) | Views (336)

I am trying to run a MapReduce program, but it fails at runtime.

import java.io.IOException;

import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

    public static class Map extends MapReduceBase implements Mapper<Text, Text,Text, Text> {

        private Text word = new Text();

        public void map(Text key, Text value, OutputCollector<Text, Text> output,
                        Reporter reporter) throws IOException {
            StringTokenizer itr = new StringTokenizer(value.toString(),",");
            while(itr.hasMoreTokens())
            {
                word.set(itr.nextToken());
                output.collect(key, word);
            }

        }

    }

    public class Reduce extends MapReduceBase implements Reducer<Text, Text, Text, Text> {
        private Text results = new Text();

        public void reduce(Text key, Iterator<Text> values,
                           OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
            //    int sum = 0;
            String translation = "";
            while(values.hasNext())
            {
                translation += "|" + values.toString() + "|";
            }

            results.set(translation);
            output.collect(key, results);
        }

    }
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setMapperClass(Map.class);
        //   conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);
        conf.setJarByClass(WordCount.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        conf.setMapOutputKeyClass(Text.class);
        conf.setMapOutputValueClass(Text.class);

        conf.setInputFormat(KeyValueTextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);

    }
}

The error it gives is as follows:

14/03/12 04:34:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/03/12 04:34:56 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/03/12 04:34:56 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/03/12 04:34:56 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/12 04:34:56 INFO mapred.FileInputFormat: Total input paths to process: 2
14/03/12 04:34:56 INFO mapred.JobClient: Running job: job_local_0001
14/03/12 04:34:56 INFO util.ProcessTree: setsid exited with exit code 0
14/03/12 04:34:57 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@53f64158
14/03/12 04:34:57 INFO mapred.MapTask: numReduceTasks: 1
14/03/12 04:34:57 INFO mapred.MapTask: io.sort.mb = 100
14/03/12 04:34:57 INFO mapred.MapTask: data buffer = 79691776/99614720
14/03/12 04:34:57 INFO mapred.MapTask: record buffer = 262144/327680
14/03/12 04:34:57 INFO mapred.MapTask: Starting flush of map output
14/03/12 04:34:57 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
14/03/12 04:34:57 INFO mapred.JobClient:  map 0% reduce 0%
14/03/12 04:34:59 INFO mapred.LocalJobRunner: file:/root/Desktop/wordcount/sample.txt:0+587
14/03/12 04:34:59 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
14/03/12 04:34:59 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@2b5356d5

14/03/12 04:34:59 INFO mapred.MapTask: numReduceTasks: 1
14/03/12 04:34:59 INFO mapred.MapTask: io.sort.mb = 100
14/03/12 04:35:00 INFO mapred.MapTask: data buffer = 79691776/99614720
14/03/12 04:35:00 INFO mapred.MapTask: record buffer = 262144/327680
14/03/12 04:35:00 INFO mapred.MapTask: Starting flush of map output
14/03/12 04:35:00 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
14/03/12 04:35:00 INFO mapred.JobClient:  map 100% reduce 0%
14/03/12 04:35:02 INFO mapred.LocalJobRunner: file:/root/Desktop/wordcount/sample.txt~:0+353
14/03/12 04:35:02 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
14/03/12 04:35:03 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@76a9b9c
14/03/12 04:35:03 INFO mapred.LocalJobRunner: 
14/03/12 04:35:03 INFO mapred.Merger: Merging 2 sorted segments
14/03/12 04:35:03 INFO mapred.Merger: Down to the last merge-pass, with 0 segments left of total size: 0 bytes
14/03/12 04:35:03 INFO mapred.LocalJobRunner: 
14/03/12 04:35:03 WARN mapred.LocalJobRunner: job_local_0001 

java.lang.RuntimeException: java.lang.NoSuchMethodException: WordCount$Reduce.<init>()
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:485)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)
       Caused by: java.lang.NoSuchMethodException: WordCount$Reduce.<init>()
 at java.lang.Class.getConstructor0(Class.java:2723)
at java.lang.Class.getDeclaredConstructor(Class.java:2002)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:109)
... 3 more

14/03/12 04:35:03 INFO mapred.JobClient: Job complete: job_local_0001
14/03/12 04:35:03 INFO mapred.JobClient: Counters: 20
14/03/12 04:35:03 INFO mapred.JobClient:   File Input Format Counters 
14/03/12 04:35:03 INFO mapred.JobClient:     Bytes Read=940
14/03/12 04:35:03 INFO mapred.JobClient:   FileSystemCounters
14/03/12 04:35:03 INFO mapred.JobClient:     FILE_BYTES_READ=2243
14/03/12 04:35:03 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=64560
14/03/12 04:35:03 INFO mapred.JobClient:   Map-Reduce Framework
14/03/12 04:35:03 INFO mapred.JobClient:     Map output materialized bytes=12
14/03/12 04:35:03 INFO mapred.JobClient:     Map input records=36
14/03/12 04:35:03 INFO mapred.JobClient:     Reduce shuffle bytes=0
14/03/12 04:35:03 INFO mapred.JobClient:     Spilled Records=0
14/03/12 04:35:03 INFO mapred.JobClient:     Map output bytes=0
14/03/12 04:35:03 INFO mapred.JobClient:     Total committed heap usage (bytes)=603389952
14/03/12 04:35:03 INFO mapred.JobClient:     CPU time spent (ms)=0
14/03/12 04:35:03 INFO mapred.JobClient:     Map input bytes=940
14/03/12 04:35:03 INFO mapred.JobClient:     SPLIT_RAW_BYTES=185
14/03/12 04:35:03 INFO mapred.JobClient:     Combine input records=0
14/03/12 04:35:03 INFO mapred.JobClient:     Reduce input records=0
14/03/12 04:35:03 INFO mapred.JobClient:     Reduce input groups=0
14/03/12 04:35:03 INFO mapred.JobClient:     Combine output records=0
14/03/12 04:35:03 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
14/03/12 04:35:03 INFO mapred.JobClient:     Reduce output records=0
14/03/12 04:35:03 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
14/03/12 04:35:03 INFO mapred.JobClient:     Map output records=0
14/03/12 04:35:03 INFO mapred.JobClient: Job Failed: NA
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
at WordCount.main(WordCount.java:68)

It looks like the map has no problem, but the reducer does. Can anyone figure out the cause of the error?

ttp71kqs1#

Change `public class Reduce` to a static inner class: `public static class Reduce`.

pvabu6sv2#

The class `Reduce` should be declared static:

public static class Reduce extends ...

That should work.
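For reference, the root cause can be reproduced with plain Java, no Hadoop needed. Hadoop's `ReflectionUtils.newInstance` looks up a no-argument constructor via reflection, and a non-static inner class does not have one: its constructor takes the enclosing instance as a hidden first parameter. A minimal sketch (the class names below are illustrative, not from the original code):

```java
// Demonstration of why the job fails with NoSuchMethodException:
// a non-static inner class has no true no-arg constructor, so
// reflection-based instantiation (as Hadoop does) cannot create it.
public class InnerClassReflection {
    class NonStaticReduce {}       // like the original non-static Reduce
    static class StaticReduce {}   // like the fixed static Reduce

    public static void main(String[] args) throws Exception {
        // The static nested class has a genuine no-arg constructor:
        StaticReduce ok = StaticReduce.class.getDeclaredConstructor().newInstance();
        System.out.println("static nested class instantiated: " + (ok != null));

        // The non-static inner class's only constructor is
        // NonStaticReduce(InnerClassReflection), so asking for a no-arg
        // constructor throws NoSuchMethodException -- the same root cause
        // as the WordCount$Reduce.<init>() error in the log:
        try {
            NonStaticReduce.class.getDeclaredConstructor().newInstance();
        } catch (NoSuchMethodException e) {
            System.out.println("NoSuchMethodException: " + e.getMessage());
        }
    }
}
```

Declaring the nested class `static` removes the hidden outer-instance parameter, which is why the fix suggested in both answers works.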
