getCredentials method error

mrfwxfqh · posted 2021-06-03 in Hadoop

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.security.UserGroupInformation.getCredentials()Lorg/apache/hadoop/security/Credentials;
    at org.apache.hadoop.mapreduce.Job.<init>(Job.java:135)
    at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:176)
    at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:195)
    at WordCount.main(WordCount.java:20)
Hadoop version: 2.2.0
WordCount.java

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.out.println("usage: [input] [output]");
            System.exit(-1);
        }

        // The stack trace's WordCount.java:20 points at this call:
        // constructing the Job is what triggers the NoSuchMethodError.
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(WordMapper.class);
        job.setReducerClass(SumReducer.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setJarByClass(WordCount.class);
        job.setJobName("WordCount");

        job.submit();
    }
}

WordMapper.java

import java.io.IOException;    
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class WordMapper extends Mapper<Object, Text, Text, IntWritable> {

    private Text word = new Text();
    private final static IntWritable one = new IntWritable(1);

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Break the line into words and emit (word, 1) for each
        StringTokenizer wordList = new StringTokenizer(value.toString());
        while (wordList.hasMoreTokens()) {
            word.set(wordList.nextToken());
            context.write(word, one);
        }
    }
}

SumReducer.java

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable totalWordCount = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum all the counts emitted for this word
        int wordCount = 0;
        Iterator<IntWritable> it = values.iterator();
        while (it.hasNext()) {
            wordCount += it.next().get();
        }
        totalWordCount.set(wordCount);
        context.write(key, totalWordCount);
    }
}

Please let me know what can be done. The program uses the new MapReduce API, and all of the jars shipped with Hadoop 2.2.0 have also been imported into Eclipse.
Thanks :)


qxgroojn · #1

Are you using the Eclipse plugin for Hadoop? If not, that is the problem. Without the plugin you are just running the WordCount class on its own, and Hadoop cannot find the required jars. Package everything, including WordCount, into a jar and run it on the cluster (see the sketch below).
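A minimal sketch of that workflow, assuming the standard hadoop CLI is on your PATH; the jar name and HDFS paths here are placeholders, adjust them for your setup:

    # compile against the cluster's Hadoop 2.2.0 jars and package the classes
    javac -classpath "$(hadoop classpath)" WordCount.java WordMapper.java SumReducer.java
    jar cvf wordcount.jar WordCount.class WordMapper.class SumReducer.class

    # hadoop jar runs the job with the cluster's own jars on the classpath,
    # which sidesteps classpath mismatches like the one in the error above
    hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output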
If you want to run it from Eclipse, you need the Eclipse plugin. If you do not have it, you can build the plugin by following these instructions.
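One more note, separate from the classpath issue: job.submit() returns as soon as the job is submitted, so the driver never waits for or reports the result. A minimal variant of the last line of WordCount.main that blocks until the job finishes:

    // wait for completion, printing progress to the console;
    // exit non-zero if the job fails
    System.exit(job.waitForCompletion(true) ? 0 : 1);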
