MapReduce: unable to run the code because of the number of errors

mitkmikd asked on 2021-06-03 in Hadoop
Follow (0) | Answers (2) | Views (389)

Please see the code below.
Map.java

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Mapper: splits each input line into tokens and emits (word, 1) pairs.
public class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
}

Reduce.java

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Reducer: sums the counts emitted for each word.
public class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

WordCount.java

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// Driver: configures and submits the word-count job.
public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");
        job.setJarByClass(WordCount.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}

The whole code is taken from this MapReduce tutorial (http://cloud.dzone.com/articles/how-run-elastic-mapreduce-job). When I copied these classes into Eclipse, it showed many errors such as "cannot be resolved to a type". That seemed reasonable, since the classes this code uses are not found in the default JDK, and the tutorial gives no instructions about downloading any libraries. I ignored the errors, assuming they were related to Elastic MapReduce on the server side.
When I uploaded this to Amazon Elastic MapReduce, created a job flow, and ran the program, it gave me the following error.

Exception in thread "main" java.lang.Error: Unresolved compilation problems:
    Configuration cannot be resolved to a type
    Configuration cannot be resolved to a type
    Job cannot be resolved to a type
    Job cannot be resolved to a type
    Text cannot be resolved to a type
    IntWritable cannot be resolved to a type
    TextInputFormat cannot be resolved to a type
    TextOutputFormat cannot be resolved to a type
    FileInputFormat cannot be resolved
    Path cannot be resolved to a type
    FileOutputFormat cannot be resolved
    Path cannot be resolved to a type

    at WordCount.main(WordCount.java:5)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:187)

How can I make this code work? Do I need to download any libraries? How do I run this code and see the results? This is my first experience with Amazon and Elastic MapReduce, and my first time working with big data.
Please help.


6rvt4ljy1#

Add all the Hadoop JARs to the project in Eclipse. Once your code compiles without errors, you can export it as a JAR and run that JAR in Hadoop.
To add the JARs to the build path, open "Configure Build Path", choose "Add External JARs", select all the Hadoop JARs, and add them.
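
After exporting, the job is typically launched with the hadoop jar command. A minimal sketch, assuming the exported file is named wordcount.jar and the input/output paths are placeholders you would replace with your own:

hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output

If the JAR's manifest already specifies a main class, the WordCount argument can be omitted.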


gk7wooem2#

So you are saying that you did not add any Hadoop JARs to your project, ignored the compilation errors, and hoped the code would run on the server side where the Hadoop client is installed?
If that is the case, it will not work.
You must add hadoop-client.xx.jar to the project; any version will do.
