Exception in thread "main" java.io.IOException: Cannot initialize Cluster

3hvapo4f · published 2021-05-29 in Hadoop

I am trying to run a simple Hadoop MapReduce program in Eclipse on Windows, and I get the following exception.

  Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
  	at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
  	at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
  	at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:76)
  	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1188)
  	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1184)
  	at java.security.AccessController.doPrivileged(Native Method)
  	at javax.security.auth.Subject.doAs(Unknown Source)
  	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
  	at org.apache.hadoop.mapreduce.Job.connect(Job.java:1183)
  	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1212)
  	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1236)
  	at com.hadoop.mapreduce.WordCountDriverClass.main(WordCountDriverClass.java:41)

These are the jar files I have added to the project.

  com.google.guava_1.6.0.jar
  commons-configuration-1.7.jar
  commons-lang-2.6.jar
  commons-logging-1.1.3.jar
  commons.collections-3.2.1.jar
  guava-13.0.1.jar
  hadoop-annotations-2.7.2.jar
  hadoop-auth-2.6.0.jar
  hadoop-common-2.3.0.jar
  hadoop-common.jar
  hadoop-mapreduce-client-core-2.0.2-alpha.jar
  hadoop-mapreduce-client-core-2.7.2.jar
  hadoop-mapreduce-client-jobclient-2.2.0.jar
  hadoop-test-1.2.1.jar
  log4j-1.2.17.jar
  slf4j-api-1.7.7.jar
  slf4j-simple-1.6.1.jar

I added these jar files one by one after checking the exception messages in the console, but I still don't understand this exception. Can anyone help me fix it?
This is my driver class.

  Configuration conf = new Configuration();
  // Creating a job
  Job job = Job.getInstance(conf, "WordCountDriverClass");
  job.setJarByClass(WordCountDriverClass.class);
  job.setMapperClass(WordCountMapper.class);
  job.setReducerClass(WordCountReducer.class);
  job.setNumReduceTasks(2);
  job.setInputFormatClass(KeyValueTextInputFormat.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);
  FileInputFormat.addInputPath(job, new Path("inputfiles"));
  FileOutputFormat.setOutputPath(job, new Path("outputfiles"));
  job.waitForCompletion(true);

lvjbypge1#

It looks like you are running the WordCount example, which needs hadoop-core 1.2.1 and hadoop-common 2.2.0. If you switch to Maven instead, the configuration should look like:

  <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <version>1.2.1</version>
  </dependency>
  <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.2.0</version>
  </dependency>
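Independent of the exact jar versions, this "Cannot initialize Cluster" message usually means the MapReduce client cannot work out which framework to submit the job to (the `mapreduce.framework.name` property named in the exception). For a purely local run inside Eclipse, one commonly suggested workaround, offered here as a sketch rather than part of the original answer, is to point that property at the local job runner, either in a `mapred-site.xml` on the classpath or via `conf.set("mapreduce.framework.name", "local")` in the driver:

```xml
<!-- mapred-site.xml: assumes a purely local run with no cluster or YARN -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>local</value>
  </property>
</configuration>
```

With the local runner selected, the job executes in the Eclipse JVM against the local filesystem, so no cluster address lookup is attempted.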
