IOException: Cannot initialize Cluster | Hadoop 2.4.0

vd2z7a6w · published 2021-05-30 in Hadoop
Follow (0) | Answers (1) | Views (328)

I am trying to run a MapReduce job on Hadoop 2.4.0. My code depends on some third-party JARs, so I created a fat JAR using Eclipse's Export -> Runnable JAR option.
Now when I run it with

    hadoop jar ~/Documents/job.jar

I get this exception:

    java.lang.reflect.InvocationTargetException

The above exception is caused by:

    Caused by: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
        at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
        at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
        at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
        at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
        at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
        at imgProc.MasterClass.main(MasterClass.java:84)
        ... 10 more
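This "Cannot initialize Cluster" error generally means no client protocol provider could be loaded for the configured framework. With a runnable fat JAR, one common cause is that the JAR bundles its own (stale or incomplete) copies of the Hadoop classes, which then shadow the cluster's JARs. As a sketch of how to rule that out (the path ~/Documents/job.jar is from the question):

```shell
# List the fat JAR's contents and look for bundled Hadoop classes.
# If org/apache/hadoop/... entries show up, the runnable JAR packed
# Hadoop itself, and those copies can shadow the cluster's JARs.
jar tf ~/Documents/job.jar | grep -E '^org/apache/hadoop/' | head
```

If that turns up matches, rebuilding the JAR without the Hadoop libraries (keeping only the genuine third-party dependencies) is worth trying.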

Hadoop classpath

    hduser@livingstream:/usr/local/hadoop$ hadoop classpath
    /usr/local/hadoop-2.4.0/etc/hadoop:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/*:/usr/local/hadoop-2.4.0/share/hadoop/common/*:/usr/local/hadoop-2.4.0/share/hadoop/hdfs:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/*:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/*:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/*:/usr/local/hadoop-2.4.0/share/hadoop/yarn/*:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/*:/usr/local/hadoop/contrib/capacity-scheduler/*.jar

My configuration files
mapred-site.xml

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>

core-site.xml

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:54310</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/data</value>
      </property>
    </configuration>
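As a side note, fs.default.name is deprecated in Hadoop 2.x in favor of fs.defaultFS (the old key still works, but logs a deprecation warning). An equivalent modern form of that property would be:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:54310</value>
</property>
```

This is unrelated to the error itself, but worth cleaning up.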

yarn-site.xml

    <configuration>
      <!-- Site specific YARN configuration properties -->
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
      <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>localhost:8025</value>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>localhost:8030</value>
      </property>
      <property>
        <name>yarn.resourcemanager.address</name>
        <value>localhost:8050</value>
      </property>
    </configuration>
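With mapreduce.framework.name set to yarn, job submission needs a running ResourceManager at the address configured above (localhost:8050 here). A quick sanity check, assuming a pseudo-distributed setup like this one, is to confirm the YARN daemons are actually up:

```shell
# jps lists running JVMs; the client submits to the ResourceManager,
# and a NodeManager is needed to actually run the job's containers.
# No output here means the daemons are not running (start-yarn.sh).
jps | grep -E 'ResourceManager|NodeManager'
```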

I'm not quite sure what is going on here, whether it's the JAR or my configuration files. If anyone has any ideas, anything is appreciated! :)


3vpjnl9f1#

The error message clearly points it out:

    Caused by: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

You have to configure mapred-site.xml and core-site.xml, along with several other configuration files.
For step-by-step instructions, you can refer to this link: Hadoop v2 installation.
Hope this helps.
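If the configuration files check out, another way to sidestep fat-JAR classpath problems entirely is to build a thin JAR and ship the third-party dependencies with -libjars (this requires the driver to parse arguments via ToolRunner/GenericOptionsParser). A sketch, where job-thin.jar and the lib/ paths are hypothetical names for illustration:

```shell
# Thin JAR + -libjars: the cluster's own Hadoop JARs stay authoritative,
# and only the third-party dependencies are distributed with the job.
# imgProc.MasterClass is the driver class from the question's stack trace.
hadoop jar job-thin.jar imgProc.MasterClass \
  -libjars lib/thirdparty1.jar,lib/thirdparty2.jar \
  /input /output
```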
