Hadoop install: SecondaryNameNode, NodeManagers, and ResourceManager fail to start

mrfwxfqh · published 2021-07-13 in Hadoop

I have installed a Hadoop 3.1.0 cluster on 4 Linux machines: hadoop1 (the master), hadoop2, hadoop3, and hadoop4.
I ran start-dfs.sh and start-yarn.sh, but jps shows only the NameNode and DataNodes running; the SecondaryNameNode, the NodeManagers, and the ResourceManager failed to start. I tried a few fixes, and this is what the logs show. How do I configure and start the SecondaryNameNode, NodeManagers, and ResourceManager?

SecondaryNameNode log:

    java.net.BindException: Port in use: hadoop1:9000
    ...
    Caused by: java.net.BindException: Address already in use
    ...
NodeManager and ResourceManager log:

    2021-02-21 03:29:03,463 WARN org.eclipse.jetty.webapp.WebAppContext: Failed startup of context o.e.j.w.WebAppContext@51d719bc{/,file:///tmp/jetty-0.0.0.0-8042-node-_-any-8548809575065892553.dir/webapp/,UNAVAILABLE}{/node}
    com.google.inject.ProvisionException: Unable to provision, see the following errors:
    1) Error injecting constructor, java.lang.NoClassDefFoundError: javax/activation/DataSource
    at org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver.<init>(JAXBContextResolver.java:52)
    at org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer$NMWebApp.setup(WebServer.java:153)
    while locating org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver
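The NoClassDefFoundError: javax/activation/DataSource is a signature of running Hadoop 3.1.x on a modern JDK: the java.activation module was deprecated in Java 9 and removed in Java 11 (JEP 320), so on a recent JDK the class simply does not exist. Hadoop 3.1.x targets Java 8. A quick, hypothetical check of which JVM is on the PATH (just an illustration, not Hadoop code):

```python
import shutil
import subprocess

# Hadoop 3.1.x targets Java 8, so a "1.8.0_..." banner is what we want.
# Note: `java -version` writes its banner to stderr, not stdout.
java = shutil.which("java")
if java is None:
    print("no java executable on PATH")
else:
    banner = subprocess.run([java, "-version"],
                            capture_output=True, text=True).stderr
    lines = banner.splitlines()
    print(lines[0] if lines else banner)
```

If the banner is not a 1.8.0 build, point JAVA_HOME (e.g. in etc/hadoop/hadoop-env.sh) at a JDK 8 install.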

My hdfs-site.xml:

    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>hadoop1:9000</value>
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:/app/hadoop/hadoop-3.1.0/name</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/app/hadoop/hadoop-3.1.0/data</value>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
    <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
    </property>

And yarn-site.xml:

    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
      <name>yarn.resourcemanager.address</name>
      <value>hadoop1:8032</value>
    </property>
    <property>
      <name>yarn.resourcemanager.scheduler.address</name>
      <value>hadoop1:8030</value>
    </property>
    <property>
      <name>yarn.resourcemanager.resource-tracker.address</name>
      <value>hadoop1:8031</value>
    </property>
    <property>
      <name>yarn.resourcemanager.admin.address</name>
      <value>hadoop1:8033</value>
    </property>
    <property>
      <name>yarn.resourcemanager.webapp.address</name>
      <value>hadoop1:8088</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>1024</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>1</value>
    </property>

The workers file:

    hadoop1
    hadoop2
    hadoop3
    hadoop4

And /etc/hosts:

    192.168.0.111 hadoop1
    192.168.0.112 hadoop2
    192.168.0.113 hadoop3
    192.168.0.114 hadoop4
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Answer 1 (by o4tp2gmn):

I had installed JDK 15.0.2, and it caused problems with Hadoop 3.1.0. I then installed JDK 8 and changed JAVA_HOME to point to it. Everything worked!
As for the SecondaryNameNode, I had used hadoop1:9000 for both fs.defaultFS and dfs.namenode.secondary.http-address, which produced the conflict. I changed the secondary address to port 9001, and everything worked!
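In config terms, the port fix described above amounts to moving the SecondaryNameNode off 9000 in hdfs-site.xml; a sketch of just the changed property, using the 9001 the answer mentions (Hadoop 3's shipped default for this address is 0.0.0.0:9868, so any free port works):

```xml
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>hadoop1:9001</value>
</property>
```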
