Error when running the hdfs zkfc command

watbbzwu · posted 2021-06-02 in Hadoop

I am new to Hadoop and HDFS. Here is what I am doing:
I have already started ZooKeeper on the three NameNode machines:

    vagrant@172:~$ zkServer.sh start

And I can check its status:

    vagrant@172:~$ zkServer.sh status

The resulting status:

    JMX enabled by default
    Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
    Mode: follower

When I run the jps command, only Jps is listed; sometimes QuorumPeerMain also appears:

    vagrant@172:~$ jps
    2237 Jps
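
For reference, when the ZooKeeper server is actually running on a node, jps would normally also list a QuorumPeerMain process, roughly like this (the PID below is just a placeholder):

    vagrant@172:~$ jps
    2170 QuorumPeerMain
    2237 Jps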

But when I run the following command:

    vagrant@172:~$ hdfs zkfc -formatZK
    16/01/07 16:10:09 INFO zookeeper.ClientCnxn: Opening socket connection to server 172.16.8.192/172.16.8.192:2181. Will not attempt to authenticate using SASL (unknown error)
    16/01/07 16:10:10 INFO zookeeper.ClientCnxn: Socket connection established to 172.16.8.192/172.16.8.192:2181, initiating session
    16/01/07 16:10:11 INFO zookeeper.ClientCnxn: Session establishment complete on server 172.16.8.192/172.16.8.192:2181, sessionid = 0x2521cd93c970022, negotiated timeout = 6000
    Usage: java zkfc [ -formatZK [-force] [-nonInteractive] ]
    16/01/07 16:10:11 INFO ha.ActiveStandbyElector: Session connected.
    16/01/07 16:10:11 INFO zookeeper.ZooKeeper: Session: 0x2521cd93c970022 closed
    16/01/07 16:10:11 INFO zookeeper.ClientCnxn: EventThread shut down
    16/01/07 16:10:12 FATAL tools.DFSZKFailoverController: Got a fatal error, exiting now
    org.apache.hadoop.HadoopIllegalArgumentException: Bad argument: formatZK
        at org.apache.hadoop.ha.ZKFailoverController.badArg(ZKFailoverController.java:251)
        at org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:214)
        at org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:61)
        at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:172)
        at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:168)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
        at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:168)
        at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)

Any help with this error would be greatly appreciated.
My configuration is as follows:

.bashrc

    ### JAVA CONFIGURATION###
    JAVA_HOME=/usr/lib/jvm/java-8-oracle
    export PATH=$PATH:$JAVA_HOME/bin
    ### HADOOP CONFIGURATION###
    HADOOP_PREFIX=/opt/hadoop-2.7.1/
    export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
    ### ZOOKEPER###
    export PATH=$PATH:/opt/zookeeper-3.4.6/bin

hdfs-site.xml

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>2</value>
      </property>
      <property>
        <name>dfs.name.dir</name>
        <value>file:///hdfs/name</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>file:///hdfs/data</value>
      </property>
      <property>
        <name>dfs.permissions</name>
        <value>false</value>
      </property>
      <property>
        <name>dfs.nameservices</name>
        <value>auto-ha</value>
      </property>
      <property>
        <name>dfs.ha.namenodes.auto-ha</name>
        <value>nn01,nn02</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.auto-ha.nn01</name>
        <value>172.16.8.191:8020</value>
      </property>
      <property>
        <name>dfs.namenode.http-address.auto-ha.nn01</name>
        <value>172.16.8.191:50070</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.auto-ha.nn02</name>
        <value>172.16.8.192:8020</value>
      </property>
      <property>
        <name>dfs.namenode.http-address.auto-ha.nn02</name>
        <value>172.16.8.192:50070</value>
      </property>
      <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://172.16.8.191:8485;172.16.8.192:8485;172.16.8.193:8485/auto-ha</value>
      </property>
      <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/hdfs/journalnode</value>
      </property>
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
      </property>
      <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/vagrant/.ssh/id_rsa</value>
      </property>
      <property>
        <name>dfs.ha.automatic-failover.enabled.auto-ha</name>
        <value>true</value>
      </property>
      <property>
        <name>ha.zookeeper.quorum</name>
        <value>172.16.8.191:2181,172.16.8.192:2181,172.16.8.193:2181</value>
      </property>
    </configuration>

core-site.xml

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://auto-ha</value>
      </property>
    </configuration>
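
Side note: fs.default.name is the deprecated name of this property in Hadoop 2.x, where the current key is fs.defaultFS. An equivalent core-site.xml would look roughly like this (same auto-ha nameservice, only the property name differs):

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://auto-ha</value>
      </property>
    </configuration>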

zoo.cfg

    tickTime=2000
    dataDir=/opt/ZooData
    clientPort=2181
    initLimit=5
    syncLimit=2
    server.1=172.16.8.191:2888:3888
    server.2=172.16.8.192:2888:3888
    server.3=172.16.8.193:2888:3888
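
For completeness: with server.N entries like these, each ZooKeeper node is also expected to have a myid file inside dataDir containing its own number N. For example, on 172.16.8.191 (server.1) that would be roughly:

    vagrant@172:~$ echo 1 > /opt/ZooData/myid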

Answer 1 (by 0dxa2lsx):

In the hdfs-site.xml file:

  • I replaced all of the IP addresses with machine hostnames. Example: 172.16.8.191 --> machine-name-1 (see the sketch below).
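
A sketch of what that change could look like for two of the affected properties (machine-name-1 and machine-name-2 are placeholder hostnames standing in for 172.16.8.191 and 172.16.8.192):

    <property>
      <name>dfs.namenode.rpc-address.auto-ha.nn01</name>
      <value>machine-name-1:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.auto-ha.nn02</name>
      <value>machine-name-2:8020</value>
    </property>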

Then, in the /etc/hosts file:

  • I added all of the IPs with their respective hostnames (example below).
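
For example, the /etc/hosts entries could look roughly like this, using the same placeholder hostnames:

    172.16.8.191  machine-name-1
    172.16.8.192  machine-name-2
    172.16.8.193  machine-name-3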

Now it works fine.
