Call from kv.local/172.20.12.168 to localhost:8020 failed: connection exception when using TeraGen

tzxcd3kk  posted on 2021-06-01 in Hadoop

I am working with Hadoop TeraGen to benchmark Hadoop MapReduce with TeraSort. But when I execute the following command,
hadoop jar /users/**/documents/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar teragen -Dmapreduce.job.maps=100 1t random-data
I get the following exception:

  17/06/01 15:09:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  17/06/01 15:09:22 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
  17/06/01 15:09:23 INFO terasort.TeraSort: Generating -727379968 using 100
  17/06/01 15:09:23 INFO mapreduce.JobSubmitter: number of splits:100
  17/06/01 15:09:23 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1496303775726_0003
  17/06/01 15:09:23 INFO impl.YarnClientImpl: Submitted application application_1496303775726_0003
  17/06/01 15:09:23 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1496303775726_0003/
  17/06/01 15:09:23 INFO mapreduce.Job: Running job: job_1496303775726_0003
  17/06/01 15:09:27 INFO mapreduce.Job: Job job_1496303775726_0003 running in uber mode : false
  17/06/01 15:09:27 INFO mapreduce.Job: map 0% reduce 0%
  17/06/01 15:09:27 INFO mapreduce.Job: Job job_1496303775726_0003 failed with state FAILED due to: Application application_1496303775726_0003 failed 2 times due to AM Container for appattempt_1496303775726_0003_000002 exited with exitCode: -1000
  For more detailed output, check application tracking page:http://localhost:8088/proxy/application_1496303775726_0003/Then, click on links to logs of each attempt.
  Diagnostics: Call From KV.local/172.20.12.168 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
  java.net.ConnectException: Call From KV.local/172.20.12.168 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
      at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
      at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
      at org.apache.hadoop.ipc.Client.call(Client.java:1473)
      at org.apache.hadoop.ipc.Client.call(Client.java:1400)
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
      at com.sun.proxy.$Proxy34.getFileInfo(Unknown Source)
      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
      at com.sun.proxy.$Proxy35.getFileInfo(Unknown Source)
      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1977)
      at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
      at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
      at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
      at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
      at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
      at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
      at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:422)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
      at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
      at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)
  Caused by: java.net.ConnectException: Connection refused
      at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
      at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
      at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:608)
      at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:706)
      at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:369)
      at org.apache.hadoop.ipc.Client.getConnection(Client.java:1522)
      at org.apache.hadoop.ipc.Client.call(Client.java:1439)
      ... 31 more

As the error indicates, it cannot connect to localhost:8020, yet when I check the NameNode web UI it shows the NameNode is active. Please see the screenshot below:

I have found many posts related to this, but none of them helped. I also checked the hosts file, which contains the following lines:

127.0.0.1      localhost
172.20.12.168  localhost
Can anyone help me solve this problem?


qij5mzcb1#

The following procedure helped me resolve the problem:
1. Stop all services.
2. Delete the NameNode and DataNode directories specified in hdfs-site.xml.
3. Create new NameNode and DataNode directories and update hdfs-site.xml accordingly.
4. In core-site.xml, make the following changes or add the following properties:

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://172.20.12.168/</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://172.20.12.168:8020</value>
  </property>
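Note that fs.default.name is simply the deprecated alias of fs.defaultFS, so both properties above point the client at the same setting; the key change is replacing localhost with the machine's real IP so every process resolves the NameNode consistently. Also, since step 2 deletes the NameNode directory, HDFS is left with no metadata, so the NameNode normally has to be re-formatted before it will start. A minimal sketch of those two checks (the -format step is implied by steps 2-3 rather than spelled out above, and it erases all existing HDFS data):

  hdfs namenode -format                 # re-initialize the empty NameNode directory (destroys existing HDFS metadata)
  hdfs getconf -confKey fs.defaultFS    # should now print hdfs://172.20.12.168/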

Make the following change in the hadoop-2.6.4/etc/hadoop/hadoop-env.sh file:

  export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home
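The JDK path above is macOS-specific (the KV.local hostname in the error also suggests a Mac). If the exact version directory differs on your machine, macOS ships a helper that resolves it; a sketch, assuming a 1.8 JDK is installed:

  export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)   # resolve the installed 1.8 JDK path on macOS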

Restart DFS, YARN, and the MapReduce job history server as follows:

  start-dfs.sh
  start-yarn.sh
  mr-jobhistory-daemon.sh start historyserver
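Before re-running TeraGen, it can help to confirm that the daemons actually came up and that the NameNode RPC port is reachable. These are standard Hadoop/JDK commands, not part of the original answer:

  jps                     # should list NameNode, DataNode, ResourceManager, NodeManager, JobHistoryServer
  hdfs dfsadmin -report   # succeeds only if the client can reach the NameNode at 172.20.12.168:8020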
