Hadoop wordcount example fails due to the AM container

r8uurelv  posted on 2021-05-29  in Hadoop

I have been trying to run the Hadoop wordcount example for a while, but I am running into some problems. I have Hadoop 2.7.1 and am running it on Windows. Here are the error details:

Command:

  yarn jar C:\hadoop-2.7.1\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.1.jar wordcount input output

Output:

  INFO input.FileInputFormat: Total input paths to process : 1
  INFO mapreduce.JobSubmitter: number of splits:1
  INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1490853163147_0009
  INFO impl.YarnClientImpl: Submitted application application_1490853163147_0009
  INFO mapreduce.Job: The url to track the job: http://**********/proxy/application_1490853163147_0009/
  INFO mapreduce.Job: Running job: job_1490853163147_0009
  INFO mapreduce.Job: Job job_1490853163147_0009 running in uber mode : false
  INFO mapreduce.Job:  map 0% reduce 0%
  INFO mapreduce.Job: Job job_1490853163147_0009 failed with state FAILED due to: Application application_1490853163147_0009 failed 2 times due to AM Container for appattempt_1490853163147_0009_000002 exited with exitCode: 1639
  For more detailed output, check application tracking page: http://********:****/cluster/app/application_1490853163147_0009 Then, click on links to logs of each attempt.
  Diagnostics: Exception from container-launch.
  Container id: container_1490853163147_0009_02_000001
  Exit code: 1639
  Exception message: Incorrect command line arguments.
  Stack trace: ExitCodeException exitCode=1639: Incorrect command line arguments.
      at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
      at org.apache.hadoop.util.Shell.run(Shell.java:456)
      at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
      at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
      at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
      at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)
  Shell output: Usage: task create [TASKNAME] [COMMAND_LINE] |
         task isAlive [TASKNAME] |
         task kill [TASKNAME]
         task processList [TASKNAME]
      Creates a new task jobobject with taskname
      Checks if task jobobject is alive
      Kills task jobobject
      Prints to stdout a list of processes in the task
      along with their resource usage. One process per line
      and comma separated info per process
      ProcessId,VirtualMemoryCommitted(bytes),
      WorkingSetSize(bytes),CpuTime(Millisec,Kernel+User)
  Container exited with a non-zero exit code 1639
  Failing this attempt. Failing the application.
  INFO mapreduce.Job: Counters: 0

yarn-site.xml:

  <configuration>
    <property>
      <name>yarn.application.classpath</name>
      <value>
        C:\hadoop-2.7.1\etc\hadoop,
        C:\hadoop-2.7.1\share\hadoop\common\*,
        C:\hadoop-2.7.1\share\hadoop\common\lib\*,
        C:\hadoop-2.7.1\share\hadoop\hdfs\*,
        C:\hadoop-2.7.1\share\hadoop\hdfs\lib\*,
        C:\hadoop-2.7.1\share\hadoop\mapreduce\*,
        C:\hadoop-2.7.1\share\hadoop\mapreduce\lib\*,
        C:\hadoop-2.7.1\share\hadoop\yarn\*,
        C:\hadoop-2.7.1\share\hadoop\yarn\lib\*
      </value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
      <value>98.5</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>2200</value>
      <description>Amount of physical memory, in MB, that can be allocated for containers.</description>
    </property>
    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>500</value>
    </property>
    <property>
      <name>yarn.log-aggregation-enable</name>
      <value>true</value>
    </property>
    <property>
      <description>Where to aggregate logs to.</description>
      <name>yarn.nodemanager.remote-app-log-dir</name>
      <value>/tmp/logs</value>
    </property>
    <property>
      <name>yarn.log-aggregation.retain-seconds</name>
      <value>259200</value>
    </property>
    <property>
      <name>yarn.log-aggregation.retain-check-interval-seconds</name>
      <value>3600</value>
    </property>
  </configuration>

mapred.xml:

  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
  -->
  <!-- Put site-specific property overrides in this file. -->
  <configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
  </configuration>
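(For reference, a fuller mapred-site-style file on Windows often also pins the MapReduce classpath explicitly. The property name below is standard Hadoop; the `%HADOOP_HOME%` layout is an assumption about this install, not something confirmed by the question:)

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- Assumption: HADOOP_HOME points at C:\hadoop-2.7.1; adjust to your layout. -->
  <property>
    <name>mapreduce.application.classpath</name>
    <value>%HADOOP_HOME%\share\hadoop\mapreduce\*,%HADOOP_HOME%\share\hadoop\mapreduce\lib\*</value>
  </property>
</configuration>
```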

Any idea what is going wrong?


xghobddn1#

I was facing the same problem. I was following a guide on how to install Hadoop 2.6.0 (http://www.ics.uci.edu/~shantas/install_hadoop-2.6.0_on_windows10.pdf) while actually installing Hadoop 2.8.0. As soon as I was done, I ran

  hadoop jar D:\hadoop-2.8.0\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.8.0.jar wordcount /foo/bar/license.txt /out1

and got (from the YARN NodeManager's logs):

  17/06/19 13:15:30 INFO monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1497902417767_0004_01_000001
  17/06/19 13:15:30 INFO nodemanager.DefaultContainerExecutor: launchContainer: [D:\hadoop-2.8.0\bin\winutils.exe, task, create, -m, -1, -c, -1, container_1497902417767_0004_01_000001, cmd /c D:/hadoop/temp/nm-local-dir/usercache/*****/appcache/application_1497902417767_0004/container_1497902417767_0004_01_000001/default_container_executor.cmd]
  17/06/19 13:15:30 WARN nodemanager.DefaultContainerExecutor: Exit code from container container_1497902417767_0004_01_000001 is : 1639
  17/06/19 13:15:30 WARN nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1497902417767_0004_01_000001 and exit code: 1639
  ExitCodeException exitCode=1639: Incorrect command line arguments.
  TaskExit: error (1639): Invalid command line argument. Consult the Windows Installer SDK for detailed command line help.

Another symptom was (from the NodeManager's logs):

  17/06/19 13:25:49 WARN util.SysInfoWindows: Expected split length of sysInfo to be 11. Got 7

The solution was to get winutils binaries compatible with Hadoop 2.8.0: https://github.com/steveloughran/winutils/tree/master/hadoop-2.8.0-rc3/bin

As soon as I got a correct winutils.exe, my problem went away.
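The swap described above can be sketched as a few Windows commands. This is a hedged sketch, not a vetted script: the repository path comes from the answer, but the backup step, the `C:\tmp` staging directory, and the assumption that `HADOOP_HOME` is set to `D:\hadoop-2.8.0` are mine:

```cmd
:: Fetch the version-matched winutils builds (assumes git is on PATH).
git clone https://github.com/steveloughran/winutils.git C:\tmp\winutils

:: Back up the existing binaries before overwriting them.
xcopy /E /I %HADOOP_HOME%\bin %HADOOP_HOME%\bin.bak

:: Copy the Hadoop-2.8.0-matched binaries over HADOOP_HOME\bin.
copy /Y C:\tmp\winutils\hadoop-2.8.0-rc3\bin\* %HADOOP_HOME%\bin\

:: Sanity check: a compatible winutils should print the "task" usage text
:: rather than fail with error 1639.
%HADOOP_HOME%\bin\winutils.exe task
```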


lo8azlld2#

exitCode: 1639 suggests your application is running Hadoop on Windows.
https://github.com/octopusdeploy/issues/issues/1346
