HIPI average pixel count program fails with an error on the Cloudera QuickStart VM

Asked by ryoqjall on 2021-05-29, tagged Hadoop

I am new to HIPI/Hadoop, so I chose the Cloudera QuickStart VM (5.4.2) and followed the Getting Started tutorial. After running the tooling, I can see the following images in the HIB file:

    [cloudera@quickstart tools]$ ./hibInfo.sh examples/sampleNew.hib --show-meta
    Input HIB: examples/sampleNew.hib
    Display meta data: true
    Display EXIF data: false
    IMAGE INDEX: 0
    1244 x 829
    format: 1
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/01.jpg}
    IMAGE INDEX: 1
    1106 x 829
    format: 1
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/02.jpg}
    IMAGE INDEX: 2
    933 x 700
    format: 1
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/03.jpg}
    IMAGE INDEX: 3
    1106 x 829
    format: 1
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/04.jpg}
    IMAGE INDEX: 4
    1244 x 829
    format: 1
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/05.jpg}
    IMAGE INDEX: 5
    1555 x 1166
    format: 1
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/06.jpg}
    IMAGE INDEX: 6
    1244 x 829
    format: 1
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/07.jpg}
    IMAGE INDEX: 7
    1244 x 829
    format: 1
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/08.jpg}
    IMAGE INDEX: 8
    576 x 383
    format: 1
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/09.jpg}
    IMAGE INDEX: 9
    576 x 383
    format: 1
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/10.jpg}
    IMAGE INDEX: 10
    737 x 475
    format: 1
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/11.jpg}
    IMAGE INDEX: 11
    614 x 460
    format: 2
    meta: {source=/home/cloudera/Downloads/hipi-release/web/examples/testimages/12.png}
    Found [12] images.
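
(For context, the HIB above was presumably produced with HIPI's import tool from the Getting Started guide; the invocation below is a hypothetical reconstruction based on the meta source paths shown above, not a command taken from the original post.)

    # Hypothetical import step: local image directory -> HIB on HDFS
    [cloudera@quickstart tools]$ ./hibImport.sh /home/cloudera/Downloads/hipi-release/web/examples/testimages examples/sampleNew.hib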

But when I try to run the job with the jar file, I get the following error:

    [cloudera@quickstart helloWorld]$ hadoop jar build/libs/helloWorld.jar examples/sampleNew.hib sampleimages_average
    15/10/27 03:22:30 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
    15/10/27 03:22:37 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
    15/10/27 03:22:40 INFO input.FileInputFormat: Total input paths to process : 1
    Spawned 1 map tasks
    15/10/27 03:22:48 INFO mapreduce.JobSubmitter: number of splits:1
    15/10/27 03:22:50 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1445938202522_0003
    15/10/27 03:23:00 INFO impl.YarnClientImpl: Submitted application application_1445938202522_0003
    15/10/27 03:23:00 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1445938202522_0003/
    15/10/27 03:23:00 INFO mapreduce.Job: Running job: job_1445938202522_0003
    15/10/27 03:23:43 INFO mapreduce.Job: Job job_1445938202522_0003 running in uber mode : false
    15/10/27 03:23:43 INFO mapreduce.Job: map 0% reduce 0%
    15/10/27 03:23:43 INFO mapreduce.Job: Job job_1445938202522_0003 failed with state FAILED due to: Application application_1445938202522_0003 failed 2 times due to AM Container for appattempt_1445938202522_0003_000002 exited with exitCode: 1
    For more detailed output, check application tracking page:http://quickstart.cloudera:8088/proxy/application_1445938202522_0003/Then, click on links to logs of each attempt.
    Diagnostics: Exception from container-launch.
    Container id: container_1445938202522_0003_02_000001
    Exit code: 1
    Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
    Container exited with a non-zero exit code 1

The log files under the ResourceManager show:

    Failed while trying to construct the redirect url to the log server. Log Server url may not be configured
    java.lang.Exception: Unknown container. Container either has not started or has already completed or doesn't belong to this node at all.
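
As an aside, when the log-server redirect is not configured, the AM container's logs can often still be pulled from the command line; a sketch, assuming YARN log aggregation is enabled on the VM (it may not be by default):

    # Pull the aggregated logs for the failed application (id taken from the job output above).
    # Only works if yarn.log-aggregation-enable is true; otherwise look in the
    # NodeManager's local log directory (e.g. under /var/log/hadoop-yarn/ on CDH -- an assumption).
    yarn logs -applicationId application_1445938202522_0003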

Please help me; I cannot figure out what the error is.
I am using Java 1.8 and Hadoop 2.6.0-cdh5.4.2, with
HADOOP_CLASSPATH: /usr/lib/hadoop/lib, HADOOP_YARN_HOME: /usr/lib/hadoop-yarn, HADOOP_MAPRED_HOME: /usr/lib/hadoop-mapreduce, HADOOP_CONF_DIR: /etc/hadoop/conf.
I have already tested with the WordCount example, and that one ran successfully.


ipakzgxi (answer 1):

I fixed this by resolving the Java version mismatch between HIPI and Hadoop: I rebuilt HIPI with the same Java version that Hadoop runs on, and the problem went away.
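
For anyone hitting the same mismatch, a rough sketch of how to check and align the two Java versions on the QuickStart VM; the JDK path and the build command below are assumptions about a typical CDH 5.4 / HIPI setup, not details taken from the original answer:

    # JDK that the YARN containers will run on (the VM's default java):
    java -version
    # JDK used to compile HIPI and the helloWorld jar:
    javac -version
    # If they differ, point the build at the cluster's JDK and rebuild before resubmitting.
    # The path below is an assumed CDH default; adjust it to whatever java -version reports.
    export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
    cd ~/Downloads/hipi-release
    gradle clean jar    # or ./gradlew jar, depending on the HIPI release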
