hadoop 3.2: No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster)

aoyhnmkz · posted 2021-05-29 in Hadoop

I have a local Hadoop 3.2 installation: 1 master + 1 worker, both running on my laptop. It is an experimental setup for quick tests before submitting jobs to the real cluster.
Everything looks healthy:

  $ jps
  22326 NodeManager
  21641 DataNode
  25530 Jps
  22042 ResourceManager
  21803 SecondaryNameNode
  21517 NameNode
  $ hdfs fsck /
  Connecting to namenode via http://master:9870/fsck?ugi=david&path=%2F
  FSCK started by david (auth:SIMPLE) from /127.0.0.1 for path / at Wed Sep 04 13:54:59 CEST 2019
  Status: HEALTHY
  Number of data-nodes: 1
  Number of racks: 1
  Total dirs: 1
  Total symlinks: 0
  Replicated Blocks:
  Total size: 0 B
  Total files: 0
  Total blocks (validated): 0
  Minimally replicated blocks: 0
  Over-replicated blocks: 0
  Under-replicated blocks: 0
  Mis-replicated blocks: 0
  Default replication factor: 1
  Average block replication: 0.0
  Missing blocks: 0
  Corrupt blocks: 0
  Missing replicas: 0
  Erasure Coded Block Groups:
  Total size: 0 B
  Total files: 0
  Total block groups (validated): 0
  Minimally erasure-coded block groups: 0
  Over-erasure-coded block groups: 0
  Under-erasure-coded block groups: 0
  Unsatisfactory placement block groups: 0
  Average block group size: 0.0
  Missing block groups: 0
  Corrupt block groups: 0
  Missing internal blocks: 0
  FSCK ended at Wed Sep 04 13:54:59 CEST 2019 in 0 milliseconds
  The filesystem under path '/' is HEALTHY

Running the bundled pi example fails with the following error:

  $ yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar pi 16 1000
  Number of Maps = 16
  Samples per Map = 1000
  Wrote input for Map #0
  Wrote input for Map #1
  Wrote input for Map #2
  Wrote input for Map #3
  Wrote input for Map #4
  Wrote input for Map #5
  Wrote input for Map #6
  Wrote input for Map #7
  Wrote input for Map #8
  Wrote input for Map #9
  Wrote input for Map #10
  Wrote input for Map #11
  Wrote input for Map #12
  Wrote input for Map #13
  Wrote input for Map #14
  Wrote input for Map #15
  Starting Job
  2019-09-04 13:55:47,665 INFO client.RMProxy: Connecting to ResourceManager at master/0.0.0.0:8032
  2019-09-04 13:55:47,887 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/david/.staging/job_1567598091808_0001
  2019-09-04 13:55:48,020 INFO input.FileInputFormat: Total input files to process : 16
  2019-09-04 13:55:48,450 INFO mapreduce.JobSubmitter: number of splits:16
  2019-09-04 13:55:48,508 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
  2019-09-04 13:55:49,000 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1567598091808_0001
  2019-09-04 13:55:49,003 INFO mapreduce.JobSubmitter: Executing with tokens: []
  2019-09-04 13:55:49,164 INFO conf.Configuration: resource-types.xml not found
  2019-09-04 13:55:49,164 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
  2019-09-04 13:55:49,375 INFO impl.YarnClientImpl: Submitted application application_1567598091808_0001
  2019-09-04 13:55:49,411 INFO mapreduce.Job: The url to track the job: http://cyclimse:8088/proxy/application_1567598091808_0001/
  2019-09-04 13:55:49,412 INFO mapreduce.Job: Running job: job_1567598091808_0001
  2019-09-04 13:55:55,477 INFO mapreduce.Job: Job job_1567598091808_0001 running in uber mode : false
  2019-09-04 13:55:55,480 INFO mapreduce.Job: map 0% reduce 0%
  2019-09-04 13:55:55,509 INFO mapreduce.Job: Job job_1567598091808_0001 failed with state FAILED due to: Application application_1567598091808_0001 failed 2 times due to AM Container for appattempt_1567598091808_0001_000002 exited with exitCode: 1
  Failing this attempt.Diagnostics: [2019-09-04 13:55:54.458]Exception from container-launch.
  Container id: container_1567598091808_0001_02_000001
  Exit code: 1
  [2019-09-04 13:55:54.464]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
  Last 4096 bytes of prelaunch.err :
  Last 4096 bytes of stderr :
  log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
  log4j:WARN Please initialize the log4j system properly.
  log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
  [2019-09-04 13:55:54.465]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
  Last 4096 bytes of prelaunch.err :
  Last 4096 bytes of stderr :
  log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
  log4j:WARN Please initialize the log4j system properly.
  log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
  For more detailed output, check the application tracking page: http://cyclimse:8088/cluster/app/application_1567598091808_0001 Then click on links to logs of each attempt.
  . Failing the application.
  2019-09-04 13:55:55,546 INFO mapreduce.Job: Counters: 0
  Job job_1567598091808_0001 failed!
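From what I have read, an AM container exiting with code 1 on Hadoop 3.x is often a classpath problem rather than a logging problem (the log4j warnings would then just be a symptom of the AM not finding its configuration on the classpath). The single-node setup guide suggests declaring HADOOP_MAPRED_HOME for MapReduce containers in mapred-site.xml; I am not certain this is my issue, but it is the closest-looking setting I have found:

```xml
<!-- mapred-site.xml: tell AM/map/reduce containers where the MapReduce
     framework lives, so they can build a complete classpath.
     Assumes HADOOP_HOME is set in the cluster environment. -->
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
```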

Something seems wrong with the log4j configuration: No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster). But it is using the default configuration ($HADOOP_CONF_DIR/log4j.properties).
After the run, the HDFS state looks like this:
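For reference, the relevant part of my (stock) $HADOOP_CONF_DIR/log4j.properties is along these lines (trimmed; the root logger does have a console appender, which is why the warning surprises me):

```properties
# Trimmed excerpt of the default Hadoop log4j.properties
hadoop.root.logger=INFO,console
log4j.rootLogger=${hadoop.root.logger}

# Console appender writes to stderr
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
```

So the appender is defined; the warning suggests the AM container never sees this file on its classpath.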

  $ hdfs fsck /
  Connecting to namenode via http://master:9870/fsck?ugi=david&path=%2F
  FSCK started by david (auth:SIMPLE) from /127.0.0.1 for path / at Wed Sep 04 14:01:43 CEST 2019
  /tmp/hadoop-yarn/staging/david/.staging/job_1567598091808_0001/job.jar: Under replicated BP-24234081-0.0.0.0-1567598050928:blk_1073741841_1017. Target Replicas is 10 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
  /tmp/hadoop-yarn/staging/david/.staging/job_1567598091808_0001/job.split: Under replicated BP-24234081-0.0.0.0-1567598050928:blk_1073741842_1018. Target Replicas is 10 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
  Status: HEALTHY
  Number of data-nodes: 1
  Number of racks: 1
  Total dirs: 11
  Total symlinks: 0
  Replicated Blocks:
  Total size: 510411 B
  Total files: 20
  Total blocks (validated): 20 (avg. block size 25520 B)
  Minimally replicated blocks: 20 (100.0 %)
  Over-replicated blocks: 0 (0.0 %)
  Under-replicated blocks: 2 (10.0 %)
  Mis-replicated blocks: 0 (0.0 %)
  Default replication factor: 1
  Average block replication: 1.0
  Missing blocks: 0
  Corrupt blocks: 0
  Missing replicas: 18 (47.36842 %)
  Erasure Coded Block Groups:
  Total size: 0 B
  Total files: 0
  Total block groups (validated): 0
  Minimally erasure-coded block groups: 0
  Over-erasure-coded block groups: 0
  Under-erasure-coded block groups: 0
  Unsatisfactory placement block groups: 0
  Average block group size: 0.0
  Missing block groups: 0
  Corrupt block groups: 0
  Missing internal blocks: 0
  FSCK ended at Wed Sep 04 14:01:43 CEST 2019 in 5 milliseconds
  The filesystem under path '/' is HEALTHY
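I believe the under-replicated job.jar/job.split blocks are a side effect, not the cause: job submission files are written with a replication factor of 10 by default (mapreduce.client.submit.file.replication), which a single-DataNode setup can never satisfy. If I understand correctly, this can be silenced on a pseudo-distributed setup with:

```xml
<!-- mapred-site.xml: staging files (job.jar, job.split) default to
     replication 10; with only one DataNode, lower it to 1. -->
<property>
  <name>mapreduce.client.submit.file.replication</name>
  <value>1</value>
</property>
```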

Since I could not find any solution online, here I am :).
