Hadoop YARN performance: the wordcount example runs extremely slowly on the cluster


I recently set up a Hadoop cluster for testing. The cluster has two nodes for running tasks and is based on YARN.
I know Hadoop is not really meant for an example like this and only shows its strength at very large data scales, but this is still far too slow. I mean extremely slow. My input file is a document of about 500,000 words, and the number of reduce tasks is 2.
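I have not posted the source of com.hadoop.WordCountJob here, so the following is only a minimal sketch of what such a two-reducer wordcount driver typically looks like (the class and method bodies below are assumed, not my actual code):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountJob {

    // Emits (word, 1) for every token in a line of input.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Sums the counts for each distinct word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountJob.class);
        job.setMapperClass(TokenizerMapper.class);
        // No combiner is set, which matches "Combine input records=0" in the counters below.
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(2);  // the two reduce tasks seen in the log
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}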
The log is as follows:

hadoop jar /home/hadoop/hadoopTest.jar  com.hadoop.WordCountJob /wordcountest /wordcountresult

Job started: Mon Dec 23 12:38:13 CST 2013
13/12/23 12:38:13 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
13/12/23 12:38:14 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
13/12/23 12:38:14 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/12/23 12:38:27 INFO input.FileInputFormat: Total input paths to process : 1
13/12/23 12:38:27 INFO mapreduce.JobSubmitter: number of splits:1
13/12/23 12:38:27 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/12/23 12:38:27 WARN conf.Configuration: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
13/12/23 12:38:27 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/12/23 12:38:27 WARN conf.Configuration: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
13/12/23 12:38:27 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/12/23 12:38:27 WARN conf.Configuration: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
13/12/23 12:38:27 WARN conf.Configuration: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
13/12/23 12:38:27 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/12/23 12:38:27 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/12/23 12:38:27 WARN conf.Configuration: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
13/12/23 12:38:27 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/12/23 12:38:27 WARN conf.Configuration: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
13/12/23 12:38:27 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/12/23 12:38:29 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1383617275312_0021
13/12/23 12:38:30 INFO client.YarnClientImpl: Submitted application application_1383617275312_0021 to ResourceManager at Hadoop1/111.11.11.11:8032
13/12/23 12:38:30 INFO mapreduce.Job: The url to track the job: http://kmHadoop1:8088/proxy/application_1383617275312_0021/
13/12/23 12:38:30 INFO mapreduce.Job: Running job: job_1383617275312_0021
13/12/23 12:43:22 INFO mapreduce.Job: Job job_1383617275312_0021 running in uber mode : false
13/12/23 12:43:22 INFO mapreduce.Job:  map 0% reduce 0%
13/12/23 13:03:37 INFO mapreduce.Job:  map 67% reduce 0%
13/12/23 13:03:43 INFO mapreduce.Job:  map 100% reduce 0%
13/12/23 13:07:04 INFO mapreduce.Job:  map 100% reduce 37%
13/12/23 13:07:07 INFO mapreduce.Job:  map 100% reduce 51%
13/12/23 13:07:10 INFO mapreduce.Job:  map 100% reduce 67%
13/12/23 13:07:51 INFO mapreduce.Job:  map 100% reduce 69%
13/12/23 13:07:52 INFO mapreduce.Job:  map 100% reduce 70%
13/12/23 13:07:54 INFO mapreduce.Job:  map 100% reduce 85%
13/12/23 13:07:54 INFO mapreduce.Job:  map 100% reduce 100%
13/12/23 13:07:54 INFO mapreduce.Job: Job job_1383617275312_0021 completed successfully
13/12/23 13:07:55 INFO mapreduce.Job: Counters: 43
        File System Counters
                FILE: Number of bytes read=519233
                FILE: Number of bytes written=1254635
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=2356520
                HDFS: Number of bytes written=427594
                HDFS: Number of read operations=9
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=4
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=2
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=1225928
                Total time spent by all reduces in occupied slots (ms)=495508
        Map-Reduce Framework
                Map input records=8646
                Map output records=420146
                Map output bytes=4187027
                Map output materialized bytes=519225
                Input split bytes=122
                Combine input records=0
                Combine output records=0
                Reduce input groups=35430
                Reduce shuffle bytes=519225
                Reduce input records=420146
                Reduce output records=35430
                Spilled Records=840292
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=263996
                CPU time spent (ms)=222750
                Physical memory (bytes) snapshot=529215488
                Virtual memory (bytes) snapshot=4047876096
                Total committed heap usage (bytes)=479268864
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=2356398
        File Output Format Counters 
                Bytes Written=427594
Job ended: Mon Dec 23 13:07:55 CST 2013
The job took 1782 seconds.

A timestamp appears at the start of every log line.
Every step seems slow: initialization, checking the input path, starting the YARN application, the map and reduce phases, and so on.
The whole job took 1782 seconds. What is going on? Am I doing something wrong?
My Hadoop version is CDH 4.3.0 and the cluster has two nodes. There are also thousands of small files in HDFS; could that be a problem?
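For what it's worth, the file count and total size under a path can be checked with the FileSystem API. The sketch below is only illustrative (it assumes the /wordcountest input path from the command above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPathStats {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Summarize a directory: number of files and total bytes stored under it.
        ContentSummary summary = fs.getContentSummary(new Path("/wordcountest"));
        long files = summary.getFileCount();
        long bytes = summary.getLength();
        System.out.println("files=" + files + ", bytes=" + bytes
                + ", avg bytes/file=" + (files == 0 ? 0 : bytes / files));
        // Many files far smaller than the HDFS block size would confirm a
        // "small files" situation.
    }
}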

Answer 1 (omtl5h9j):

I can see from your output:

Map output bytes=4187027
Map output materialized bytes=519225
...

You are (at least) compressing the intermediate map output data: the materialized map output (about 519 KB) is roughly eight times smaller than the raw map output (about 4.2 MB). You could try rerunning the example with compression turned off; gzip compression is known to put a heavy load on a machine's processors. Before turning it off, you might monitor CPU load to verify that this really is your bottleneck.
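One way to test that, sketched under the assumption that the setting is not forced in your cluster's mapred-site.xml, is to disable it in the driver's Configuration. Note that your log warns the job does not implement Tool, so a -D option on the command line would likely not be picked up:

// Hypothetical change to the driver's main(): turn off intermediate map
// output compression before the Job is created. The property name is the
// Hadoop 2.x one; the deprecated equivalent is mapred.compress.map.output.
Configuration conf = new Configuration();
conf.setBoolean("mapreduce.map.output.compress", false);
Job job = Job.getInstance(conf, "word count");
// ... the rest of the driver stays the same as in the sketch above.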
I have seen jobs take excessively long on 2- or 3-node clusters with gzip compression turned on. That changes as you add nodes: when I scaled the cluster out to 10 nodes and reran the same job, compression actually became very beneficial (about a 40% improvement in overall job time for a 100 GB terasort compared with running it uncompressed).
