I am running a MapReduce job on Amazon EMR that creates 40 output files of roughly 130 MB each. The last 9 reduce tasks fail with a "No space left on device" exception. Is this a cluster misconfiguration issue? The job runs fine with fewer input files, fewer output files, and fewer reducers. Any help would be greatly appreciated. Thanks! The full trace is below:
Error: java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.security.DigestOutputStream.write(DigestOutputStream.java:148)
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.write(MultipartUploadOutputStream.java:135)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:60)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:83)
at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
at org.apache.hadoop.io.compress.CompressorStream.close(CompressorStream.java:105)
at java.io.FilterOutputStream.close(FilterOutputStream.java:160)
at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:111)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:558)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:637)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
EDIT
I made some further attempts, but unfortunately I am still getting errors. Since the replication factor was mentioned in the comments below, I thought my instances might not have enough space, so I tried large instances instead of the medium ones I had been using. But this time I got a different exception:
Error: java.io.IOException: Error closing multipart upload
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.uploadMultiParts(MultipartUploadOutputStream.java:207)
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.close(MultipartUploadOutputStream.java:222)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:105)
at org.apache.hadoop.io.compress.CompressorStream.close(CompressorStream.java:106)
at java.io.FilterOutputStream.close(FilterOutputStream.java:160)
at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:111)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:558)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:637)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.util.concurrent.ExecutionException: com.amazonaws.services.s3.model.AmazonS3Exception: The Content-MD5 you specified did not match what we received. (Service: Amazon S3; Status Code: 400; Error Code: BadDigest;
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
The result was that only about 70% of the expected output files were produced, and the remaining reduce tasks failed. I then tried uploading a large file to my S3 bucket, in case the problem was a lack of space there, but that does not seem to be the issue.
I am using the AWS Elastic MapReduce service. Any ideas?
2 Answers
rvpgvaaj1#
This error means there is no space left to store the output (or intermediate output) of your MapReduce job.
Things to check:
Have you deleted unnecessary files from HDFS? Run
hadoop dfs -ls /
to check which files are stored on HDFS (and if you are using the trash, be sure to empty it as well).
Are you compressing the job's output (or intermediate output)? You can set SequenceFileOutputFormat as the output format, or call
setCompressMapOutput(true);
(a sketch of these settings follows below).
What is your replication factor? It is set to 3 by default, but if you are running into space problems you can risk lowering it to 2 or 1 to get the job running.
It may also be that some of your reducers output far more data than the others, so check your code.
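For reference, here is a minimal sketch of what the compression and replication suggestions could look like in a driver class using the new (org.apache.hadoop.mapreduce) API. The class name, the choice of GzipCodec, and the property strings mapreduce.map.output.compress and dfs.replication are my own assumptions for illustration, not taken from your actual job:

// Hypothetical driver sketch; mapper/reducer wiring and paths are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CompressedJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Compress intermediate map output so the shuffle spills less data to local disk
        // (new-API equivalent of the old JobConf setCompressMapOutput(true) call).
        conf.setBoolean("mapreduce.map.output.compress", true);

        // Lower the HDFS replication factor for files written by this job (default is 3),
        // as suggested above; only do this if you can tolerate the reduced redundancy.
        conf.set("dfs.replication", "2");

        Job job = Job.getInstance(conf, "compressed output example");
        job.setJarByClass(CompressedJobDriver.class);
        // job.setMapperClass(...); job.setReducerClass(...); // your map/reduce classes here

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Compress the final reducer output as well (gzip here; any installed codec works).
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Compressing both the intermediate and the final output reduces the amount of data that has to be staged on local disk and in HDFS while the job runs.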
izkcnapc2#
I hit the out-of-space error on AMI 3.2.x but not on AMI 3.1.x. Try switching AMIs and see what happens.