Edit:
Looking through the namenode logs, I noticed that an exception is raised periodically. Is it related?
2013-04-10 19:23:50,613 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping (IPC Server handler 43 on 9000): got exception trying to get groups for user job_201304101854_0005
org.apache.hadoop.util.Shell$ExitCodeException: id: job_201304101854_0005: No such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:78)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:53)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1037)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5218)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5201)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2030)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:850)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:573)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
2013-04-10 19:23:50,614 INFO org.apache.hadoop.security.ShellBasedUnixGroupsMapping (IPC Server handler 43 on 9000): add job_201304101854_0005 to shell userGroupsCache
2013-04-10 19:23:50,614 WARN org.apache.hadoop.security.UserGroupInformation (IPC Server handler 43 on 9000): No groups available for user job_201304101854_0005
2013-04-10 19:23:55,886 WARN org.apache.hadoop.security.UserGroupInformation (IPC Server handler 46 on 9000): No groups available for user job_201304101854_0005
We have built custom binaries for the map and reduce steps and verified that they behave correctly using the usual "cat file | map | sort | reduce > output" pattern. We made sure to compile the binaries statically so they pull in as many of their dependencies as possible, and we also confirmed that they run on Amazon's EMR AMIs by manually uploading them to the master node. In case it is relevant, our language of choice is Haskell, and the compiled result is a plain native executable.
As a minimal example:
bin/hadoop jar contrib/streaming/hadoop-streaming.jar \
-input s3n://path/to/input \
-output s3n://path/to/output \
-mapper "s3n://path/to/Program map" \
-reducer "s3n://path/to/Program reduce"
The job does start, but it gets stuck at map 0% and never progresses past it, and there does not seem to be anything useful in the logs. Every map task is eventually killed after 600 seconds for "failing to report status". While showing 0% completion, each mapper's status looks like this:
s3n://path/to/file.csv.gz:0+38175575
The counters section shows 17.5 KB read from s3n.
If we now modify the job to the following as a test:
bin/hadoop jar contrib/streaming/hadoop-streaming.jar \
-input s3n://path/to/input \
-output s3n://path/to/output \
-mapper s3n://elasticmapreduce/samples/wordcount/wordSplitter.py \
-reducer aggregate
then the map phase completes at 100%, but the reducer throws the following exception:
java.io.IOException: exception in uploadSinglePart
at org.apache.hadoop.fs.s3native.MultipartUploadOutputStream.uploadSinglePart(MultipartUploadOutputStream.java:163)
at org.apache.hadoop.fs.s3native.MultipartUploadOutputStream.close(MultipartUploadOutputStream.java:219)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:96)
at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:109)
at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.close(ReduceTask.java:475)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:539)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:429)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.RuntimeException: exception in putObject
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:83)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at org.apache.hadoop.fs.s3native.$Proxy3.storeFile(Unknown Source)
at org.apache.hadoop.fs.s3native.MultipartUploadOutputStream.uploadSinglePart(MultipartUploadOutputStream.java:160)
... 12 more
Caused by: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 8220819721FFE29E, AWS Error Code: AccessDenied, AWS Error Message: Access Denied, S3 Extended Request ID: TekkBZzgaBlK0e8SkoC7bcBsu1w7Nbpy2U7hPCGp5IPrrsqaPTxUg7QQ09xTXRYC
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:619)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:317)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:170)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:2943)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1123)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:121)
... 20 more
The frustrating part is that, for example, running Hive on the same kind of EMR cluster appears to have no trouble at all creating new external tables mapped to files on S3.
I have tried several ideas and would be very grateful if someone could point us in the right direction to get our setup working.
Thank you, Oya
1 Answer
I imagine this is most likely your problem: the whitespace in -mapper "s3n://path/to/Program map" (and the matching -reducer line) is quite possibly what is tripping you up. I would try building two separate binaries, one for map and one for reduce, that you can invoke directly rather than passing arguments; see the sketch below. At the very least, that should help you narrow down where the problem lies.
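For instance, the split could look roughly like this (the file names and the word-count placeholder logic are made up for illustration, not taken from your setup):

-- ProgramMap.hs (hypothetical name): built as its own executable and
-- invoked with no arguments, e.g.  -mapper s3n://path/to/ProgramMap
module Main where

main :: IO ()
main = interact mapStep            -- plug the real map logic in here
  where
    mapStep = unlines . map (++ "\t1") . words

-- ProgramReduce.hs (hypothetical name): likewise argument-free,
-- invoked as  -reducer s3n://path/to/ProgramReduce
module Main where

main :: IO ()
main = interact reduceStep         -- plug the real reduce logic in here
  where
    reduceStep = id

That way nothing in the -mapper or -reducer values has to be quoted or split on whitespace by Streaming.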
Failing that, this sounds like an S3 permissions or MIME-type issue. I would check the permissions on your bucket to verify that the credentials you use for the EMR job can actually access it.
Once you've confirmed that, I would check the permissions and properties on the binary itself; I have run into some odd problems when the S3 MIME type was set incorrectly (compare, for example, against the metadata of the sample wordSplitter.py). Your binary may be defaulting to a MIME type that somehow gets in the way of executing it.