Unable to create an external table with Hive on an AWS EMR cluster when the location points to an S3 path

vecaoik1 asked on 2021-06-24 in Hive · 3 answers

I am trying to create an external table using the Hive service on an AWS EMR cluster. The external table points to an S3 location. Here is my table definition:

CREATE EXTERNAL TABLE IF NOT EXISTS Myschema.MyTable
(
   columnA INT,
   columnB INT,
   columnC STRING
)
PARTITIONED BY ( columnD INT )
STORED AS PARQUET
LOCATION 's3://{bucket-location}/{key-path}/';

Here is the exception I get:

2019-04-11T14:44:59,449 INFO  [6a95bad7-18e7-49de-856d-43219b7c5069 main([])]: util.PlatformInfo (PlatformInfo.java:getJobFlowId(54)) - Unable to read clusterId from http://localhost:8321/configuration, trying extra instance data file: /var/lib/instance-controller/extraInstanceData.json
2019-04-11T14:44:59,450 INFO  [6a95bad7-18e7-49de-856d-43219b7c5069 main([])]: util.PlatformInfo (PlatformInfo.java:getJobFlowId(61)) - Unable to read clusterId from /var/lib/instance-controller/extraInstanceData.json, trying EMR job-flow data file: /var/lib/info/job-flow.json
2019-04-11T14:44:59,450 INFO  [6a95bad7-18e7-49de-856d-43219b7c5069 main([])]: util.PlatformInfo (PlatformInfo.java:getJobFlowId(69)) - Unable to read clusterId from /var/lib/info/job-flow.json, out of places to look
2019-04-11T14:45:01,073 INFO  [6a95bad7-18e7-49de-856d-43219b7c5069 main([])]: conf.HiveConf (HiveConf.java:getLogIdVar(3956)) - Using the default value passed in for log id: 6a95bad7-18e7-49de-856d-43219b7c5069
2019-04-11T14:45:01,073 INFO  [6a95bad7-18e7-49de-856d-43219b7c5069 main([])]: session.SessionState (SessionState.java:resetThreadName(432)) - Resetting thread name to  main
2019-04-11T14:45:01,072 ERROR [6a95bad7-18e7-49de-856d-43219b7c5069 main([])]: ql.Driver (SessionState.java:printError(1126)) - FAILED: $ComputationException java.lang.ArrayIndexOutOfBoundsException: 16227
com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.util.$ComputationException: java.lang.ArrayIndexOutOfBoundsException: 16227
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.util.$MapMaker$StrategyImpl.compute(MapMaker.java:553)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.util.$MapMaker$StrategyImpl.compute(MapMaker.java:419)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.util.$CustomConcurrentHashMap$ComputingImpl.get(CustomConcurrentHashMap.java:2041)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.util.$StackTraceElements.forMember(StackTraceElements.java:53)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.Errors.formatSource(Errors.java:690)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.Errors.format(Errors.java:555)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.ProvisionException.getMessage(ProvisionException.java:59)
        at java.lang.Throwable.getLocalizedMessage(Throwable.java:391)
        at java.lang.Throwable.toString(Throwable.java:480)
        at java.lang.Throwable.<init>(Throwable.java:311)
        at java.lang.Exception.<init>(Exception.java:102)
        at org.apache.hadoop.hive.ql.metadata.HiveException.<init>(HiveException.java:41)
        at org.apache.hadoop.hive.ql.parse.SemanticException.<init>(SemanticException.java:41)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.toReadEntity(BaseSemanticAnalyzer.java:1659)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.toReadEntity(BaseSemanticAnalyzer.java:1651)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.toReadEntity(BaseSemanticAnalyzer.java:1647)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:11968)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:11020)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11133)
        at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
        at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 16227
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.asm.$ClassReader.readClass(Unknown Source)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.asm.$ClassReader.accept(Unknown Source)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.asm.$ClassReader.accept(Unknown Source)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.util.$LineNumbers.<init>(LineNumbers.java:62)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.util.$StackTraceElements$1.apply(StackTraceElements.java:36)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.util.$StackTraceElements$1.apply(StackTraceElements.java:33)
        at com.amazon.ws.emr.hadoop.fs.shaded.com.google.inject.internal.util.$MapMaker$StrategyImpl.compute(MapMaker.java:549)
        ... 37 more

Note: when I create the same table with an HDFS location instead, it is created successfully.

ubof19bj 1#

Run hadoop fs -ls s3:// from the master node and see whether you get the same error (Caused by: java.lang.ArrayIndexOutOfBoundsException: 16227). Also check that the IAM role used by the cluster has sufficient S3/DynamoDB permissions.
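For example, a minimal sketch of that check (my-bucket is a placeholder for your own bucket):

# Run from the EMR master node. If this reproduces the same
# ArrayIndexOutOfBoundsException, the problem lies in EMRFS/cluster
# configuration or permissions, not in the Hive DDL itself.
hadoop fs -ls s3://my-bucket/

# Confirm which IAM identity the instance profile resolves to and
# whether it can list the bucket (the AWS CLI ships on EMR nodes).
aws sts get-caller-identity
aws s3 ls s3://my-bucket/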

t3irkdon 2#

I'm not sure what the exact problem is, but when I ran into this I was able to make it work by using a newly created S3 bucket. Hive just didn't like my old bucket.
Edit: I was actually able to fix this with an existing bucket. My EMR configuration had an invalid value for fs.s3.maxConnections. When I set it to a valid value and built a new cluster, the problem went away.
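For reference, a minimal sketch of setting that property at cluster creation through the emrfs-site configuration classification (the release label and instance settings are placeholders, and 500 is just EMRFS's documented default, not a recommendation):

# Recreate the cluster with an explicit, valid fs.s3.maxConnections.
aws emr create-cluster \
    --name "hive-cluster" \
    --release-label emr-5.23.0 \
    --applications Name=Hive \
    --use-default-roles \
    --instance-type m5.xlarge \
    --instance-count 3 \
    --configurations '[{"Classification":"emrfs-site","Properties":{"fs.s3.maxConnections":"500"}}]'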

lymnna71 3#

After debugging the Hadoop and AWS code, I found that the java.lang.ArrayIndexOutOfBoundsException has nothing to do with the real underlying error.
In fact, EMR/Hadoop had already raised a different error (which one depends on your situation), but while formatting that error message it triggered this second exception, java.lang.ArrayIndexOutOfBoundsException. There is a Guice issue about this: https://github.com/google/guice/issues/757
To find the real cause behind it, you have a couple of options (both are sketched after this list):
1. Simulate what you are doing with a plain command and enable debug mode. For example, I got the error while reading/writing S3 data through EMRFS, so I used the command hdfs dfs -ls s3://xx/ instead. Before running it, I enabled debug mode with export HADOOP_ROOT_LOGGER=DEBUG,console, which can surface some interesting errors.
2. If the first option still shows nothing, do what I did:
2.1 export HADOOP_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
2.2 Run the command hdfs dfs -ls s3://x/ again. Because I declared suspend=y, it will wait for a remote debugger client to connect to the JVM.
2.3 Attach to the JVM with your IDE's debugger. Before that, of course, you need to import or download the relevant JARs into the IDE.
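A minimal sketch of both approaches (the bucket name is a placeholder):

# Option 1: raise Hadoop's log level to DEBUG on the console and
# re-run the failing S3 access outside of Hive.
export HADOOP_ROOT_LOGGER=DEBUG,console
hdfs dfs -ls s3://my-bucket/

# Option 2: suspend the JVM at startup and wait for a remote debugger
# (e.g. an IDE) to attach on port 5005, then step to the real error.
export HADOOP_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
hdfs dfs -ls s3://my-bucket/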
Amazon really needs to fix this Google Guice library bug by upgrading the bundled version.
