InvocationTargetException in a Hadoop YARN task

1zmg4dgp · posted 2021-05-29 in Hadoop

When running Kafka -> Apache Apex -> HBase, the YARN task shows the following exception:

    com.datatorrent.stram.StreamingAppMasterService: Application master, appId=4, clustertimestamp=1479188884109, attemptId=2
    2016-11-15 11:59:51,068 INFO org.apache.hadoop.service.AbstractService: Service com.datatorrent.stram.StreamingAppMasterService failed in state INITED; cause: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
    java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:130)
        at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:156)
        at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:241)
        at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:333)
        at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:330)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
        at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:330)
        at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:444)
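
For what it's worth, `java.lang.reflect.InvocationTargetException` is only a reflection wrapper: `AbstractFileSystem.newInstance` constructs the file system through a reflective constructor call, so the real failure sits further down the cause chain. A minimal sketch of how to surface it (the `RootCause` helper and the sample exception below are illustrative, not from the actual log):

    import java.lang.reflect.InvocationTargetException;

    /** Walks a cause chain to surface the error hidden behind reflection wrappers. */
    public final class RootCause {
        public static Throwable of(Throwable t) {
            while (t.getCause() != null && t.getCause() != t) {
                t = t.getCause();
            }
            return t;
        }

        public static void main(String[] args) {
            // Simulates the wrapping seen in the log:
            // RuntimeException -> InvocationTargetException -> real cause
            RuntimeException e = new RuntimeException(
                new InvocationTargetException(new java.io.IOException("real failure")));
            System.out.println(RootCause.of(e)); // prints: java.io.IOException: real failure
        }
    }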

My DataTorrent logs show the exception below. The application I am running is the Kafka -> Apex -> HBase streaming application.

    Connecting to ResourceManager at hduser1/127.0.0.1:8032
    16/11/15 17:47:38 WARN client.EventsAgent: Cannot read events for application_1479208737206_0008: java.io.FileNotFoundException: File does not exist: /user/hduser1/datatorrent/apps/application_1479208737206_0008/events/index.txt
        at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
        at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1893)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1834)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1814)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1786)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:552)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
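
This second trace is the NameNode rejecting `getBlockLocations` because `/user/hduser1/datatorrent/apps/application_1479208737206_0008/events/index.txt` was never written, which usually means the application master failed before it produced any events. A quick, hedged way to confirm the file's absence from Java (the path is taken from the log above; a plain `hdfs dfs -ls` on the same path works just as well):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CheckEventsIndex {
        public static void main(String[] args) throws Exception {
            // Picks up core-site.xml/hdfs-site.xml from the classpath
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path index = new Path(
                "/user/hduser1/datatorrent/apps/application_1479208737206_0008/events/index.txt");
            System.out.println(index + " exists: " + fs.exists(index));
        }
    }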

Adding the code:

    public void populateDAG(DAG dag, Configuration conf) {
        KafkaSinglePortInputOperator in
            = dag.addOperator("kafkaIn", new KafkaSinglePortInputOperator());
        in.setInitialOffset(AbstractKafkaInputOperator.InitialOffset.EARLIEST.name());
        LineOutputOperator out = dag.addOperator("fileOut", new LineOutputOperator());
        dag.addStream("data", in.outputPort, out.input);
    }
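
As a side note, assuming this is the malhar-kafka (0.9 API) operator, it also needs a broker list and a topic before it can connect. A sketch with placeholder values (these can equally be set via `dt.operator.kafkaIn.prop.*` properties instead of code):

    KafkaSinglePortInputOperator in
        = dag.addOperator("kafkaIn", new KafkaSinglePortInputOperator());
    in.setClusters("localhost:9092");  // assumption: local single-broker Kafka
    in.setTopics("test");              // assumption: topic name
    in.setInitialOffset(AbstractKafkaInputOperator.InitialOffset.EARLIEST.name());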

LineOutputOperator extends AbstractFileOutputOperator:

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;
    import javax.validation.constraints.NotNull;
    import com.datatorrent.lib.io.fs.AbstractFileOutputOperator;

    public class LineOutputOperator extends AbstractFileOutputOperator<byte[]> {
        private static final String NL = System.lineSeparator();
        private static final Charset CS = StandardCharsets.UTF_8;

        @NotNull
        private String baseName;

        // Append a line separator to each tuple and write it out as UTF-8
        @Override
        public byte[] getBytesForTuple(byte[] t) {
            String result = new String(t, CS) + NL;
            return result.getBytes(CS);
        }

        // All tuples go to the same file, named by the baseName property
        @Override
        protected String getFileName(byte[] tuple) {
            return baseName;
        }

        public String getBaseName() { return baseName; }
        public void setBaseName(String v) { baseName = v; }
    }
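
Also worth double-checking: besides `baseName`, the `AbstractFileOutputOperator` base class requires its `filePath` property (the output directory) to be set, or the DAG fails validation at launch. A hedged sketch of wiring both in `populateDAG`, with placeholder directory and file names:

    LineOutputOperator out = dag.addOperator("fileOut", new LineOutputOperator());
    out.setFilePath("/user/hduser1/output");  // assumption: HDFS output directory
    out.setBaseName("lines.txt");             // assumption: output file name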

How do I resolve this issue?
Thanks.


yrwegjxp1#

Could you share some details about your environment, such as which versions of Hadoop and Apex you are running? Also, which log does this exception appear in?
As a simple sanity check, could you run the simple maven archetype-generated application as described at: http://docs.datatorrent.com/beginner/
If that works, try running the fileIO and kafka applications from: https://github.com/datatorrent/examples/tree/master/tutorials
If those run fine, we can look at the details of your code.


bogh5gae2#

I found the solution.
The problem was related to my expired license; after installing a fresh one, the actual code works fine.
