Failing Oozie launcher in yarn-cluster mode

sigwle7e, posted 2021-06-02 in Hadoop

So I'm trying to run a Spark job in yarn-cluster mode (I've run it successfully in both local mode and yarn-client mode), but I'm running into a failing Oozie launcher. Below is the error message from stderr.

  Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.JavaMain], main() threw exception, java.lang.NoSuchMethodError: org.apache.spark.network.util.JavaUtils.byteStringAsBytes(Ljava/lang/String;)J
  org.apache.oozie.action.hadoop.JavaMainException: java.lang.NoSuchMethodError: org.apache.spark.network.util.JavaUtils.byteStringAsBytes(Ljava/lang/String;)J
      at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:60)
      at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:46)
      at org.apache.oozie.action.hadoop.JavaMain.main(JavaMain.java:38)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:497)
      at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:228)
      at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
      at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
      at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
      at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:370)
      at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runTask(LocalContainerLauncher.java:295)
      at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.access$200(LocalContainerLauncher.java:181)
      at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler$1.run(LocalContainerLauncher.java:224)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)
  Caused by: java.lang.NoSuchMethodError: org.apache.spark.network.util.JavaUtils.byteStringAsBytes(Ljava/lang/String;)J
      at org.apache.spark.util.Utils$.memoryStringToMb(Utils.scala:993)
      at org.apache.spark.util.MemoryParam$.unapply(MemoryParam.scala:27)
      at org.apache.spark.deploy.yarn.ClientArguments.parseArgs(ClientArguments.scala:168)
      at org.apache.spark.deploy.yarn.ClientArguments.<init>(ClientArguments.scala:58)
      at org.apache.spark.deploy.yarn.Client$.main(Client.scala:966)
      at org.apache.spark.deploy.yarn.Client.main(Client.scala)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:497)
      at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
      at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
      at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
      at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
      at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:497)
      at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:57)
      ... 19 more

The job runs on Spark 1.5.2, so I downloaded spark-assembly-1.5.2-hadoop2.6.0.jar to HDFS, set the spark.yarn.jar property to the jar's path, and set the oozie.libpath property to the directory containing the jar.
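For reference, the settings described above might look like the following. The HDFS directory name is a placeholder I'm assuming for illustration, not a path from the original post:

```
# job.properties (HDFS path is hypothetical)
oozie.libpath=hdfs:///user/me/spark-libs

# passed via spark-opts in workflow.xml, or set in spark-defaults.conf
spark.yarn.jar=hdfs:///user/me/spark-libs/spark-assembly-1.5.2-hadoop2.6.0.jar
```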
I searched the classpath section of stdout for other versions of Spark that might be getting picked up, and found two instances of spark-1.3.0-cdh5.4.5-yarn-shuffle.jar on the classpath (fortunately, spark-assembly-1.5.2-hadoop2.6.0.jar was also being picked up elsewhere, so I had set the path correctly).
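One way to carry out that classpath check is to extract the Spark jar names from the classpath printed in stdout. A minimal sketch, assuming the launcher's stdout was saved to a file named stdout.log; the sample classpath line here is fabricated for illustration:

```shell
# write a sample classpath line like the one the Oozie launcher prints to stdout
printf '%s\n' '/opt/lib/spark-1.3.0-cdh5.4.5-yarn-shuffle.jar:/user/libs/spark-assembly-1.5.2-hadoop2.6.0.jar' > stdout.log

# list the unique Spark jars that appear on the classpath
grep -o 'spark[^/:]*\.jar' stdout.log | sort -u
```

Seeing more than one Spark version in this list is a strong hint of exactly the kind of conflict behind a NoSuchMethodError.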
So the problem seems to be that Oozie, or the Oozie launcher, is for some reason defaulting to the Spark 1.3 installed on the system the job runs on. I tried setting the oozie.use.system.libpath property to false in the job.properties file, but that didn't seem to help. Any ideas on what I can do to keep Spark 1.3 from being picked up, or any other way to resolve the NoSuchMethodError I'm facing?
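As a sketch of what was tried, plus a commonly suggested variant: oozie.use.system.libpath=false keeps the Oozie system sharelib (which may contain the old Spark 1.3 jars) off the classpath, while oozie.action.sharelib.for.spark can instead point the Spark action at a custom sharelib directory. Directory names here are hypothetical:

```
# job.properties
oozie.use.system.libpath=false
oozie.libpath=hdfs:///user/me/spark-libs

# alternative: keep the system libpath on but redirect the spark action
# to a custom sharelib directory (directory name is hypothetical)
oozie.action.sharelib.for.spark=spark-1.5.2
```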
Any help would be greatly appreciated, thanks.
