When I submit a small application with spark-submit, it runs fine on my YARN-based Spark cluster, like this:
~/spark-1.4.0-bin-hadoop2.4$ bin/spark-submit --class MyClass --master yarn-cluster --queue testing myApp.jar hdfs://nameservice1/user/XXX/README.md_count
However, I would like to avoid uploading the spark-assembly.jar file on every submission, so I set the spark.yarn.jar configuration parameter:
~/spark-1.4.0-bin-hadoop2.4$ bin/spark-submit --class MyClass --master yarn-cluster --queue testing --conf "spark.yarn.jar=hdfs://nameservice1/user/spark/share/lib/spark-assembly.jar" myApp.jar hdfs://nameservice1/user/XXX/README.md_count
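(Side note: instead of passing --conf on every call, the property could presumably also be persisted in conf/spark-defaults.conf; a minimal sketch, assuming the same HDFS path as above:)
# append the property to this Spark installation's spark-defaults.conf (path/value assumed from the command above)
~/spark-1.4.0-bin-hadoop2.4$ echo "spark.yarn.jar hdfs://nameservice1/user/spark/share/lib/spark-assembly.jar" >> conf/spark-defaults.conf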
At first this seems to go well:
15/07/08 13:57:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/07/08 13:57:18 INFO yarn.Client: Requesting a new application from cluster with 24 NodeManagers
15/07/08 13:57:18 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/07/08 13:57:18 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/07/08 13:57:18 INFO yarn.Client: Setting up container launch context for our AM
15/07/08 13:57:18 INFO yarn.Client: Preparing resources for our AM container
15/07/08 13:57:18 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs://nameservice1/user/spark/share/lib/spark-assembly.jar
[...]
However, it eventually fails:
15/07/08 13:57:18 INFO yarn.Client: Submitting application 670 to ResourceManager
15/07/08 13:57:18 INFO impl.YarnClientImpl: Submitted application application_1434986503384_0670
15/07/08 13:57:19 INFO yarn.Client: Application report for application_1434986503384_0670 (state: ACCEPTED)
15/07/08 13:57:19 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: testing
start time: 1436356638869
final status: UNDEFINED
tracking URL: http://node-00a/cluster/app/application_1434986503384_0670
user: XXX
15/07/08 13:57:20 INFO yarn.Client: Application report for application_1434986503384_0670 (state: ACCEPTED)
15/07/08 13:57:21 INFO yarn.Client: Application report for application_1434986503384_0670 (state: ACCEPTED)
15/07/08 13:57:23 INFO yarn.Client: Application report for application_1434986503384_0670 (state: FAILED)
15/07/08 13:57:23 INFO yarn.Client:
client token: N/A
diagnostics: Application application_1434986503384_0670 failed 2 times due to AM Container for appattempt_1434986503384_0670_000002 exited with exitCode: 1 due to: Exception from container-launch.
Container id: container_1434986503384_0670_02_000001
Exit code: 1
[...]
In the YARN logs I found the following error message, which indicates that the arguments are being passed incorrectly:
Container: container_1434986503384_0670_01_000001 on node-01b_8041
===================================================================================================
LogType:stderr
Log Upload Time:Mi Jul 08 13:57:22 +0200 2015
LogLength:764
Log Contents:
Unknown/unsupported param List(--arg, hdfs://nameservice1/user/XXX/README.md_count, --executor-memory, 1024m, --executor-cores, 1, --num-executors, 2)
Usage: org.apache.spark.deploy.yarn.ApplicationMaster [options]
Options:
--jar JAR_PATH Path to your application's JAR file (required)
--class CLASS_NAME Name of your application's main class (required)
--args ARGS Arguments to be passed to your application's main class.
Mutliple invocations are possible, each will be passed in order.
--num-executors NUM Number of executors to start (Default: 2)
--executor-cores NUM Number of cores for the executors (Default: 1)
--executor-memory MEM Memory per executor (e.g. 1000M, 2G) (Default: 1G)
End of LogType:stderr
Since the very same application runs fine when the local assembly file is uploaded at submission time, it seems to come down to the assembly file. Could the version on the cluster be a wrong/different one? How can I verify that? What could be the cause? Could the warning WARN util.NativeCodeLoader: ... be related?
The same thing happens when I set the (deprecated) environment variable SPARK_JAR instead of spark.yarn.jar.
1 Answer
To ask the obvious question here: are you sure the spark-assembly.jar on HDFS is the same as your local one? If not, could you try uploading your local Spark assembly to your home directory on HDFS and retrying?
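A minimal sketch of how this could be checked and fixed, assuming the paths from your commands above (the exact assembly filename under lib/ may differ in your distribution):
# 1) compare checksums of the local assembly and the one on HDFS (fetched to a temp file)
~/spark-1.4.0-bin-hadoop2.4$ hdfs dfs -get hdfs://nameservice1/user/spark/share/lib/spark-assembly.jar /tmp/spark-assembly-cluster.jar
~/spark-1.4.0-bin-hadoop2.4$ md5sum lib/spark-assembly-1.4.0-hadoop2.4.0.jar /tmp/spark-assembly-cluster.jar
# 2) if they differ, upload the local assembly to your HDFS home directory and point spark.yarn.jar at it
~/spark-1.4.0-bin-hadoop2.4$ hdfs dfs -put lib/spark-assembly-1.4.0-hadoop2.4.0.jar /user/XXX/spark-assembly-1.4.0-hadoop2.4.0.jar
~/spark-1.4.0-bin-hadoop2.4$ bin/spark-submit --class MyClass --master yarn-cluster --queue testing --conf "spark.yarn.jar=hdfs://nameservice1/user/XXX/spark-assembly-1.4.0-hadoop2.4.0.jar" myApp.jar hdfs://nameservice1/user/XXX/README.md_count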