I am running the below spark-submit command on a Dataproc cluster, but I noticed that a few of the Spark configurations are being ignored. Can I know the reason why they are ignored?
gcloud dataproc jobs submit spark --cluster=<Cluster> --class=<class_name> --jars=<list_of_jars> --region=<region> --files=<list_of_files> --properties=spark.driver.extraJavaOptions="-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties",spark.executor.extraJavaOptions="-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties, spark.executor.instances=36, spark.executor.cores=4, spark.executor.memory=4G, spark.driver.memory=8G, spark.shuffle.service.enabled=true, spark.yarn.maxAppAttempts=1, spark.sql.shuffle.partitions=200, spark.executor.memoryOverhead=7680, spark.driver.maxResultSize=0, spark.port.maxRetries=250, spark.dynamicAllocation.initialExecutors=20, spark.dynamicAllocation.minExecutors=10"
Warning: Ignoring non-Spark config property: spark.driver.maxResultSize
Warning: Ignoring non-Spark config property: spark.driver.memory
Warning: Ignoring non-Spark config property: spark.dynamicAllocation.minExecutors
Warning: Ignoring non-Spark config property: spark.executor.cores
Warning: Ignoring non-Spark config property: spark.port.maxRetries
Warning: Ignoring non-Spark config property: spark.yarn.maxAppAttempts
Warning: Ignoring non-Spark config property: spark.dynamicAllocation.initialExecutors
Warning: Ignoring non-Spark config property: spark.executor.memory
Warning: Ignoring non-Spark config property: spark.executor.memoryOverhead
Warning: Ignoring non-Spark config property: spark.sql.shuffle.partitions
Warning: Ignoring non-Spark config property: spark.executor.instances
2 Answers
njthzxwz1#
Try the below. Those settings are not part of extraJavaOptions; they belong to properties as their own entries. Also note that gcloud splits --properties on commas, so a space after a comma becomes part of the next key name, and Spark ignores any property whose name does not start with spark. — which is exactly what the "Ignoring non-Spark config property" warnings are telling you. In a more readable form:
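A sketch of the corrected command, using the same placeholders as in the question. The two changes are closing each extraJavaOptions quote right after its own value and removing the spaces after the commas (the trailing backslashes are ordinary shell line continuations, so this is still a single --properties argument):

gcloud dataproc jobs submit spark \
  --cluster=<Cluster> \
  --class=<class_name> \
  --jars=<list_of_jars> \
  --region=<region> \
  --files=<list_of_files> \
  --properties=spark.driver.extraJavaOptions="-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties",\
spark.executor.extraJavaOptions="-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties",\
spark.executor.instances=36,\
spark.executor.cores=4,\
spark.executor.memory=4G,\
spark.driver.memory=8G,\
spark.shuffle.service.enabled=true,\
spark.yarn.maxAppAttempts=1,\
spark.sql.shuffle.partitions=200,\
spark.executor.memoryOverhead=7680,\
spark.driver.maxResultSize=0,\
spark.port.maxRetries=250,\
spark.dynamicAllocation.initialExecutors=20,\
spark.dynamicAllocation.minExecutors=10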
mitkmikd2#
Can you try this?
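One option worth trying is gcloud's alternate-delimiter syntax (documented under gcloud topic escaping): a leading ^#^ makes # the separator for --properties, so commas and spaces inside a value no longer collide with the list parsing. A sketch with the same placeholders as above; note the whole value is single-quoted for the shell:

gcloud dataproc jobs submit spark \
  --cluster=<Cluster> \
  --class=<class_name> \
  --jars=<list_of_jars> \
  --region=<region> \
  --files=<list_of_files> \
  --properties='^#^spark.driver.extraJavaOptions=-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties#spark.executor.extraJavaOptions=-Dconfig.file=application_dev.json -Dlog4j.configuration=log4j.properties#spark.executor.instances=36#spark.executor.cores=4#spark.executor.memory=4G#spark.driver.memory=8G#spark.shuffle.service.enabled=true#spark.yarn.maxAppAttempts=1#spark.sql.shuffle.partitions=200#spark.executor.memoryOverhead=7680#spark.driver.maxResultSize=0#spark.port.maxRetries=250#spark.dynamicAllocation.initialExecutors=20#spark.dynamicAllocation.minExecutors=10'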