Spark on YARN: AM container size

jdgnovmf  posted on 2021-05-26  in Spark

I'm running a Spark job on YARN and I'm having trouble with the container size. I deploy it in cluster mode. At startup, I see the following line in the logs.

INFO yarn.Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead

Occasionally, since the job is fairly large, I get:

INFO - ERROR yarn.Client: Application diagnostics message: Application application_1606984713739_1759 failed 2 times due to AM Container for appattempt_1606984713739_1759_000002 exited with  exitCode: -104    
INFO - Failing this attempt.Diagnostics: Container [pid=386064,containerID=container_e69_1606984713739_1759_02_000001] is running 5566464B beyond the 'PHYSICAL' memory limit. Current usage: 1.5 GB of 1.5 GB physical memory used; 3.3 GB of 3.1 GB virtual memory used. Killing container.

How do I increase the AM size? My understanding is that it takes its memory from spark.driver.memory, or from something related such as yarn.scheduler.minimum-allocation-mb (I have the minimum set to 1 GB and the maximum to 32 GiB).
The Spark configuration for this job looks like the following (I can see it in the Spark UI's Environment tab, so it is definitely being set):

(spark.submit.deployMode,cluster)
...
(spark.executor.extraJavaOptions,-XX:InitiatingHeapOccupancyPercent=35)
(spark.executor.memory,20g)
(spark.executor.cores,5)
(spark.executor.instances,8)
(spark.driver.memoryOverhead,4g)
(spark.driver.memory,12g)
(spark.dynamicAllocation.minExecutors,4)
(spark.dynamicAllocation.maxExecutors,30)
(spark.dynamicAllocation.enabled,true)
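I'm not sure whether it matters *where* these settings are supplied. For illustration, this is roughly what passing them directly to spark-submit would look like (the jar, class, and option values here are placeholders, not my actual job):

```shell
# Hypothetical spark-submit invocation; com.example.MyJob and
# my-job.jar are placeholders, not my real application.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 12g \
  --conf spark.driver.memoryOverhead=4g \
  --executor-memory 20g \
  --class com.example.MyJob \
  my-job.jar
```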

What is making my AM container end up so small?

No answers yet!
