I ran a Python script like this:
spark-submit \
--master yarn \
--deploy-mode client \
--driver-memory 2G \
--driver-cores 2 \
--executor-memory 8G \
--num-executors 3 \
--executor-cores 3 \
script.py
and got logs like this:
spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
[Stage 1:=================================================> (13 + 2) / 15]18/04/13 13:49:18 ERROR YarnScheduler: Lost executor 3 on serverw19.domain: Container killed by YARN for exceeding memory limits. 12.0 GB of 12 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
[Stage 1:=====================================================> (14 + 1) / 15]18/04/13 14:01:43 ERROR YarnScheduler: Lost executor 1 on serverw51.domain: Container killed by YARN for exceeding memory limits. 12.0 GB of 12 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
[Stage 1:====================================================> (14 + -1) / 15]18/04/13 14:02:48 ERROR YarnScheduler: Lost executor 2 on serverw15.domain: Container killed by YARN for exceeding memory limits. 12.0 GB of 12 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
[Stage 1:====================================================> (14 + -8) / 15]18/04/13 14:02:49 ERROR YarnScheduler: Lost an executor 2 (already removed): Pending loss reason.
[Stage 1:=======================================================(26 + -11) / 15]18/04/13 14:29:53 ERROR YarnScheduler: Lost executor 5 on serverw38.domain: Container killed by YARN for exceeding memory limits. 12.0 GB of 12 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
[Stage 1:=======================================================(28 + -13) / 15]18/04/13 14:43:35 ERROR YarnScheduler: Lost executor 6 on serverw10.domain: Slave lost
18/04/13 14:43:35 ERROR TransportChannelHandler: Connection to serverw22.domain/10.252.139.122:54308 has been quiet for 120000 ms while there are outstanding requests. Assuming connection is dead; please adjust spark.network.timeout if this is wrong.
[Stage 1:=======================================================(28 + -15) / 15]18/04/13 14:44:22 ERROR TransportClient: Failed to send RPC 9128980605450004417 to serverw22.domain/10.252.139.122:54308: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
18/04/13 14:44:22 ERROR YarnScheduler: Lost executor 4 on serverw36.domain: Slave lost
[Stage 1:=======================================================(31 + -25) / 15]18/04/13 15:05:11 ERROR TransportClient: Failed to send RPC 7766740408770504900 to serverw22.domain/10.252.139.122:54308: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
18/04/13 15:05:11 ERROR YarnScheduler: Lost executor 7 on serverw38.domain: Slave lost
[Stage 1:=======================================================(31 + -25) / 15]
What do the values in the progress bars mean: (13 + 2) / 15 at first, later (28 + -13) / 15, and finally (31 + -25) / 15?
Why are the executors being lost?
Is this application dead and should I kill it, or will it still complete successfully?
Regards
1 Answer
What do the values in the progress bars mean: (13 + 2) / 15 at first, later (28 + -13) / 15, and finally (31 + -25) / 15?
The first number is the count of tasks (partitions) already completed in the current stage.
The second number is the count of tasks currently running. If it goes negative, it means that results from some partitions were invalidated (for example, because the executor holding them was lost) and those tasks must be recomputed.
Finally, the last number is the total number of tasks in the current stage.
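To make the format concrete, here is a minimal sketch (my own illustration, not Spark's actual code) of how the console progress counter is assembled from those three values:

```python
def progress(completed: int, active: int, total: int) -> str:
    """Format a Spark-style progress counter: (completed + active) / total."""
    return f"({completed} + {active}) / {total}"

# 13 tasks done, 2 running, 15 total in the stage:
print(progress(13, 2, 15))    # (13 + 2) / 15
# After executor losses invalidate finished results, the running
# count is decremented below zero and the counters look corrupted:
print(progress(28, -13, 15))  # (28 + -13) / 15
```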
Why are the executors being lost?
As the error message in the log says, the tasks are using more memory than the YARN container allocated to the executor allows, so YARN kills the container (and with it the executor).
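For reference, the 12 GB limit in the log is the YARN container size: executor heap plus spark.yarn.executor.memoryOverhead. The arithmetic below is a sketch; the 4 GiB overhead is an assumption inferred from the 12 GB limit (the default rule would give far less), so treat it as illustrative only:

```python
# YARN container limit = executor heap + off-heap overhead.
executor_memory_gb = 8   # from --executor-memory 8G
memory_overhead_gb = 4   # hypothetical explicit setting; chosen to match the 12 GB limit in the log

# Spark's default overhead rule: max(384 MiB, 10% of executor memory).
default_overhead_gb = max(0.384, 0.10 * executor_memory_gb)

container_limit_gb = executor_memory_gb + memory_overhead_gb
print(container_limit_gb)   # 12, the limit YARN enforced in the log
print(default_overhead_gb)  # 0.8, what the default would have been
```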
Is this application dead and should I kill it, or will it still complete successfully?
Normally Spark should be able to finish the application one way or the other (whether it ends successfully or with an error). In this case, however, I would not hold out much hope for it finishing cleanly, so if I were you I would kill it and review the memory settings.
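One way to act on the log's suggestion is to raise spark.yarn.executor.memoryOverhead explicitly (and possibly trade some executor heap for it). The values below are illustrative, not tuned for this workload:

```shell
spark-submit \
  --master yarn \
  --deploy-mode client \
  --driver-memory 2G \
  --driver-cores 2 \
  --executor-memory 6G \
  --num-executors 3 \
  --executor-cores 3 \
  --conf spark.yarn.executor.memoryOverhead=6144 \
  script.py
```

Here the container would still request 12 GB total (6 GB heap + 6 GiB overhead), but more of it is left for off-heap allocations, which is usually what pushes a container over the YARN limit.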