Driver stops executors for no apparent reason

vbopmzt1  · Posted 2021-05-27 in Spark
Follow (0) | Answers (1) | Views (1143)

I have an application based on Spark Structured Streaming 3 and Kafka that processes user logs. After running for a while, the driver starts killing the executors, and I can't figure out why. The executor logs contain no errors. I'm including the executor and driver logs below.
Executor 1:

20/08/31 10:01:31 INFO executor.Executor: Finished task 5.0 in stage 791.0 (TID 46411). 1759 bytes result sent to driver
20/08/31 10:01:33 INFO executor.YarnCoarseGrainedExecutorBackend: Driver commanded a shutdown

Executor 2:

20/08/31 10:14:33 INFO executor.YarnCoarseGrainedExecutorBackend: Driver commanded a shutdown
20/08/31 10:14:34 INFO memory.MemoryStore: MemoryStore cleared
20/08/31 10:14:34 INFO storage.BlockManager: BlockManager stopped
20/08/31 10:14:34 INFO util.ShutdownHookManager: Shutdown hook called

On the driver:

20/08/31 10:01:33 ERROR cluster.YarnScheduler: Lost executor 3 on xxx.xxx.xxx.xxx: Executor heartbeat timed out after 130392 ms

20/08/31 10:53:33 ERROR cluster.YarnScheduler: Lost executor 2 on xxx.xxx.xxx.xxx: Executor heartbeat timed out after 125773 ms
20/08/31 10:53:33 ERROR cluster.YarnScheduler: Ignoring update with state FINISHED for TID 129308 because its task set is gone (this is likely the result of receiving duplicate task finished status updates) or its executor has been marked as failed.
20/08/31 10:53:33 ERROR cluster.YarnScheduler: Ignoring update with state FINISHED for TID 129314 because its task set is gone (this is likely the result of receiving duplicate task finished status updates) or its executor has been marked as failed.
20/08/31 10:53:33 ERROR cluster.YarnScheduler: Ignoring update with state FINISHED for TID 129311 because its task set is gone (this is likely the result of receiving duplicate task finished status updates) or its executor has been marked as failed.
20/08/31 10:53:33 ERROR cluster.YarnScheduler: Ignoring update with state FINISHED for TID 129305 because its task set is gone (this is likely the result of receiving duplicate task finished status updates) or its executor has been marked as failed.

Has anyone run into the same problem and solved it?

rhfm7lfc

Looking at the information at hand:
no errors in the executor logs
the driver commanded a shutdown
the YARN logs show "state FINISHED"
this looks like expected behavior. It typically happens when you forget to await the termination of your Spark streaming query. If you do not call

query.awaitTermination()

your streaming application simply shuts down as soon as all currently available data has been processed: the driver's main method returns, the SparkSession is stopped, and the driver tells every executor to shut down.
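A minimal sketch of the fix, assuming a Kafka source (the app name, broker address, and topic below are placeholders, not taken from the question):

```scala
import org.apache.spark.sql.SparkSession

object StreamingJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("user-log-processor") // hypothetical app name
      .getOrCreate()

    // Assumed broker address and topic name — adapt to your environment.
    val logs = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "user-logs")
      .load()

    val query = logs.writeStream
      .format("console")
      .start()

    // Block the driver thread until the query is stopped or fails.
    // Without this call, main() returns immediately after start(),
    // the application exits, and the driver commands every executor
    // to shut down — matching the "Driver commanded a shutdown"
    // lines in the logs above.
    query.awaitTermination()
  }
}
```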
