I am running a Spark job on Kubernetes, and with larger amounts of data I frequently get "executor lost": the executors are killed and the job fails. I already did a kubectl logs -f on all running executor pods, but I never see any exception being thrown (I would expect something like an OutOfMemoryError or similar). The pods just suddenly stop calculating and are then removed right away, so they do not even stay around in an Error state that would let me dig around and troubleshoot. They just disappear.
How should I troubleshoot this? It looks to me like Kubernetes itself is killing the pods because they exceed some limit, but as far as I understand the pods should then end up in an Evicted state (or shouldn't they?).
It seems to be related to memory usage, because when I increase spark.executor.memory the job runs to completion (but with far fewer executors, which makes it a lot slower).
When running the job with local[*] as master, it runs to completion even with much lower memory settings.
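For reference, this is roughly how I have been inspecting the pods so far; the pod name is a placeholder:

# follow the logs of a running executor pod
kubectl logs -f <executor-pod-name>

# watch pod status transitions while the job is running
kubectl get pods -w

# while a pod still exists, show its last container state and recent events
kubectl describe pod <executor-pod-name>
kubectl get events --sort-by=.lastTimestamp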
Follow-up 1
I started the job with only one executor, did a kubectl logs -f on the executor pod, and watched the output of the driver (running in client mode). First, there is the "executor lost" message on the driver, then the executor pod exits, without any exception or error message.
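Roughly the invocation I used for this test; the API server address, image, main class and memory value are placeholders:

spark-submit \
  --master k8s://https://<apiserver-host>:<port> \
  --deploy-mode client \
  --conf spark.executor.instances=1 \
  --conf spark.executor.memory=4g \
  --conf spark.kubernetes.container.image=<my-spark-image> \
  --class <main-class> \
  <application-jar>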
Follow-up 2
When the executor dies, the driver's log looks like this:
20/08/18 10:36:40 INFO TaskSchedulerImpl: Removed TaskSet 15.0, whose tasks have all completed, from pool
20/08/18 10:36:40 INFO TaskSetManager: Starting task 3.0 in stage 18.0 (TID 1554, 10.244.1.64, executor 1, partition 3, NODE_LOCAL, 7717 bytes)
20/08/18 10:36:40 INFO DAGScheduler: ShuffleMapStage 15 (parquet at DataTasks.scala:208) finished in 5.913 s
20/08/18 10:36:40 INFO DAGScheduler: looking for newly runnable stages
20/08/18 10:36:40 INFO DAGScheduler: running: Set(ShuffleMapStage 18)
20/08/18 10:36:40 INFO DAGScheduler: waiting: Set(ShuffleMapStage 20, ShuffleMapStage 21, ResultStage 22)
20/08/18 10:36:40 INFO DAGScheduler: failed: Set()
20/08/18 10:36:40 INFO BlockManagerInfo: Added broadcast_11_piece0 in memory on 10.244.1.64:43809 (size: 159.0 KiB, free: 2.2 GiB)
20/08/18 10:36:40 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 1 to 10.93.111.35:20221
20/08/18 10:36:41 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 3 to 10.93.111.35:20221
20/08/18 10:36:49 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Disabling executor 1.
20/08/18 10:36:49 INFO DAGScheduler: Executor lost: 1 (epoch 12)
20/08/18 10:36:49 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
20/08/18 10:36:49 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, 10.244.1.64, 43809, None)
20/08/18 10:36:49 INFO BlockManagerMaster: Removed 1 successfully in removeExecutor
20/08/18 10:36:49 INFO DAGScheduler: Shuffle files lost for executor: 1 (epoch 12)
On the executor, it looks like this:
20/08/18 10:36:40 INFO Executor: Running task 3.0 in stage 18.0 (TID 1554)
20/08/18 10:36:40 INFO TorrentBroadcast: Started reading broadcast variable 11 with 1 pieces (estimated total size 4.0 MiB)
20/08/18 10:36:40 INFO MemoryStore: Block broadcast_11_piece0 stored as bytes in memory (estimated size 159.0 KiB, free 2.2 GiB)
20/08/18 10:36:40 INFO TorrentBroadcast: Reading broadcast variable 11 took 7 ms
20/08/18 10:36:40 INFO MemoryStore: Block broadcast_11 stored as values in memory (estimated size 457.3 KiB, free 2.2 GiB)
20/08/18 10:36:40 INFO MapOutputTrackerWorker: Don't have map outputs for shuffle 1, fetching them
20/08/18 10:36:40 INFO MapOutputTrackerWorker: Doing the fetch; tracker endpoint = NettyRpcEndpointRef(spark://MapOutputTracker@node01.maas:34271)
20/08/18 10:36:40 INFO MapOutputTrackerWorker: Got the output locations
20/08/18 10:36:40 INFO ShuffleBlockFetcherIterator: Getting 30 (142.3 MiB) non-empty blocks including 30 (142.3 MiB) local and 0 (0.0 B) host-local and 0 (0.0 B) remote blocks
20/08/18 10:36:40 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
20/08/18 10:36:41 INFO CodeGenerator: Code generated in 3.082897 ms
20/08/18 10:36:41 INFO CodeGenerator: Code generated in 5.132359 ms
20/08/18 10:36:41 INFO MapOutputTrackerWorker: Don't have map outputs for shuffle 3, fetching them
20/08/18 10:36:41 INFO MapOutputTrackerWorker: Doing the fetch; tracker endpoint = NettyRpcEndpointRef(spark://MapOutputTracker@node01.maas:34271)
20/08/18 10:36:41 INFO MapOutputTrackerWorker: Got the output locations
20/08/18 10:36:41 INFO ShuffleBlockFetcherIterator: Getting 0 (0.0 B) non-empty blocks including 0 (0.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) remote blocks
20/08/18 10:36:41 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
20/08/18 10:36:41 INFO CodeGenerator: Code generated in 6.770762 ms
20/08/18 10:36:41 INFO CodeGenerator: Code generated in 3.150645 ms
20/08/18 10:36:41 INFO CodeGenerator: Code generated in 2.81799 ms
20/08/18 10:36:41 INFO CodeGenerator: Code generated in 2.989827 ms
20/08/18 10:36:41 INFO CodeGenerator: Code generated in 3.024777 ms
20/08/18 10:36:41 INFO CodeGenerator: Code generated in 4.32011 ms
Then the executor exits.
What is strange: stage 18.0 starts with task 3.0, not with 1.0.
Follow-up 3
I have now changed the executor log level to DEBUG, and I noticed something interesting right before the executor exits:
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 KiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 KiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@4ef2dc4a
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 release 64.0 KiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@4ef2dc4a
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 acquired 128.0 KiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 release 64.0 KiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 acquired 256.0 KiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 release 128.0 KiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 acquired 512.0 KiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 release 256.0 KiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 acquired 1024.0 KiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 release 512.0 KiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 acquired 2.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 release 1024.0 KiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 acquired 4.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:26 DEBUG TaskMemoryManager: Task 1155 release 2.0 MiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:27 DEBUG TaskMemoryManager: Task 1155 acquired 8.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:27 DEBUG TaskMemoryManager: Task 1155 release 4.0 MiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:27 DEBUG TaskMemoryManager: Task 1155 acquired 16.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:27 DEBUG TaskMemoryManager: Task 1155 release 8.0 MiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:27 DEBUG TaskMemoryManager: Task 1155 acquired 32.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:27 DEBUG TaskMemoryManager: Task 1155 release 16.0 MiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:29 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:30 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:30 DEBUG TaskMemoryManager: Task 1155 release 32.0 MiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:34 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:34 DEBUG TaskMemoryManager: Task 1155 acquired 128.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:34 DEBUG TaskMemoryManager: Task 1155 release 64.0 MiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:36 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:36 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:37 DEBUG TaskMemoryManager: Task 1155 acquired 256.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:37 DEBUG TaskMemoryManager: Task 1155 release 128.0 MiB from org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:37 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:38 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:38 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:39 DEBUG TaskMemoryManager: Task 1155 acquired 64.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
20/08/18 14:19:39 DEBUG TaskMemoryManager: Task 1155 acquired 512.0 MiB for org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@5050038d
I gave the executor 4 GB of memory via spark.executor.memory, and these allocations add up to 1344 MB. With 4 GB of memory and the default memory split settings, 40% of it is about 1400 MB.
Can I somehow limit how much memory the UnsafeExternalSorter takes?
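For context, this is my understanding of how the memory is split, based on the documented defaults, so treat the exact formulas as an assumption on my part:

# on-heap unified region (execution + storage); defaults: ~300 MB reserved, spark.memory.fraction = 0.6
unified  ≈ (spark.executor.memory - 300 MB) * spark.memory.fraction
# the remaining heap is user memory
user     ≈ (spark.executor.memory - 300 MB) * (1 - spark.memory.fraction)
# on top of the heap, the executor pod requests non-heap overhead (default factor 0.1, at least 384 MB)
overhead ≈ max(spark.executor.memory * spark.kubernetes.memoryOverheadFactor, 384 MB)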
Follow-up 4
I ran into a rare case in which, for some reason, Spark did not kill the "completed" executor, and I could see that the pod was OOMKilled. It seems that spark.executor.memory sets both the requested memory of the pod and the memory configured for the Spark executor.
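For anyone else hitting this, this is roughly how the termination reason and the pod's memory request can be inspected (the pod name is a placeholder):

# why the executor container was last terminated (OOMKilled in this case)
kubectl get pod <executor-pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# the memory request/limit set on the executor container
kubectl get pod <executor-pod-name> -o jsonpath='{.spec.containers[0].resources}'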
1 Answer
Follow-up 4 was the answer. I ran the job again with kubectl get pod -w and saw the executor pods getting OOMKilled. I am now running with spark.kubernetes.memoryOverheadFactor=0.5 and spark.memory.fraction=0.2, setting spark.executor.memory high enough that only one executor starts per node, and setting spark.executor.cores to the node's core count minus 1. That way, it runs.
I also reworked my algorithm, because it had a large partition skew and had to do some computations that were not easy to parallelize, which caused a lot of shuffling.
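For illustration, the relevant part of my spark-submit now looks roughly like this; the executor memory and core count are placeholders that depend on the node size:

spark-submit \
  ... \
  --conf spark.kubernetes.memoryOverheadFactor=0.5 \
  --conf spark.memory.fraction=0.2 \
  --conf spark.executor.memory=<high enough that only one executor fits per node> \
  --conf spark.executor.cores=<node cores minus 1> \
  ...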