spark-submit keeps hanging after the job completes

Asked by 44u64gxh on 2021-05-29 in Hadoop

I am trying to test Spark 1.6 with HDFS on AWS. I am using the wordcount Python example provided in the examples folder. I submit the job with spark-submit; it completes successfully, and the results are printed on the console. The web UI also shows it as finished. However, spark-submit never exits. I have verified that the context is stopped in the word count example code as well.
What could be the problem?
This is what I see on the console:

2016-05-24 14:58:04,749 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
2016-05-24 14:58:04,749 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/stages/json,null}
2016-05-24 14:58:04,749 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/stages,null}
2016-05-24 14:58:04,749 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
2016-05-24 14:58:04,750 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
2016-05-24 14:58:04,750 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
2016-05-24 14:58:04,750 INFO  [Thread-3] handler.ContextHandler (ContextHandler.java:doStop(843)) - stopped o.s.j.s.ServletContextHandler{/jobs,null}
2016-05-24 14:58:04,802 INFO  [Thread-3] ui.SparkUI (Logging.scala:logInfo(58)) - Stopped Spark web UI at http://172.30.2.239:4040
2016-05-24 14:58:04,805 INFO  [Thread-3] cluster.SparkDeploySchedulerBackend (Logging.scala:logInfo(58)) - Shutting down all executors
2016-05-24 14:58:04,805 INFO  [dispatcher-event-loop-2] cluster.SparkDeploySchedulerBackend (Logging.scala:logInfo(58)) - Asking each executor to shut down
2016-05-24 14:58:04,814 INFO  [dispatcher-event-loop-5] spark.MapOutputTrackerMasterEndpoint (Logging.scala:logInfo(58)) - MapOutputTrackerMasterEndpoint stopped!
2016-05-24 14:58:04,818 INFO  [Thread-3] storage.MemoryStore (Logging.scala:logInfo(58)) - MemoryStore cleared
2016-05-24 14:58:04,818 INFO  [Thread-3] storage.BlockManager (Logging.scala:logInfo(58)) - BlockManager stopped
2016-05-24 14:58:04,820 INFO  [Thread-3] storage.BlockManagerMaster (Logging.scala:logInfo(58)) - BlockManagerMaster stopped
2016-05-24 14:58:04,821 INFO  [dispatcher-event-loop-3] scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint (Logging.scala:logInfo(58)) - OutputCommitCoordinator stopped!
2016-05-24 14:58:04,824 INFO  [Thread-3] spark.SparkContext (Logging.scala:logInfo(58)) - Successfully stopped SparkContext
2016-05-24 14:58:04,827 INFO  [sparkDriverActorSystem-akka.actor.default-dispatcher-2] remote.RemoteActorRefProvider$RemotingTerminator (Slf4jLogger.scala:apply$mcV$sp(74)) - Shutting down remote daemon.
2016-05-24 14:58:04,828 INFO  [sparkDriverActorSystem-akka.actor.default-dispatcher-2] remote.RemoteActorRefProvider$RemotingTerminator (Slf4jLogger.scala:apply$mcV$sp(74)) - Remote daemon shut down; proceeding with flushing remote transports.
2016-05-24 14:58:04,843 INFO  [sparkDriverActorSystem-akka.actor.default-dispatcher-2] remote.RemoteActorRefProvider$RemotingTerminator (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting shut down.

I have to press Ctrl-C to kill the spark-submit process. This is a really strange problem, and I don't know how to troubleshoot it. Please let me know if there are any logs I should look at or anything I should do differently.
Here is a pastebin link to the jstack output of the spark-submit process: http://pastebin.com/nfnt4xmt

Answer 1, from h7wcgrx3:

I had the same problem with a custom thread pool in my Spark job code. It turns out that spark-submit hangs when the code uses a custom non-daemon thread pool. You can look at ThreadUtils.newDaemonCachedThreadPool() to see how the Spark developers create their thread pools, or you can use those utils directly, but be careful, because they are package-private. A sketch of the same idea is shown below.
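For illustration, here is a minimal sketch using plain java.util.concurrent; the DaemonThreadPool object and its method name are mine, not Spark's public API. It shows the key point: marking the pool's threads as daemon so they do not keep the JVM, and therefore spark-submit, alive after the main thread and SparkContext have finished.

import java.util.concurrent.{Executors, ExecutorService, ThreadFactory}
import java.util.concurrent.atomic.AtomicInteger

// Sketch of a daemon cached thread pool, the same idea behind Spark's
// package-private ThreadUtils.newDaemonCachedThreadPool().
object DaemonThreadPool {
  def newDaemonCachedThreadPool(prefix: String): ExecutorService = {
    val counter = new AtomicInteger(0)
    val factory = new ThreadFactory {
      override def newThread(r: Runnable): Thread = {
        val t = new Thread(r, s"$prefix-${counter.incrementAndGet()}")
        t.setDaemon(true) // daemon threads do not block JVM shutdown
        t
      }
    }
    Executors.newCachedThreadPool(factory)
  }
}

If your job creates its own ExecutorService, either build it from a daemon ThreadFactory like this or call shutdown() on it before the driver's main method returns; otherwise the surviving non-daemon threads keep the JVM running even after the SparkContext has been stopped.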
