Livy session stuck in the starting state

ghhaqwfi · posted 2021-05-29 in Spark

I have been trying to create a new Spark session with a Livy 0.7 server running on Ubuntu 18.04. On the same machine I have a running Spark cluster with two workers, and I am able to create a regular Spark session there.
My problem is that after sending the following request to the Livy server, the session stays stuck in the starting state:

    import json
    import requests

    # POST /sessions creates a new interactive session; kind 'spark' = Scala
    host = 'http://localhost:8998'
    data = {'kind': 'spark'}
    headers = {'Content-Type': 'application/json'}
    r = requests.post(host + '/sessions', data=json.dumps(data), headers=headers)
    r.json()

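For reference, this is how I poll the session state afterwards (GET /sessions/{id} is Livy's standard REST endpoint; the loop itself is just an illustration, reusing host and headers from above):

    import time

    session_id = r.json()['id']
    url = host + '/sessions/' + str(session_id)
    while True:
        state = requests.get(url, headers=headers).json()['state']
        print(state)  # stays at 'starting', never reaches 'idle'
        if state != 'starting':
            break
        time.sleep(5)
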
I can see that the session is starting and that the Spark session gets created, both in the session log:

    20/06/03 13:52:31 INFO SparkEntries: Spark context finished initialization in 5197ms
    20/06/03 13:52:31 INFO SparkEntries: Created Spark session.
    20/06/03 13:52:46 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (xxx.xx.xx.xxx:1828) with ID 0
    20/06/03 13:52:47 INFO BlockManagerMasterEndpoint: Registering block manager xxx.xx.xx.xxx:1830 with 434.4 MB RAM, BlockManagerId(0, xxx.xx.xx.xxx, 1830, None)

and in the Spark master UI.

Then, after livy.rsc.server.idle-timeout is reached, the session log outputs:

    20/06/03 14:28:04 WARN RSCDriver: Shutting down RSC due to idle timeout (10m).
    20/06/03 14:28:04 INFO SparkUI: Stopped Spark web UI at http://172.17.52.209:4040
    20/06/03 14:28:04 INFO StandaloneSchedulerBackend: Shutting down all executors
    20/06/03 14:28:04 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
    20/06/03 14:28:04 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
    20/06/03 14:28:04 INFO MemoryStore: MemoryStore cleared
    20/06/03 14:28:04 INFO BlockManager: BlockManager stopped
    20/06/03 14:28:04 INFO BlockManagerMaster: BlockManagerMaster stopped
    20/06/03 14:28:04 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
    20/06/03 14:28:04 INFO SparkContext: Successfully stopped SparkContext
    20/06/03 14:28:04 INFO SparkContext: SparkContext already stopped.

and after that the session dies :(

I have already tried increasing the driver timeouts with no luck, and I have not found any known issue like this. I guess it has something to do with the Spark driver connecting back to the RSC, but I have no idea where to configure that.
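For reference, this is roughly what I experimented with in livy.conf (livy.rsc.server.idle-timeout is the key behind the warning above; the connect-timeout key and both values are unverified examples, so check them against your Livy release):

    # livy.conf -- example values, not a confirmed fix
    # raise the RSC idle timeout from its default of 10m
    livy.rsc.server.idle-timeout = 1h
    # time allowed for the driver to connect back to the RSC server
    livy.rsc.server.connect.timeout = 5m
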
Does anyone know the cause or a workaround?
That would be awesome!

j8yoct9x #1

We faced a similar problem in one of our environments. The only difference between the working and the non-working environment was the spark master setting in the livy.conf file.
I removed the livy.spark.master=yarn configuration from livy.conf and set the value from the code itself:

    // Assumed imports and logger; the original answer showed only the method.
    import com.google.common.base.Strings;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    private static final Logger LOGGER = LoggerFactory.getLogger("SparkContextFactory");

    // pass master as yarn
    public static JavaSparkContext getSparkContext(final String master, final String appName) {
        LOGGER.info("Creating spark context");
        SparkConf conf = new SparkConf().setAppName(appName);
        if (Strings.isNullOrEmpty(master)) {
            LOGGER.warn("No spark master found setting local!!");
            conf.setMaster("local");
        } else {
            conf.setMaster(master);
        }
        // keep the driver in client deploy mode (as in the original answer)
        conf.set("spark.submit.deployMode", "client");
        return new JavaSparkContext(conf);
    }

This worked for me.
Though it would be great if someone could point out exactly why this helps.
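In our code the caller simply passes the master explicitly, e.g. getSparkContext("yarn", "myApp"), so the driver no longer depends on the master value baked into livy.conf (the method is the one from the snippet above; the argument values are just an example).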
