Spark on Mesos: tasks scheduled on a single node

Asked by l7mqbcuq on 2021-06-26 in Mesos

Suppose I am using a pyspark shell against a Mesos cluster and I only want to take up 12 CPU cores, so I launch it like this:

uu@r4:~$ pyspark --master mesos://e3.test:5050 --total-executor-cores 12

Then the usual startup output follows:

Python 2.7.13 |Anaconda 2.5.0 (64-bit)| (default, Dec 20 2016, 23:09:15) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/01/31 08:16:31 INFO SparkContext: Running Spark version 1.6.2
17/01/31 08:16:31 INFO SecurityManager: Changing view acls to: uu
17/01/31 08:16:31 INFO SecurityManager: Changing modify acls to: uu
17/01/31 08:16:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(uu); users with modify permissions: Set(uu)
17/01/31 08:16:31 INFO Utils: Successfully started service 'sparkDriver' on port 53336.
17/01/31 08:16:31 INFO Slf4jLogger: Slf4jLogger started
17/01/31 08:16:32 INFO Remoting: Starting remoting
17/01/31 08:16:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@r4.test:59860]
17/01/31 08:16:32 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 59860.
17/01/31 08:16:32 INFO SparkEnv: Registering MapOutputTracker
17/01/31 08:16:32 INFO SparkEnv: Registering BlockManagerMaster
17/01/31 08:16:32 INFO DiskBlockManager: Created local directory at /var/tmp/spark/blockmgr-6b16ff11-b0bc-4a71-82f5-c69a363c8c1a
17/01/31 08:16:32 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
17/01/31 08:16:32 INFO SparkEnv: Registering OutputCommitCoordinator
17/01/31 08:16:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/01/31 08:16:32 INFO SparkUI: Started SparkUI at http://r4.test:4040
I0131 08:16:32.582038 24965 sched.cpp:226] Version: 1.1.0
I0131 08:16:32.586931 24958 sched.cpp:330] New master detected at master@192.168.0.15:5050
I0131 08:16:32.587162 24958 sched.cpp:341] No credentials provided. Attempting to register without authentication
I0131 08:16:32.596922 24956 sched.cpp:743] Framework registered with 075ef8d0-de21-472d-8198-80805006b93d-0051
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: Registered as framework ID 075ef8d0-de21-472d-8198-80805006b93d-0051
17/01/31 08:16:32 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51135.
17/01/31 08:16:32 INFO NettyBlockTransferService: Server created on 51135
17/01/31 08:16:32 INFO BlockManagerMaster: Trying to register BlockManager
17/01/31 08:16:32 INFO BlockManagerMasterEndpoint: Registering block manager r4.test:51135 with 511.1 MB RAM, BlockManagerId(driver, r4.test, 51135)
17/01/31 08:16:32 INFO BlockManagerMaster: Registered BlockManager
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: Mesos task 0 is now TASK_RUNNING
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Python version 2.7.13 (default, Dec 20 2016 23:09:15)
SparkContext available as sc, HiveContext available as sqlContext.

But in the end only a single executor gets registered:

>>> 17/01/31 08:16:35 INFO CoarseMesosSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (r5.test:42965) with ID 023af0f2-fc60-4d9d-a3db-301ab34764c9-S3
17/01/31 08:16:35 INFO BlockManagerMasterEndpoint: Registering block manager r5.test:33239 with 511.1 MB RAM, BlockManagerId(023af0f2-fc60-4d9d-a3db-301ab34764c9-S3, r5.test, 33239)

This means the whole Spark application is going to run on a single node, which is not the scheduling I want (mainly for data-locality reasons). What I expected is something closer to Spark's standalone setup: --total-executor-cores spread roughly evenly across the cluster (see the standalone sketch below).
Is there any way to achieve that? The other options that mention executors/cores don't seem to have any effect (they only apply to standalone and YARN setups).
Why does Spark on Mesos use this placement strategy of filling up nodes one by one instead of spreading the work?
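(For comparison, a minimal sketch of the standalone-mode behaviour referred to above; the master URL spark://e3.test:7077 is assumed, not taken from the question. The standalone master spreads the requested cores across its workers by default because spark.deploy.spreadOut defaults to true:)

pyspark --master spark://e3.test:7077 --total-executor-cores 12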
UPD: the conf entries mentioned in the documentation don't help either:

pyspark --master mesos://e3.test:5050 --conf spark.executor.cores=2 --conf spark.cores.max=12

ldfqzlk8 #1

version 1.6.2

That is the problem. In newer Spark versions there is an option, spark.executor.cores, that limits the number of cores per executor (spark.cores.max only caps the application's total), so the requested cores get spread over several executors instead of one big executor per node.
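A hedged sketch of such a launch on Spark 2.x or later, where the Mesos coarse-grained backend honours spark.executor.cores; the executor memory value is only illustrative. With 12 cores in total and 2 cores per executor, up to 6 executors can be started and spread over the cluster's nodes:

# assumes Spark 2.x+; the 2g executor memory is illustrative, not from the question
pyspark --master mesos://e3.test:5050 \
  --conf spark.cores.max=12 \
  --conf spark.executor.cores=2 \
  --conf spark.executor.memory=2g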
