Many dead executors

w1jd8yoj · posted 2021-05-27 in Spark

I am trying to run a Spark Scala application on an AWS EMR cluster by adding it as a step.
My cluster contains 4 m3.xlarge instances.
I launch the application with the following command:

spark-submit --deploy-mode cluster --class Main s3://mybucket/myjar_2.11-0.1.jar s3n://oc-mybucket/folder arg1 arg2

My application takes 3 parameters, the first being a folder.
Unfortunately, after launching the application, I see that only one executor (plus the master) is active and 3 executors are dead, so all the tasks run only on the first one. See the image.

I tried many things to activate these executors, but without any result ("spark.default.parallelism", "spark.executor.instances" and "spark.executor.cores"). What should I do so that all the executors are active and processing data?
Also, when looking at Ganglia, my CPU is always below 35%. Is there a way to get CPU usage above 75%?
Thank you
Update:
Here is the stdout of the dead executors:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/hadoop/filecache/14/__spark_libs__3671437061469038073.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/08/15 23:28:56 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 14765@ip-172-31-39-255
20/08/15 23:28:56 INFO SignalUtils: Registered signal handler for TERM
20/08/15 23:28:56 INFO SignalUtils: Registered signal handler for HUP
20/08/15 23:28:56 INFO SignalUtils: Registered signal handler for INT
20/08/15 23:28:57 INFO SecurityManager: Changing view acls to: yarn,hadoop
20/08/15 23:28:57 INFO SecurityManager: Changing modify acls to: yarn,hadoop
20/08/15 23:28:57 INFO SecurityManager: Changing view acls groups to: 
20/08/15 23:28:57 INFO SecurityManager: Changing modify acls groups to: 
20/08/15 23:28:57 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(yarn, hadoop); groups with view permissions: Set(); users  with modify permissions: Set(yarn, hadoop); groups with modify permissions: Set()
20/08/15 23:28:58 INFO TransportClientFactory: Successfully created connection to ip-172-31-36-83.eu-west-1.compute.internal/172.31.36.83:37115 after 186 ms (0 ms spent in bootstraps)
20/08/15 23:28:58 INFO SecurityManager: Changing view acls to: yarn,hadoop
20/08/15 23:28:58 INFO SecurityManager: Changing modify acls to: yarn,hadoop
20/08/15 23:28:58 INFO SecurityManager: Changing view acls groups to: 
20/08/15 23:28:58 INFO SecurityManager: Changing modify acls groups to: 
20/08/15 23:28:58 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(yarn, hadoop); groups with view permissions: Set(); users  with modify permissions: Set(yarn, hadoop); groups with modify permissions: Set()
20/08/15 23:28:58 INFO TransportClientFactory: Successfully created connection to ip-172-31-36-83.eu-west-1.compute.internal/172.31.36.83:37115 after 2 ms (0 ms spent in bootstraps)
20/08/15 23:28:58 INFO DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1597532473783_0002/blockmgr-d0d258ba-4345-45d1-9279-f6a97b63f81c
20/08/15 23:28:58 INFO DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1597532473783_0002/blockmgr-e7ae1e29-85fa-4df9-acf1-f9923f0664bc
20/08/15 23:28:58 INFO MemoryStore: MemoryStore started with capacity 2.6 GB
20/08/15 23:28:59 INFO CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler@ip-172-31-36-83.eu-west-1.compute.internal:37115
20/08/15 23:28:59 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
20/08/15 23:28:59 INFO Executor: Starting executor ID 3 on host ip-172-31-39-255.eu-west-1.compute.internal
20/08/15 23:28:59 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 40501.
20/08/15 23:28:59 INFO NettyBlockTransferService: Server created on ip-172-31-39-255.eu-west-1.compute.internal:40501
20/08/15 23:28:59 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/08/15 23:29:00 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(3, ip-172-31-39-255.eu-west-1.compute.internal, 40501, None)
20/08/15 23:29:00 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(3, ip-172-31-39-255.eu-west-1.compute.internal, 40501, None)
20/08/15 23:29:00 INFO BlockManager: external shuffle service port = 7337
20/08/15 23:29:00 INFO BlockManager: Registering executor with local external shuffle service.
20/08/15 23:29:00 INFO TransportClientFactory: Successfully created connection to ip-172-31-39-255.eu-west-1.compute.internal/172.31.39.255:7337 after 20 ms (0 ms spent in bootstraps)
20/08/15 23:29:00 INFO BlockManager: Initialized BlockManager: BlockManagerId(3, ip-172-31-39-255.eu-west-1.compute.internal, 40501, None)
20/08/15 23:29:03 INFO CoarseGrainedExecutorBackend: eagerFSInit: Eagerly initialized FileSystem at s3://does/not/exist in 3363 ms
20/08/15 23:30:02 ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
20/08/15 23:30:02 INFO DiskBlockManager: Shutdown hook called
20/08/15 23:30:02 INFO ShutdownHookManager: Shutdown hook called

Does this problem have to be memory-related?

bmvo0sr5 #1
This may be a bit late, but I found this AWS Big Data blog insightful for making sure most of my cluster was utilized and for achieving as much parallelism as possible:
https://aws.amazon.com/blogs/big-data/best-practices-for-successfully-managing-memory-for-apache-spark-applications-on-amazon-emr/
More specifically:

Number of executors per instance = (total number of virtual cores per instance - 1) / spark.executor.cores
Total executor memory = total RAM per instance / number of executors per instance

You can then control the number of parallel tasks in a stage with spark.default.parallelism or by repartitioning.
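As an illustration, the two formulas above can be applied to the m3.xlarge nodes from the question. This is only a sketch: the 4 vCPUs / 15 GiB figures for m3.xlarge and the choice of 3 executor cores are assumptions for the example, not verified recommendations.

```shell
# Rough executor sizing for one m3.xlarge node (assumed: 4 vCPUs, 15 GiB RAM).
# One vCPU is reserved for the OS and Hadoop/YARN daemons, per the AWS blog.
VCORES_PER_INSTANCE=4
RAM_PER_INSTANCE_GB=15
EXECUTOR_CORES=3   # assumed value for spark.executor.cores

EXECUTORS_PER_INSTANCE=$(( (VCORES_PER_INSTANCE - 1) / EXECUTOR_CORES ))
MEMORY_PER_EXECUTOR_GB=$(( RAM_PER_INSTANCE_GB / EXECUTORS_PER_INSTANCE ))

echo "executors per instance: $EXECUTORS_PER_INSTANCE"
echo "memory per executor:    ${MEMORY_PER_EXECUTOR_GB} GB"
```

With only 4 vCPUs per node the formula yields a single 3-core executor per node; in practice part of that memory should also be left for YARN overhead rather than given entirely to the executor.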

4zcjmb1e #2

By default, spark-submit does not use all executors; you can specify the number of executors with --num-executors, along with --executor-cores and --executor-memory.
For example, to increase the number of executors (the default is 2):

spark-submit --num-executors N   # where N is the desired number of executors, e.g. 5, 10, 50
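Applied to the command from the question, a fuller invocation might look like the template below. The flag values are assumptions for a cluster of 4 m3.xlarge nodes, not verified settings:

```shell
# Assumed sizing for 4 x m3.xlarge: 3 executors (one per core node),
# 3 cores each (leaving 1 vCPU per node for daemons), 10g memory each
# (leaving headroom for YARN overhead).
spark-submit \
  --deploy-mode cluster \
  --class Main \
  --num-executors 3 \
  --executor-cores 3 \
  --executor-memory 10g \
  s3://mybucket/myjar_2.11-0.1.jar s3n://oc-mybucket/folder arg1 arg2
```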

See the examples here in the documentation.
If that does not help with spark-submit, or gets overridden, you can set spark.executor.instances in the conf/spark-defaults.conf file (or similar), so you do not have to specify it explicitly on the command line.
For CPU utilization, you should look at executor-cores, and change it either in spark-submit or in the conf. Increasing the number of CPU cores will hopefully improve utilization.
Update:
As @lamanus pointed out and I double-checked, EMR versions later than 4.4 have spark.dynamicAllocation.enabled set to true by default, so I suggest you double-check the partitioning of your data: with dynamic allocation enabled, the number of executor instances depends on the number of partitions, which varies across the stages of the DAG execution. Also, with dynamic allocation, you can try spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.dynamicAllocation.maxExecutors to control the executors.
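If you keep dynamic allocation instead of a fixed --num-executors, the bounds could be sketched in spark-defaults.conf like this (the numbers are illustrative assumptions, not tuned values):

```
spark.dynamicAllocation.enabled            true
spark.dynamicAllocation.initialExecutors   3
spark.dynamicAllocation.minExecutors       3
spark.dynamicAllocation.maxExecutors       10
```

Note that dynamic allocation on YARN relies on the external shuffle service, which EMR enables by default, as the executor log above shows ("external shuffle service port = 7337").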
