Multiple iterations leading to out of memory

Asked by thtygnil on 2021-06-02 in Hadoop

I have a Spark job (running on Spark 1.3.1) that has to iterate over several keys (about 42 of them) and process each of them. This is the structure of the program:
Get a key from a map
Fetch the data matching that key from Hive (Hadoop/YARN) as a DataFrame
Process the data
Write the results to Hive
When I run this for a single key everything works fine. When I run it with all 42 keys, I get an out-of-memory exception around the 12th iteration. Is there a way to clear the memory between iterations? Any help is appreciated.
Here is the high-level code I am using.

import java.util.Properties;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.hive.HiveContext;

public abstract class SparkRunnableModel {

    public static SparkContext sc = null;
    public static JavaSparkContext jsc = null;
    public static HiveContext hiveContext = null;
    public static SQLContext sqlContext = null;

    protected SparkRunnableModel(String appName) {
        // Get the system properties to set up the model and build the
        // Spark context from the given application name
        SparkConf conf = new SparkConf().setAppName(appName);
        sc = new SparkContext(conf);
        jsc = new JavaSparkContext(sc);

        // Create a Hive context on top of the Spark context
        hiveContext = new HiveContext(sc);

        // SQL context
        sqlContext = new SQLContext(sc);
    }

    public abstract void processModel(Properties properties) throws Exception;
}

class ModelRunnerMain(model: String) extends SparkRunnableModel(model) with Serializable {

  override def processModel(properties: Properties) = {
    val dataLoader = DataLoader.getDataLoader(properties)

    // loads a keys data frame from a keys table in Hive and converts it to a list
    val keysList = dataLoader.loadSeriesData()

    for (key <- keysList) {
      runModelForKey(key, dataLoader)
    }
  }

  def runModelForKey(key: String, dataLoader: DataLoader) = {

    // loads a data frame from a table (~50 cols x 800 rows) using "select * from table where key='<key>'"
    val keyDataFrame = dataLoader.loadKeyData()

    // filter this data frame into two data frames
    ...

    // join them to transpose
    ...

    // convert the data frame into an RDD
    ...

    // run map on the RDD to add a bunch of new columns
    ...
  }

}

My DataFrames are well under a megabyte in size. But I do create several DataFrames through selects, joins and so on, and I assumed all of them would be garbage collected once an iteration completes.
Here is the configuration I am running with.
spark.eventLog.enabled: true
spark.broadcast.port: 7086
spark.driver.memory: 12g
spark.shuffle.spill: false
spark.serializer: org.apache.spark.serializer.KryoSerializer
spark.storage.memoryFraction: 0.7
spark.executor.cores: 8
spark.io.compression.codec: lzf
spark.shuffle.consolidateFiles: true
spark.shuffle.service.enabled: true
spark.master: yarn-client
spark.executor.instances: 8
spark.shuffle.service.port: 7337
spark.rdd.compress: true
spark.executor.memory: 48g
spark.executor.id:
spark.sql.shuffle.partitions: 700
spark.cores.max: 56
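
For reference, settings like these would normally be applied on the SparkConf before the context is created, or passed as --conf flags to spark-submit. Below is a minimal sketch using a few of the values listed above (the application name is a placeholder, not from the original post).

import org.apache.spark.SparkConf

// Illustrative only: a handful of the settings above applied programmatically.
val conf = new SparkConf()
  .setAppName("model-runner")   // placeholder app name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.executor.instances", "8")
  .set("spark.executor.memory", "48g")
  .set("spark.sql.shuffle.partitions", "700")
  .set("spark.shuffle.consolidateFiles", "true")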
Here is the exception I am getting.

Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space
at org.apache.spark.util.io.ByteArrayChunkOutputStream.allocateNewChunkIfNeeded(ByteArrayChunkOutputStream.scala:66)
at org.apache.spark.util.io.ByteArrayChunkOutputStream.write(ByteArrayChunkOutputStream.scala:55)
at com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:264)
at com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:266)
at com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:124)
at com.esotericsoftware.kryo.io.Output.flush(Output.java:155)
at com.esotericsoftware.kryo.io.Output.require(Output.java:135)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
at org.apache.spark.serializer.KryoSerializationStream.writeObject(KryoSerializer.scala:124)
at org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:202)
at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:101)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:84)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskCompletion$15$$anonfun$apply$1.apply$mcVI$sp(DAGScheduler.scala:1042)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskCompletion$15$$anonfun$apply$1.apply(DAGScheduler.scala:1039)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskCompletion$15$$anonfun$apply$1.apply(DAGScheduler.scala:1039)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskCompletion$15.apply(DAGScheduler.scala:1039)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskCompletion$15.apply(DAGScheduler.scala:1038)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1038)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1390)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)

t0ybt7op (answer 1)

Using checkpoint() or localCheckpoint() can cut the Spark lineage and improve the performance of the application across iterations.
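
To illustrate the suggestion, here is a minimal sketch of what checkpointing might look like inside the per-key loop from the question. It assumes a checkpoint directory on HDFS (the path is a placeholder); RDD.checkpoint() is available in Spark 1.3.1, while localCheckpoint() and DataFrame.checkpoint() only appear in later releases, so the sketch uses the plain RDD variant.

// Sketch only: truncate the lineage accumulated in each iteration.
sc.setCheckpointDir("hdfs:///tmp/model-checkpoints")   // placeholder directory

for (key <- keysList) {
  val keyDataFrame = dataLoader.loadKeyData()

  // ... filters, joins and added columns as in runModelForKey ...

  val processedRdd = keyDataFrame.rdd   // the RDD[Row] behind the DataFrame
  processedRdd.checkpoint()             // mark it to be saved and its lineage dropped
  processedRdd.count()                  // an action is needed to materialize the checkpoint

  // write the results to Hive, then let the per-iteration references go out of scope
}

The reasoning behind the suggestion is that a long lineage inflates the stage and task data the driver has to serialize and broadcast (the TorrentBroadcast calls in the stack trace above); checkpointing drops that history so later jobs start from the materialized data instead.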
