Usage and code examples for org.apache.spark.util.Utils.tempFileWith()


This article collects Java code examples for the org.apache.spark.util.Utils.tempFileWith() method and shows how it is used in practice. The examples were extracted from selected open-source projects hosted on GitHub/Stack Overflow/Maven and should serve as a useful reference. Details of Utils.tempFileWith() follow:
Package path: org.apache.spark.util.Utils
Class name: Utils
Method name: tempFileWith

Utils.tempFileWith overview

The upstream documentation for this method is empty. Reading Spark's source, tempFileWith(path) returns a new, not-yet-created File that is a sibling of path: the same absolute path with a random UUID suffix appended. Keeping the temp file in the same directory as the final output keeps it on the same filesystem, which lets the later commit step be a cheap rename (atomic on POSIX filesystems).
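
As a quick illustration, the sketch below calls the method directly from Java. Note that org.apache.spark.util.Utils is private[spark] on the Scala side, so this relies on the generated static forwarder and is internal, unstable API; the file path here is hypothetical.

import java.io.File;

import org.apache.spark.util.Utils;

public class TempFileWithProbe {
 public static void main(String[] args) {
  // Hypothetical shuffle output path; tempFileWith itself never touches the disk.
  File output = new File("/tmp/shuffle_0_0_0.data");
  File tmp = Utils.tempFileWith(output);

  // Prints something like /tmp/shuffle_0_0_0.data.<random-uuid>.
  System.out.println(tmp.getAbsolutePath());
  System.out.println("created on disk? " + tmp.exists()); // false until the caller writes it
 }
}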

Code examples

Code example source: origin: org.apache.spark/spark-core_2.10 (the same code appears verbatim in spark-core_2.11 and spark-core)

@VisibleForTesting
void closeAndWriteOutput() throws IOException {
 assert(sorter != null);
 updatePeakMemoryUsed();
 serBuffer = null;
 serOutputStream = null;
 final SpillInfo[] spills = sorter.closeAndGetSpills();
 sorter = null;
 final long[] partitionLengths;
 final File output = shuffleBlockResolver.getDataFile(shuffleId, mapId);
 // Merge into a temp file next to the final output, so the commit below is a
 // same-filesystem rename rather than a copy.
 final File tmp = Utils.tempFileWith(output);
 try {
  try {
   partitionLengths = mergeSpills(spills, tmp);
  } finally {
   // Whether or not the merge succeeded, the spill files are no longer needed.
   for (SpillInfo spill : spills) {
    if (spill.file.exists() && !spill.file.delete()) {
     logger.error("Error while deleting spill file {}", spill.file.getPath());
    }
   }
  }
  // Publish the merged data and its index file as one commit.
  shuffleBlockResolver.writeIndexFileAndCommit(shuffleId, mapId, partitionLengths, tmp);
 } finally {
  // On a successful commit, tmp has normally been renamed into place already;
  // delete it only if it is still lying around (i.e. on the failure path).
  if (tmp.exists() && !tmp.delete()) {
   logger.error("Error while deleting temp file {}", tmp.getAbsolutePath());
  }
 }
 mapStatus = MapStatus$.MODULE$.apply(blockManager.shuffleServerId(), partitionLengths);
}
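
This pattern is the point of tempFileWith: write everything to a sibling temp file, commit it with a rename (here hidden inside writeIndexFileAndCommit), and make sure the temp file is deleted if the commit never happened. Below is a self-contained sketch of the same idiom without Spark, using a hypothetical tempFileWith stand-in that mirrors the sibling-plus-UUID naming.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import java.util.UUID;

public class AtomicPublish {
 // Hypothetical stand-in for Utils.tempFileWith: a sibling path with a random UUID suffix.
 static File tempFileWith(File path) {
  return new File(path.getAbsolutePath() + "." + UUID.randomUUID());
 }

 public static void main(String[] args) throws IOException {
  File output = new File("build/part-00000.data"); // hypothetical final output
  output.getParentFile().mkdirs();

  File tmp = tempFileWith(output);
  try {
   Files.write(tmp.toPath(), "partition bytes".getBytes());
   // Same directory implies same filesystem, so the rename can be atomic:
   // readers see either no file or the complete file, never a torn write.
   Files.move(tmp.toPath(), output.toPath(), StandardCopyOption.ATOMIC_MOVE);
  } finally {
   // On success the move consumed tmp; this cleanup only fires on the failure path.
   if (tmp.exists() && !tmp.delete()) {
    System.err.println("Could not delete temp file " + tmp.getAbsolutePath());
   }
  }
 }
}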

Code example source: origin: org.apache.spark/spark-core_2.10 (likewise identical in spark-core_2.11 and spark-core)

File tmp = Utils.tempFileWith(output);
try {
 partitionLengths = writePartitionedFile(tmp);
 shuffleBlockResolver.writeIndexFileAndCommit(shuffleId, mapId, partitionLengths, tmp);
} finally {
 if (tmp.exists() && !tmp.delete()) {
  logger.error("Error while deleting temp file {}", tmp.getAbsolutePath());
 }
}
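
This fragment appears to come from BypassMergeSortShuffleWriter.write() in the same three artifacts; it follows the same write-to-temp, commit, delete-on-failure idiom as the UnsafeShuffleWriter example above.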
