Usage of the org.apache.flink.util.IOUtils.copyBytes() method, with code examples


This article collects a number of Java code examples for the org.apache.flink.util.IOUtils.copyBytes() method and shows how it is used in practice. The examples come from selected open-source projects hosted on platforms such as GitHub, Stack Overflow and Maven, and should serve as a useful reference. Details of the IOUtils.copyBytes() method:
Package path: org.apache.flink.util.IOUtils
Class name: IOUtils
Method name: copyBytes

About IOUtils.copyBytes

Copies from one stream to another, and closes both the input and the output stream at the end.
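A minimal usage sketch of the two-argument overload (the file names are made up for illustration). Because this overload closes both streams itself, no explicit close is needed:

import org.apache.flink.util.IOUtils;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class CopyBytesExample {

  public static void main(String[] args) throws IOException {
    // copies with the default block size and closes both streams when done
    IOUtils.copyBytes(
      new FileInputStream("source.txt"),
      new FileOutputStream("target.txt"));
  }
}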

Code examples

Code example source: apache/flink (the same snippet also appears in org.apache.flink/flink-core and com.alibaba.blink/flink-core)

/**
 * Copies from one stream to another. <strong>closes the input and output
 * streams at the end</strong>.
 *
 * @param in
 *        InputStream to read from
 * @param out
 *        OutputStream to write to
 * @throws IOException
 *         thrown if an I/O error occurs while copying
 */
public static void copyBytes(final InputStream in, final OutputStream out) throws IOException {
  copyBytes(in, out, BLOCKSIZE, true);
}

Code example source: apache/flink (the same snippet also appears in org.apache.flink/flink-core and com.alibaba.blink/flink-core)

/**
 * Copies from one stream to another.
 *
 * @param in
 *        InputStream to read from
 * @param out
 *        OutputStream to write to
 * @param close
 *        whether or not close the InputStream and OutputStream at the
 *        end. The streams are closed in the finally clause.
 * @throws IOException
 *         thrown if an I/O error occurs while copying
 */
public static void copyBytes(final InputStream in, final OutputStream out, final boolean close) throws IOException {
  copyBytes(in, out, BLOCKSIZE, close);
}
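A hedged sketch of when close = false is useful (the file names are hypothetical): several inputs are appended to one output stream, so the output must stay open between calls and is closed once at the end by try-with-resources:

import org.apache.flink.util.IOUtils;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ConcatFilesExample {

  public static void main(String[] args) throws IOException {
    try (OutputStream out = new FileOutputStream("merged.bin")) {
      for (String part : new String[] {"part-0.bin", "part-1.bin"}) {
        try (InputStream in = new FileInputStream(part)) {
          // close = false: 'out' must survive this call so the next part
          // can be appended; 'in' is closed by the enclosing try block
          IOUtils.copyBytes(in, out, false);
        }
      }
    }
  }
}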

Code example source: apache/flink (the same snippet also appears in org.apache.flink/flink-core)

private static void internalCopyFile(Path sourcePath, Path targetPath, boolean executable, FileSystem sFS, FileSystem tFS) throws IOException {
  try (FSDataOutputStream lfsOutput = tFS.create(targetPath, FileSystem.WriteMode.NO_OVERWRITE); FSDataInputStream fsInput = sFS.open(sourcePath)) {
    IOUtils.copyBytes(fsInput, lfsOutput);
    //noinspection ResultOfMethodCallIgnored
    new File(targetPath.toString()).setExecutable(executable);
  }
}
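The same copy pattern as a standalone, hedged sketch against Flink's FileSystem abstraction (the class name and paths are made up):

import org.apache.flink.core.fs.FSDataInputStream;
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;
import org.apache.flink.util.IOUtils;

import java.io.IOException;

public class FsCopyExample {

  public static void copy(Path source, Path target) throws IOException {
    FileSystem sourceFs = source.getFileSystem();
    FileSystem targetFs = target.getFileSystem();
    try (FSDataInputStream in = sourceFs.open(source);
        FSDataOutputStream out = targetFs.create(target, FileSystem.WriteMode.NO_OVERWRITE)) {
      // close = false: both streams are already owned by try-with-resources
      IOUtils.copyBytes(in, out, false);
    }
  }

  public static void main(String[] args) throws IOException {
    copy(new Path("file:///tmp/source.txt"), new Path("file:///tmp/target.txt"));
  }
}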

Code example source: apache/flink

private static void unzipPythonLibrary(Path targetDir) throws IOException {
  FileSystem targetFs = targetDir.getFileSystem();
  ClassLoader classLoader = PythonPlanBinder.class.getClassLoader();
  try (ZipInputStream zis = new ZipInputStream(classLoader.getResourceAsStream("python-source.zip"))) {
    ZipEntry entry = zis.getNextEntry();
    while (entry != null) {
      String fileName = entry.getName();
      Path newFile = new Path(targetDir, fileName);
      if (entry.isDirectory()) {
        targetFs.mkdirs(newFile);
      } else {
        try (FSDataOutputStream fsDataOutputStream = targetFs.create(newFile, FileSystem.WriteMode.NO_OVERWRITE)) {
          LOG.debug("Unzipping to {}.", newFile);
          // close = false keeps the ZipInputStream open for the remaining entries;
          // the output stream is closed (and flushed) by try-with-resources
          IOUtils.copyBytes(zis, fsDataOutputStream, false);
        } catch (Exception e) {
          zis.closeEntry();
          throw new IOException("Failed to unzip flink python library.", e);
        }
      }
      zis.closeEntry();
      entry = zis.getNextEntry();
    }
    zis.closeEntry();
  }
}

Code example source: apache/flink (the same snippet also appears in org.apache.flink/flink-core)

private static void addToZip(Path fileOrDirectory, FileSystem fs, Path rootDir, ZipOutputStream out) throws IOException {
  String relativePath = fileOrDirectory.getPath().replace(rootDir.getPath() + '/', "");
  if (fs.getFileStatus(fileOrDirectory).isDir()) {
    out.putNextEntry(new ZipEntry(relativePath + '/'));
    for (FileStatus containedFile : fs.listStatus(fileOrDirectory)) {
      addToZip(containedFile.getPath(), fs, rootDir, out);
    }
  } else {
    ZipEntry entry = new ZipEntry(relativePath);
    out.putNextEntry(entry);
    try (FSDataInputStream in = fs.open(fileOrDirectory)) {
      IOUtils.copyBytes(in, out, false);
    }
    out.closeEntry();
  }
}
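The key point in this example is close = false: if copyBytes closed the output stream, it would close the whole ZipOutputStream and no further entries could be written. A minimal standalone sketch of the same pattern with java.util.zip (the file names are hypothetical):

import org.apache.flink.util.IOUtils;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipFilesExample {

  public static void main(String[] args) throws IOException {
    try (ZipOutputStream out = new ZipOutputStream(new FileOutputStream("archive.zip"))) {
      for (String name : new String[] {"a.txt", "b.txt"}) {
        out.putNextEntry(new ZipEntry(name));
        try (InputStream in = new FileInputStream(name)) {
          // close = false: closing 'out' here would terminate the whole archive
          IOUtils.copyBytes(in, out, false);
        }
        out.closeEntry();
      }
    }
  }
}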

Code example source: apache/flink (the same snippet also appears in org.apache.flink/flink-core)

public static Path expandDirectory(Path file, Path targetDirectory) throws IOException {
  FileSystem sourceFs = file.getFileSystem();
  FileSystem targetFs = targetDirectory.getFileSystem();
  Path rootDir = null;
  try (ZipInputStream zis = new ZipInputStream(sourceFs.open(file))) {
    ZipEntry entry;
    while ((entry = zis.getNextEntry()) != null) {
      Path relativePath = new Path(entry.getName());
      if (rootDir == null) {
        // the first entry contains the name of the original directory that was zipped
        rootDir = relativePath;
      }
      Path newFile = new Path(targetDirectory, relativePath);
      if (entry.isDirectory()) {
        targetFs.mkdirs(newFile);
      } else {
        try (FSDataOutputStream fileStream = targetFs.create(newFile, FileSystem.WriteMode.NO_OVERWRITE)) {
          // do not close the streams here as it prevents access to further zip entries
          IOUtils.copyBytes(zis, fileStream, false);
        }
      }
      zis.closeEntry();
    }
  }
  return new Path(targetDirectory, rootDir);
}
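A hedged usage sketch of the helper above (the paths are hypothetical; in the Flink sources this method is part of org.apache.flink.util.FileUtils):

import org.apache.flink.core.fs.Path;
import org.apache.flink.util.FileUtils;

import java.io.IOException;

public class ExpandDirectoryExample {

  public static void main(String[] args) throws IOException {
    Path zipFile = new Path("file:///tmp/artifacts/job.zip");
    Path targetDir = new Path("file:///tmp/artifacts/expanded");
    // returns the path of the top-level directory that was originally zipped
    Path expanded = FileUtils.expandDirectory(zipFile, targetDir);
    System.out.println("Expanded to " + expanded);
  }
}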

Code example source: org.apache.flink/flink-runtime_2.10 (the same snippet also appears in flink-runtime, flink-runtime_2.11 and com.alibaba.blink/flink-runtime)

/**
 * Reads the given archive file and returns a {@link Collection} of contained {@link ArchivedJson}.
 *
 * @param file archive to extract
 * @return collection of archived jsons
 * @throws IOException if the file can't be opened, read or doesn't contain valid json
 */
public static Collection<ArchivedJson> getArchivedJsons(Path file) throws IOException {
  try (FSDataInputStream input = file.getFileSystem().open(file);
    ByteArrayOutputStream output = new ByteArrayOutputStream()) {
    IOUtils.copyBytes(input, output);

    JsonNode archive = mapper.readTree(output.toByteArray());

    Collection<ArchivedJson> archives = new ArrayList<>();
    for (JsonNode archivePart : archive.get(ARCHIVE)) {
      String path = archivePart.get(PATH).asText();
      String json = archivePart.get(JSON).asText();
      archives.add(new ArchivedJson(path, json));
    }
    return archives;
  }
}
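Copying into a ByteArrayOutputStream, as above, is a convenient way to slurp a whole stream into memory. A minimal hedged sketch of that pattern (the class and method names are made up):

import org.apache.flink.util.IOUtils;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public final class StreamBytes {

  /** Reads the given stream fully into a byte array; the stream is closed by copyBytes. */
  public static byte[] readAllBytes(InputStream in) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // the two-argument overload closes both 'in' and 'out' when it returns
    IOUtils.copyBytes(in, out);
    return out.toByteArray();
  }
}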

Code example source: org.apache.flink/flink-runtime_2.10

private void get(String fromBlobPath, File toFile) throws IOException {
  checkNotNull(fromBlobPath, "Blob path");
  checkNotNull(toFile, "File");
  if (!toFile.exists() && !toFile.createNewFile()) {
    throw new IOException("Failed to create target file to copy to");
  }
  final Path fromPath = new Path(fromBlobPath);
  boolean success = false;
  try (InputStream is = fileSystem.open(fromPath);
    FileOutputStream fos = new FileOutputStream(toFile)) {
    LOG.debug("Copying from {} to {}.", fromBlobPath, toFile);
    IOUtils.copyBytes(is, fos); // closes the streams
    success = true;
  } finally {
    // if the copy fails, we need to remove the target file because
    // outside code relies on a correct file as long as it exists
    if (!success) {
      try {
        toFile.delete();
      } catch (Throwable ignored) {}
    }
  }
}

Code example source: org.apache.flink/flink-runtime (fragment showing the overload with an explicit buffer size)

  IOUtils.copyBytes(inputStream, os, BUFFER_SIZE, false);
  return os.finish();
} catch (Throwable t) {

Code example source: com.alibaba.blink/flink-runtime

private static void compressDirectoryToZipfile(FileSystem fs, FileStatus rootDir, FileStatus sourceDir, ZipOutputStream out) throws IOException {
  for (FileStatus file : fs.listStatus(sourceDir.getPath())) {
    LOG.info("Zipping file: {}", file);
    if (file.isDir()) {
      compressDirectoryToZipfile(fs, rootDir, file, out);
    } else {
      String entryName = file.getPath().getPath().replace(rootDir.getPath().getPath(), "");
      LOG.info("Zipping entry: {}, file: {}, rootDir: {}", entryName, file, rootDir);
      ZipEntry entry = new ZipEntry(entryName);
      out.putNextEntry(entry);
      try (FSDataInputStream in = fs.open(file.getPath())) {
        IOUtils.copyBytes(in, out, false);
      }
      out.closeEntry();
    }
  }
}
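A hedged sketch of how such a recursive zipper might be driven, assuming the wrapper sits in the same class as compressDirectoryToZipfile (the wrapper name is made up, and the exact entry names depend on which directory is passed as rootDir):

static void zipDirectory(Path sourceDir, Path targetZip) throws IOException {
  FileSystem fs = sourceDir.getFileSystem();
  FileStatus root = fs.getFileStatus(sourceDir);
  try (ZipOutputStream out =
      new ZipOutputStream(fs.create(targetZip, FileSystem.WriteMode.NO_OVERWRITE))) {
    // walks sourceDir recursively and writes one ZipEntry per regular file;
    // closing 'out' finishes the archive
    compressDirectoryToZipfile(fs, root, root, out);
  }
}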
