Usage of the org.apache.beam.sdk.io.FileSystems.create() method, with code examples


This article collects Java code examples of the org.apache.beam.sdk.io.FileSystems.create() method and shows how FileSystems.create() is used in practice. The examples are taken from selected open-source projects found on platforms such as GitHub, Stack Overflow, and Maven, and should serve as a useful reference. Details of the FileSystems.create() method:
Package path: org.apache.beam.sdk.io.FileSystems
Class name: FileSystems
Method name: create

About FileSystems.create

Returns a write channel for the given ResourceId.

The resource is not expanded; it is used verbatim.
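
As a quick illustration of the call pattern, here is a minimal sketch that opens a write channel with FileSystems.create() and writes a few bytes through it. The class name and the output path are placeholders invented for this example; MimeTypes.TEXT is Beam's constant for "text/plain", and the default local filesystem is assumed to be available.

import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;
import java.nio.charset.StandardCharsets;
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.io.fs.ResourceId;
import org.apache.beam.sdk.util.MimeTypes;

public class FileSystemsCreateDemo {
 public static void main(String[] args) throws Exception {
  // The path is illustrative only; any scheme registered with FileSystems works the same way.
  ResourceId out = FileSystems.matchNewResource("/tmp/filesystems-create-demo.txt", false);
  try (WritableByteChannel channel = FileSystems.create(out, MimeTypes.TEXT)) {
   channel.write(ByteBuffer.wrap("hello beam".getBytes(StandardCharsets.UTF_8)));
  }
 }
}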

Code examples

Code example source: GoogleCloudPlatform/DataflowTemplates

/** utility method to copy binary (jar file) data from source to dest. */
 private static void copy(ResourceId source, ResourceId dest) throws IOException {
  try (ReadableByteChannel rbc = FileSystems.open(source)) {
   try (WritableByteChannel wbc = FileSystems.create(dest, MimeTypes.BINARY)) {
    ByteStreams.copy(rbc, wbc);
   }
  }
 }

Code example source: spotify/dbeam

public static void writeToFile(String filename, ByteBuffer contents) throws IOException {
 ResourceId resourceId = FileSystems.matchNewResource(filename, false);
 try (WritableByteChannel out = FileSystems.create(resourceId, MimeTypes.TEXT)) {
  out.write(contents);
 }
}

Code example source: com.spotify/scio-core

private static void copyToRemote(Path src, URI dst) throws IOException {
 ResourceId dstId = FileSystems.matchNewResource(dst.toString(), false);
 WritableByteChannel dstCh = FileSystems.create(dstId, MimeTypes.BINARY);
 FileChannel srcCh = FileChannel.open(src, StandardOpenOption.READ);
 long srcSize = srcCh.size();
 long copied = 0;
 do {
  // transferTo may copy fewer bytes than requested, so loop until the whole file is copied.
  copied += srcCh.transferTo(copied, srcSize - copied, dstCh);
 } while (copied < srcSize);
 dstCh.close();
 srcCh.close();
 Preconditions.checkState(copied == srcSize);
}

Code example source: GoogleCloudPlatform/DataflowTemplates

private void saveSchema(String content, String schemaPath) {
  LOG.info("Schema: " + content);
  try {
   WritableByteChannel chan =
     FileSystems.create(FileSystems.matchNewResource(schemaPath, false), "text/plain");
   try (OutputStream stream = Channels.newOutputStream(chan)) {
    stream.write(content.getBytes());
   }
  } catch (IOException e) {
   throw new RuntimeException("Failed to write schema", e);
  }
 }

Code example source: org.apache.beam/beam-examples-java

public static String copyFile(ResourceId sourceFile, ResourceId destinationFile)
  throws IOException {
 try (WritableByteChannel writeChannel = FileSystems.create(destinationFile, "text/plain")) {
  try (ReadableByteChannel readChannel = FileSystems.open(sourceFile)) {
   final ByteBuffer buffer = ByteBuffer.allocateDirect(16 * 1024);
   while (readChannel.read(buffer) != -1) {
    // Flip to drain what was just read, write it, then compact so any bytes
    // the write channel did not accept are kept for the next pass.
    buffer.flip();
    writeChannel.write(buffer);
    buffer.compact();
   }
   // Drain whatever is still buffered after the final read.
   buffer.flip();
   while (buffer.hasRemaining()) {
    writeChannel.write(buffer);
   }
  }
 }
 return destinationFile.toString();
}

Code example source: org.apache.beam/beam-sdks-java-io-google-cloud-platform

TableRowWriter(String basename) throws Exception {
 String uId = UUID.randomUUID().toString();
 resourceId = FileSystems.matchNewResource(basename + uId, false);
 LOG.info("Opening TableRowWriter to {}.", resourceId);
 channel = FileSystems.create(resourceId, MimeTypes.TEXT);
 out = new CountingOutputStream(Channels.newOutputStream(channel));
}

Code example source: org.apache.beam/beam-sdks-java-core

private static void writeTextToFileSideEffect(String text, String filename) throws IOException {
  ResourceId rid = FileSystems.matchNewResource(filename, false);
  WritableByteChannel chan = FileSystems.create(rid, "text/plain");
  chan.write(ByteBuffer.wrap(text.getBytes(StandardCharsets.UTF_8)));
  chan.close();
 }

Code example source: org.apache.beam/beam-sdks-java-core

/**
 * Returns a write channel for the given {@link ResourceId}.
 *
 * <p>The resource is not expanded; it is used verbatim.
 *
 * @param resourceId the reference of the file-like resource to create
 * @param mimeType the mime type of the file-like resource to create
 */
public static WritableByteChannel create(ResourceId resourceId, String mimeType)
  throws IOException {
 return create(resourceId, StandardCreateOptions.builder().setMimeType(mimeType).build());
}
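
As the source above shows, the String overload simply wraps the mime type in StandardCreateOptions and delegates to create(ResourceId, CreateOptions). The sketch below calls the CreateOptions overload directly; the class name and the output path are placeholders chosen for this example.

import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.io.fs.CreateOptions.StandardCreateOptions;
import org.apache.beam.sdk.io.fs.ResourceId;
import org.apache.beam.sdk.util.MimeTypes;

public class CreateOptionsDemo {
 public static void main(String[] args) throws Exception {
  // Build the same options object the convenience overload builds internally.
  StandardCreateOptions options =
    StandardCreateOptions.builder().setMimeType(MimeTypes.BINARY).build();
  // The path is illustrative only.
  ResourceId dest = FileSystems.matchNewResource("/tmp/create-options-demo.bin", false);
  try (WritableByteChannel channel = FileSystems.create(dest, options)) {
   channel.write(ByteBuffer.wrap(new byte[] {0x42, 0x45, 0x41, 0x4d}));
  }
 }
}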

Code example source: org.apache.beam/beam-sdks-java-io-google-cloud-platform

private void writeRowsHelper(
   List<TableRow> rows, Schema avroSchema, String destinationPattern, int shard) {
  String filename = destinationPattern.replace("*", String.format("%012d", shard));
  try (WritableByteChannel channel =
      FileSystems.create(
        FileSystems.matchNewResource(filename, false /* isDirectory */), MimeTypes.BINARY);
    DataFileWriter<GenericRecord> tableRowWriter =
      new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(avroSchema))
        .create(avroSchema, Channels.newOutputStream(channel))) {
   for (Map<String, Object> record : rows) {
    GenericRecordBuilder genericRecordBuilder = new GenericRecordBuilder(avroSchema);
    for (Map.Entry<String, Object> field : record.entrySet()) {
     genericRecordBuilder.set(field.getKey(), field.getValue());
    }
    tableRowWriter.append(genericRecordBuilder.build());
   }
  } catch (IOException e) {
   throw new IllegalStateException(
     String.format("Could not create destination for extract job %s", filename), e);
  }
 }

Code example source: GoogleCloudPlatform/DataflowTemplates

try (WritableByteChannel writerChannel = FileSystems.create(tempFile, MimeTypes.TEXT)) {
 ByteStreams.copy(readerChannel, writerChannel);

Code example source: org.apache.beam/beam-sdks-java-core

WritableByteChannel tempChannel = FileSystems.create(outputFile, channelMimeType);
try {
 channel = factory.create(tempChannel);

Code example source: GoogleCloudPlatform/DataflowTemplates

@ProcessElement
 public void processElement(ProcessContext context) {
  ResourceId inputFile = context.element().resourceId();
  Compression compression = compressionValue.get();
  // Add the compression extension to the output filename. Example: demo.txt -> demo.txt.gz
  String outputFilename = inputFile.getFilename() + compression.getSuggestedSuffix();
  // Resolve the necessary resources to perform the transfer
  ResourceId outputDir = FileSystems.matchNewResource(destinationLocation.get(), true);
  ResourceId outputFile =
    outputDir.resolve(outputFilename, StandardResolveOptions.RESOLVE_FILE);
  ResourceId tempFile =
    outputDir.resolve("temp-" + outputFilename, StandardResolveOptions.RESOLVE_FILE);
  // Perform the copy of the compressed channel to the destination.
  try (ReadableByteChannel readerChannel = FileSystems.open(inputFile)) {
   try (WritableByteChannel writerChannel =
     compression.writeCompressed(FileSystems.create(tempFile, MimeTypes.BINARY))) {
    // Execute the copy to the temporary file
    ByteStreams.copy(readerChannel, writerChannel);
   }
   // Rename the temporary file to the output file
   FileSystems.rename(ImmutableList.of(tempFile), ImmutableList.of(outputFile));
   // Output the path to the compressed file
   context.output(outputFile.toString());
  } catch (IOException e) {
   LOG.error("Error occurred during compression of {}", inputFile.toString(), e);
   context.output(DEADLETTER_TAG, KV.of(inputFile.toString(), e.getMessage()));
  }
 }

Code example source: org.apache.beam/beam-runners-google-cloud-dataflow-java

private StagingResult tryStagePackage(PackageAttributes attributes, CreateOptions createOptions)
  throws IOException, InterruptedException {
 String sourceDescription = attributes.getSourceDescription();
 String target = attributes.getDestination().getLocation();
 LOG.info("Uploading {} to {}", sourceDescription, target);
 try (WritableByteChannel writer =
   FileSystems.create(FileSystems.matchNewResource(target, false), createOptions)) {
  if (attributes.getBytes() != null) {
   ByteSource.wrap(attributes.getBytes()).copyTo(Channels.newOutputStream(writer));
  } else {
   File sourceFile = attributes.getSource();
   checkState(
     sourceFile != null,
     "Internal inconsistency: we tried to stage something to %s, but neither a source file "
       + "nor the byte content was specified",
     target);
   if (sourceFile.isDirectory()) {
    ZipFiles.zipDirectory(sourceFile, Channels.newOutputStream(writer));
   } else {
    Files.asByteSource(sourceFile).copyTo(Channels.newOutputStream(writer));
   }
  }
 }
 return StagingResult.uploaded(attributes);
}

Code example source: org.apache.beam/beam-runners-google-cloud-dataflow-java

new BufferedWriter(
     new OutputStreamWriter(
       Channels.newOutputStream(FileSystems.create(fileResource, MimeTypes.TEXT)),
       UTF_8)))) {
printWriter.print(workSpecJson);
