Usage of the com.amazonaws.services.s3.transfer.Upload.addProgressListener() method, with code examples


This article collects Java code examples of the com.amazonaws.services.s3.transfer.Upload.addProgressListener() method and shows how it is used in practice. The examples come from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful reference material. Details of Upload.addProgressListener() are as follows:
Package: com.amazonaws.services.s3.transfer
Class: Upload
Method: addProgressListener

About Upload.addProgressListener

Upload is part of the TransferManager API in the AWS SDK for Java 1.x and extends the Transfer interface. addProgressListener(ProgressListener listener) registers a listener that is called back with ProgressEvents while the asynchronous upload runs, covering byte-transfer updates as well as transfer state changes such as TRANSFER_COMPLETED_EVENT.
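
For orientation, a minimal, self-contained sketch of typical usage follows (assuming AWS SDK for Java 1.x; the bucket name, key, and file path are placeholders rather than values from any of the projects below):

import java.io.File;

import com.amazonaws.event.ProgressListener;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class UploadProgressExample {
    public static void main(String[] args) throws InterruptedException {
        TransferManager tm = TransferManagerBuilder.standard().build();
        try {
            Upload upload = tm.upload("my-bucket", "my-key", new File("/tmp/data.bin"));
            // The cast selects the com.amazonaws.event.ProgressListener overload;
            // the SDK calls the listener back as bytes are transferred.
            upload.addProgressListener((ProgressListener) progressEvent ->
                    System.out.println("Transferred so far: " + progressEvent.getBytesTransferred() + " bytes"));
            upload.waitForCompletion();
        } finally {
            tm.shutdownNow();
        }
    }
}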

Code examples

Code example source: awsdocs/aws-doc-sdk-examples

u.addProgressListener(new ProgressListener() {
  public void progressChanged(ProgressEvent e) {
    double pct = e.getBytesTransferred() * 100.0 / e.getBytes();
    // ... (remainder of the listener is elided in this excerpt)
  }
});
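
The listener above derives a percentage from the event's byte counts. The same numbers can also be polled from the Upload handle through its TransferProgress object, which is sometimes more convenient than wiring a listener. A minimal sketch (assuming AWS SDK for Java 1.x; the class and method names are illustrative, not taken from the AWS examples):

import com.amazonaws.services.s3.transfer.TransferProgress;
import com.amazonaws.services.s3.transfer.Upload;

// Sketch: polling progress from the Upload handle instead of computing it inside the listener.
// The caller passes an already-started Upload returned by TransferManager.upload(...).
public class UploadProgressPoller {
    public static void logUntilDone(Upload upload) throws InterruptedException {
        while (!upload.isDone()) {
            TransferProgress progress = upload.getProgress();
            System.out.printf("%.1f%% (%d of %d bytes)%n",
                    progress.getPercentTransferred(),
                    progress.getBytesTransferred(),
                    progress.getTotalBytesToTransfer());
            Thread.sleep(500);
        }
    }
}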

Code example source: prestodb/presto

upload.addProgressListener(createProgressListener(upload));

Code example source: apache/cloudstack

upload.addProgressListener(new ProgressListener() {
  @Override
  public void progressChanged(ProgressEvent progressEvent) {
    // ... (listener body elided in this excerpt)
  }
});

Code example source: ch.cern.hadoop/hadoop-aws

upload.addProgressListener(listener);

Code example source: Aloisius/hadoop-s3a

upload.addProgressListener(listener);

Code example source: jenkinsci/pipeline-aws-plugin

upload.addProgressListener((ProgressListener) progressEvent -> {
  if (progressEvent.getEventType() == ProgressEventType.TRANSFER_COMPLETED_EVENT) {
    RemoteUploader.this.taskListener.getLogger().println("Finished: " + upload.getDescription());
  }
});

fileUpload = mgr.uploadDirectory(this.bucket, this.path, localFile, true, metadatasProvider);
for (final Upload upload : fileUpload.getSubTransfers()) {
  upload.addProgressListener((ProgressListener) progressEvent -> {
    if (progressEvent.getEventType() == ProgressEventType.TRANSFER_COMPLETED_EVENT) {
      RemoteUploader.this.taskListener.getLogger().println("Finished: " + upload.getDescription());
    }
  });
}

Code example source: uk.co.nichesolutions.presto/presto-hive

private void uploadObject()
    throws IOException
{
  try {
    log.debug("Starting upload for host: %s, key: %s, file: %s, size: %s", host, key, tempFile, tempFile.length());
    STATS.uploadStarted();
    PutObjectRequest request = new PutObjectRequest(host, key, tempFile);
    if (sseEnabled) {
      ObjectMetadata metadata = new ObjectMetadata();
      metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
      request.setMetadata(metadata);
    }
    Upload upload = transferManager.upload(request);
    if (log.isDebugEnabled()) {
      upload.addProgressListener(createProgressListener(upload));
    }
    upload.waitForCompletion();
    STATS.uploadSuccessful();
    log.debug("Completed upload for host: %s, key: %s", host, key);
  }
  catch (AmazonClientException e) {
    STATS.uploadFailed();
    throw new IOException(e);
  }
  catch (InterruptedException e) {
    STATS.uploadFailed();
    Thread.currentThread().interrupt();
    throw new InterruptedIOException();
  }
}
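
The createProgressListener(upload) helper called above is not part of this excerpt. For illustration only, a simplified, hypothetical version is sketched below; it is not the actual Presto implementation, and it assumes the surrounding class's log field (a printf-style logger, as used in uploadObject) and the com.amazonaws.event ProgressListener/ProgressEventType types:

// Hypothetical sketch, not the Presto implementation: returns a listener that logs
// byte-transfer events the SDK reports for the given upload.
private ProgressListener createProgressListener(Upload upload)
{
    return progressEvent -> {
        if (progressEvent.getEventType() == ProgressEventType.REQUEST_BYTE_TRANSFER_EVENT) {
            log.debug("Upload progress: %s bytes transferred (state: %s)",
                    progressEvent.getBytesTransferred(), upload.getState());
        }
    };
}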

Code example source: apache/jackrabbit

up.addProgressListener(new S3UploadProgressListener(up,
  identifier, file, callback));
// LOG.debug(...) call elided in this excerpt

Code example source: org.apache.jackrabbit/jackrabbit-aws-ext

up.addProgressListener(new S3UploadProgressListener(up,
  identifier, file, callback));
// LOG.debug(...) call elided in this excerpt

Code example source: classmethod/gradle-aws-plugin

upload.addProgressListener(new ProgressListener() {

Code example source: jenkinsci/pipeline-aws-plugin

fileUpload = mgr.uploadFileList(this.bucket, this.path, localFile, this.fileList, metadatasProvider);
for (final Upload upload : fileUpload.getSubTransfers()) {
  upload.addProgressListener((ProgressListener) progressEvent -> {
    if (progressEvent.getEventType() == ProgressEventType.TRANSFER_COMPLETED_EVENT) {
      RemoteListUploader.this.taskListener.getLogger().println("Finished: " + upload.getDescription());
    }
  });
}

Code example source: com.conveyal/r5

@Override
  public void saveData(String directory, String fileName, PersistenceBuffer persistenceBuffer) {
    try {
      ObjectMetadata metadata = new ObjectMetadata();
      // Set content encoding to gzip. This way browsers will decompress on download using native deflate code.
      // http://www.rightbrainnetworks.com/blog/serving-compressed-gzipped-static-files-from-amazon-s3-or-cloudfront/
      metadata.setContentEncoding("gzip");
      metadata.setContentType(persistenceBuffer.getMimeType());
      // We must setContentLength or the S3 client will re-buffer the InputStream into another memory buffer.
      metadata.setContentLength(persistenceBuffer.getSize());
//            amazonS3.putObject(directory, fileName, persistenceBuffer.getInputStream(), metadata);
      final Upload upload = transferManager.upload(directory, fileName, persistenceBuffer.getInputStream(), metadata);
      upload.addProgressListener(new UploadProgressLogger(upload));
      // Block until upload completes to avoid accumulating unlimited uploads in memory.
      upload.waitForCompletion();
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

Code example source: conveyal/r5

@Override
  public void saveData(String directory, String fileName, PersistenceBuffer persistenceBuffer) {
    try {
      ObjectMetadata metadata = new ObjectMetadata();
      // Set content encoding to gzip. This way browsers will decompress on download using native deflate code.
      // http://www.rightbrainnetworks.com/blog/serving-compressed-gzipped-static-files-from-amazon-s3-or-cloudfront/
      metadata.setContentEncoding("gzip");
      metadata.setContentType(persistenceBuffer.getMimeType());
      // We must setContentLength or the S3 client will re-buffer the InputStream into another memory buffer.
      metadata.setContentLength(persistenceBuffer.getSize());
//            amazonS3.putObject(directory, fileName, persistenceBuffer.getInputStream(), metadata);
      final Upload upload = transferManager.upload(directory, fileName, persistenceBuffer.getInputStream(), metadata);
      upload.addProgressListener(new UploadProgressLogger(upload));
      // Block until upload completes to avoid accumulating unlimited uploads in memory.
      upload.waitForCompletion();
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

Code example source: ch.cern.hadoop/hadoop-aws

up.addProgressListener(progressListener);
try {
 up.waitForUploadResult();

Code example source: prestosql/presto

upload.addProgressListener(createProgressListener(upload));

Code example source: org.apache.hadoop/hadoop-aws

/**
 * Execute a PUT via the transfer manager, blocking for completion,
 * updating the metastore afterwards.
 * If the waiting for completion is interrupted, the upload will be
 * aborted before an {@code InterruptedIOException} is thrown.
 * @param putObjectRequest request
 * @param progress optional progress callback
 * @return the upload result
 * @throws InterruptedIOException if the blocking was interrupted.
 */
@Retries.OnceRaw("For PUT; post-PUT actions are RetriesExceptionsSwallowed")
UploadResult executePut(PutObjectRequest putObjectRequest,
  Progressable progress)
  throws InterruptedIOException {
 String key = putObjectRequest.getKey();
 UploadInfo info = putObject(putObjectRequest);
 Upload upload = info.getUpload();
 ProgressableProgressListener listener = new ProgressableProgressListener(
   this, key, upload, progress);
 upload.addProgressListener(listener);
 UploadResult result = waitForUploadCompletion(key, info);
 listener.uploadCompleted();
 // post-write actions
 finishedWrite(key, info.getLength());
 return result;
}

Code example source: Aloisius/hadoop-s3a

up.addProgressListener(progressListener);
try {
 up.waitForUploadResult();
