Usage of com.amazonaws.services.s3.transfer.Upload.waitForUploadResult() with code examples

x33g5p2x, reposted 2022-02-01 under "Other"

This article collects code examples for the Java method com.amazonaws.services.s3.transfer.Upload.waitForUploadResult(), showing how it is used in practice. The examples are drawn from selected open-source projects found on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of the method:
Package: com.amazonaws.services.s3.transfer
Class: Upload
Method: waitForUploadResult

Upload.waitForUploadResult overview

Waits for this upload to complete and returns the result of this upload. This is a blocking call. Be prepared to handle errors when calling this method: any errors that occurred during the asynchronous transfer will be re-thrown through this method.
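The semantics mirror a blocking Future.get(): the calling thread parks until the transfer finishes, and a failure raised on the worker thread resurfaces on the caller. As a self-contained sketch of that pattern using only the JDK (the class and method names here are illustrative stand-ins, not part of the AWS SDK):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class BlockingWaitSketch {
    // Stand-in for waitForUploadResult(): block until the async task
    // completes and rethrow any worker-thread failure to the caller.
    static String waitForResult(CompletableFuture<String> transfer)
            throws InterruptedException {
        try {
            return transfer.get(); // blocks the calling thread
        } catch (ExecutionException e) {
            // the asynchronous error is re-thrown on the waiting thread
            throw new RuntimeException("upload failed", e.getCause());
        }
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> ok =
                CompletableFuture.supplyAsync(() -> "etag-123");
        System.out.println(waitForResult(ok)); // prints etag-123
    }
}
```

As the examples below show, callers typically wrap the InterruptedException into a domain exception (DataStoreException, InterruptedIOException) rather than swallowing it.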

Code examples

Example source: pinterest/secor

public UploadResult get() throws Exception {
    return mUpload.waitForUploadResult();
  }
}

Example source: Alluxio/alluxio

mManager.upload(putReq).waitForUploadResult();
if (!mFile.delete()) {
 LOG.error("Failed to delete temporary file @ {}", mFile.getPath());

Example source: magefree/mage

upload.waitForUploadResult();
logger.info("Sync Complete For " + path + " to bucket: " + existingBucketName + " with AWS Access Id: " + accessKeyId);
new File(path);

Example source: EMCECS/ecs-sync

@Override
  public String call() throws Exception {
    return upload.waitForUploadResult().getETag();
  }
}, OPERATION_MPU);

Example source: apache/jackrabbit-oak

@Override
public void addMetadataRecord(File input, String name) throws DataStoreException {
  checkArgument(input != null, "input should not be null");
  checkArgument(!Strings.isNullOrEmpty(name), "name should not be empty");
  ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
  try {
    Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
    Upload upload = tmx.upload(s3ReqDecorator
      .decorate(new PutObjectRequest(bucket, addMetaKeyPrefix(name), input)));
    upload.waitForUploadResult();
  } catch (InterruptedException e) {
    LOG.error("Exception in uploading metadata file {}", new Object[] {input, e});
    throw new DataStoreException("Error in uploading metadata file", e);
  } finally {
    if (contextClassLoader != null) {
      Thread.currentThread().setContextClassLoader(contextClassLoader);
    }
  }
}

Example source: org.apache.jackrabbit/oak-blob-cloud

@Override
public void addMetadataRecord(File input, String name) throws DataStoreException {
  checkArgument(input != null, "input should not be null");
  checkArgument(!Strings.isNullOrEmpty(name), "name should not be empty");
  ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
  try {
    Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
    Upload upload = tmx.upload(s3ReqDecorator
      .decorate(new PutObjectRequest(bucket, addMetaKeyPrefix(name), input)));
    upload.waitForUploadResult();
  } catch (InterruptedException e) {
    LOG.error("Exception in uploading metadata file {}", new Object[] {input, e});
    throw new DataStoreException("Error in uploading metadata file", e);
  } finally {
    if (contextClassLoader != null) {
      Thread.currentThread().setContextClassLoader(contextClassLoader);
    }
  }
}

Example source: org.apache.jackrabbit/oak-blob-cloud

@Override
public void addMetadataRecord(final InputStream input, final String name) throws DataStoreException {
  checkArgument(input != null, "input should not be null");
  checkArgument(!Strings.isNullOrEmpty(name), "name should not be empty");
  ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
  try {
    Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
    Upload upload = tmx.upload(s3ReqDecorator
      .decorate(new PutObjectRequest(bucket, addMetaKeyPrefix(name), input, new ObjectMetadata())));
    upload.waitForUploadResult();
  } catch (InterruptedException e) {
    LOG.error("Error in uploading", e);
    throw new DataStoreException("Error in uploading", e);
  } finally {
    if (contextClassLoader != null) {
      Thread.currentThread().setContextClassLoader(contextClassLoader);
    }
  }
}

Example source: apache/jackrabbit-oak

@Override
public void addMetadataRecord(final InputStream input, final String name) throws DataStoreException {
  checkArgument(input != null, "input should not be null");
  checkArgument(!Strings.isNullOrEmpty(name), "name should not be empty");
  ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
  try {
    Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
    Upload upload = tmx.upload(s3ReqDecorator
      .decorate(new PutObjectRequest(bucket, addMetaKeyPrefix(name), input, new ObjectMetadata())));
    upload.waitForUploadResult();
  } catch (InterruptedException e) {
    LOG.error("Error in uploading", e);
    throw new DataStoreException("Error in uploading", e);
  } finally {
    if (contextClassLoader != null) {
      Thread.currentThread().setContextClassLoader(contextClassLoader);
    }
  }
}

Example source: apache/streams

private void addFile() throws Exception {
 InputStream is = new ByteArrayInputStream(this.outputStream.toByteArray());
 int contentLength = outputStream.size();
 TransferManager transferManager = new TransferManager(amazonS3Client);
 ObjectMetadata metadata = new ObjectMetadata();
 metadata.setExpirationTime(DateTime.now().plusDays(365 * 3).toDate());
 metadata.setContentLength(contentLength);
 metadata.addUserMetadata("writer", "org.apache.streams");
 for (String s : metaData.keySet()) {
  metadata.addUserMetadata(s, metaData.get(s));
 }
 String fileNameToWrite = path + fileName;
 Upload upload = transferManager.upload(bucketName, fileNameToWrite, is, metadata);
 try {
  upload.waitForUploadResult();
  is.close();
  transferManager.shutdownNow(false);
  LOGGER.info("S3 File Close[{} kb] - {}", contentLength / 1024, path + fileName);
 } catch (Exception ignored) {
  LOGGER.trace("Ignoring", ignored);
 }
}

Example source: com.ibm.stocator/stocator

@Override
public void close() throws IOException {
 if (closed.getAndSet(true)) {
  return;
 }
 mBackupOutputStream.close();
 LOG.debug("OutputStream for key '{}' closed. Now beginning upload", mKey);
 try {
  final ObjectMetadata om = new ObjectMetadata();
  om.setContentLength(mBackupFile.length());
  om.setContentType(mContentType);
  om.setUserMetadata(mMetadata);
  PutObjectRequest putObjectRequest = new PutObjectRequest(mBucketName, mKey, mBackupFile);
  putObjectRequest.setMetadata(om);
  Upload upload = transfers.upload(putObjectRequest);
  upload.waitForUploadResult();
 } catch (InterruptedException e) {
  throw (InterruptedIOException) new InterruptedIOException(e.toString())
    .initCause(e);
 } catch (AmazonClientException e) {
  throw new IOException(String.format("saving output %s %s", mKey, e));
 } finally {
  if (!mBackupFile.delete()) {
   LOG.warn("Could not delete temporary cos file: {}", mBackupOutputStream);
  }
  super.close();
 }
 LOG.debug("OutputStream for key '{}' upload complete", mKey);
}

Example source: CODAIT/stocator

@Override
public void close() throws IOException {
 if (closed.getAndSet(true)) {
  return;
 }
 mBackupOutputStream.close();
 LOG.debug("OutputStream for key '{}' closed. Now beginning upload", mKey);
 try {
  final ObjectMetadata om = new ObjectMetadata();
  om.setContentLength(mBackupFile.length());
  om.setContentType(mContentType);
  om.setUserMetadata(mMetadata);
  PutObjectRequest putObjectRequest = new PutObjectRequest(mBucketName, mKey, mBackupFile);
  putObjectRequest.setMetadata(om);
  Upload upload = transfers.upload(putObjectRequest);
  upload.waitForUploadResult();
 } catch (InterruptedException e) {
  throw (InterruptedIOException) new InterruptedIOException(e.toString())
    .initCause(e);
 } catch (AmazonClientException e) {
  throw new IOException(String.format("saving output %s %s", mKey, e));
 } finally {
  if (!mBackupFile.delete()) {
   LOG.warn("Could not delete temporary cos file: {}", mBackupOutputStream);
  }
  super.close();
 }
 LOG.debug("OutputStream for key '{}' upload complete", mKey);
}

Example source: io.ifar.skid-road/skid-road

@Override
public void put(String uri, File f) throws AmazonClientException {
  LOG.trace("Uploading " + uri);
  String[] parts = pieces(uri);
  ObjectMetadata om = new ObjectMetadata();
  om.setContentLength(f.length());
  if (f.getName().endsWith("gzip")) {
    om.setContentEncoding("gzip");
  }
  uploadsInProgress.incrementAndGet();
  try {
    PutObjectRequest req = new PutObjectRequest(parts[0],parts[1],f);
    req.setMetadata(om);
    UploadResult resp = svc.upload(req).waitForUploadResult();
    LOG.trace("Uploaded " + uri + " with ETag " + resp.getETag());
  } catch (InterruptedException ie) {
    LOG.error("Interrupted while uploading {} to {}.",
        f.getPath(), uri);
    throw Throwables.propagate(ie);
  } finally {
    uploadsInProgress.decrementAndGet();
  }
}

Example source: ch.cern.hadoop/hadoop-aws

upload.addProgressListener(listener);
upload.waitForUploadResult();

Example source: org.apache.jackrabbit/oak-blob-cloud

bucket, key, file)));
  up.waitForUploadResult();
  LOG.debug("synchronous upload to identifier [{}] completed.", identifier);
} catch (Exception e2 ) {

Example source: org.apache.hadoop/hadoop-aws

/**
 * Wait for an upload to complete.
 * If the waiting for completion is interrupted, the upload will be
 * aborted before an {@code InterruptedIOException} is thrown.
 * @param key destination key
 * @param uploadInfo upload to wait for
 * @return the upload result
 * @throws InterruptedIOException if the blocking was interrupted.
 */
UploadResult waitForUploadCompletion(String key, UploadInfo uploadInfo)
  throws InterruptedIOException {
 Upload upload = uploadInfo.getUpload();
 try {
  UploadResult result = upload.waitForUploadResult();
  incrementPutCompletedStatistics(true, uploadInfo.getLength());
  return result;
 } catch (InterruptedException e) {
  LOG.info("Interrupted: aborting upload");
  incrementPutCompletedStatistics(false, uploadInfo.getLength());
  upload.abort();
  throw (InterruptedIOException)
    new InterruptedIOException("Interrupted in PUT to "
      + keyToQualifiedPath(key))
    .initCause(e);
 }
}

Example source: apache/jackrabbit-oak

bucket, key, file)));
  up.waitForUploadResult();
  LOG.debug("synchronous upload to identifier [{}] completed.", identifier);
} catch (Exception e2 ) {

Example source: Aloisius/hadoop-s3a

upload.addProgressListener(listener);
upload.waitForUploadResult();

Example source: com.ibm.stocator/stocator

PutObjectRequest putObjectRequest = new PutObjectRequest(mBucket, objName, im, om);
Upload upload = transfers.upload(putObjectRequest);
upload.waitForUploadResult();
OutputStream fakeStream = new OutputStream() {

Example source: io.digdag/digdag-storage-s3

try (InputStream in = payload.open()) {
  PutObjectRequest req = new PutObjectRequest(bucket, key, in, meta);
  UploadResult result = transferManager.upload(req).waitForUploadResult();

Example source: ch.cern.hadoop/hadoop-aws

up.addProgressListener(progressListener);
try {
 up.waitForUploadResult();
 statistics.incrementWriteOps(1);
} catch (InterruptedException e) {
