Usage of com.amazonaws.services.s3.transfer.Upload.waitForUploadResult(), with code examples

x33g5p2x · reposted on 2022-02-01 under "Other"

This article collects code examples of the Java method com.amazonaws.services.s3.transfer.Upload.waitForUploadResult(), showing how it is used in practice. The examples are drawn from selected open-source projects found on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of the method:
Package: com.amazonaws.services.s3.transfer
Class: Upload
Method: waitForUploadResult

About Upload.waitForUploadResult

Waits for this upload to complete and returns the result of this upload. This is a blocking call. Be prepared to handle errors when calling this method: any errors that occurred during the asynchronous transfer will be re-thrown through this method.
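Before the harvested examples, here is a minimal, self-contained sketch of the typical call pattern. The bucket name, key, and file path are hypothetical placeholders, and running it requires valid AWS credentials; the interrupt handling (restore the interrupt flag, then abort the in-flight transfer) follows the convention used by several of the projects quoted below.

```java
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import com.amazonaws.services.s3.transfer.model.UploadResult;
import java.io.File;

public class WaitForUploadExample {
    public static void main(String[] args) {
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(AmazonS3ClientBuilder.defaultClient())
                .build();
        // Hypothetical bucket, key, and file — replace with real values.
        Upload upload = tm.upload("my-bucket", "path/to/key", new File("/tmp/data.bin"));
        try {
            // Blocks until the transfer finishes; errors from the
            // asynchronous transfer are re-thrown here.
            UploadResult result = upload.waitForUploadResult();
            System.out.println("ETag: " + result.getETag());
        } catch (InterruptedException e) {
            // Restore the interrupt flag and abort the in-flight upload.
            Thread.currentThread().interrupt();
            upload.abort();
        } finally {
            // Shut down the TransferManager's threads, keep the S3 client.
            tm.shutdownNow(false);
        }
    }
}
```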

Code examples

Example source: pinterest/secor

```java
public UploadResult get() throws Exception {
    return mUpload.waitForUploadResult();
}
```

Example source: Alluxio/alluxio

```java
mManager.upload(putReq).waitForUploadResult();
if (!mFile.delete()) {
    LOG.error("Failed to delete temporary file @ {}", mFile.getPath());
}
```

Example source: magefree/mage

```java
upload.waitForUploadResult();
logger.info("Sync Complete For " + path + " to bucket: " + existingBucketName
        + " with AWS Access Id: " + accessKeyId);
new File(path);
```

Example source: EMCECS/ecs-sync

```java
@Override
public String call() throws Exception {
    return upload.waitForUploadResult().getETag();
}
}, OPERATION_MPU);
```

Example source: apache/jackrabbit-oak

```java
@Override
public void addMetadataRecord(File input, String name) throws DataStoreException {
    checkArgument(input != null, "input should not be null");
    checkArgument(!Strings.isNullOrEmpty(name), "name should not be empty");
    ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
        Upload upload = tmx.upload(s3ReqDecorator
            .decorate(new PutObjectRequest(bucket, addMetaKeyPrefix(name), input)));
        upload.waitForUploadResult();
    } catch (InterruptedException e) {
        LOG.error("Exception in uploading metadata file {}", new Object[] {input, e});
        throw new DataStoreException("Error in uploading metadata file", e);
    } finally {
        if (contextClassLoader != null) {
            Thread.currentThread().setContextClassLoader(contextClassLoader);
        }
    }
}
```


Example source: org.apache.jackrabbit/oak-blob-cloud

```java
@Override
public void addMetadataRecord(final InputStream input, final String name) throws DataStoreException {
    checkArgument(input != null, "input should not be null");
    checkArgument(!Strings.isNullOrEmpty(name), "name should not be empty");
    ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
        Upload upload = tmx.upload(s3ReqDecorator
            .decorate(new PutObjectRequest(bucket, addMetaKeyPrefix(name), input, new ObjectMetadata())));
        upload.waitForUploadResult();
    } catch (InterruptedException e) {
        LOG.error("Error in uploading", e);
        throw new DataStoreException("Error in uploading", e);
    } finally {
        if (contextClassLoader != null) {
            Thread.currentThread().setContextClassLoader(contextClassLoader);
        }
    }
}
```


Example source: apache/streams

```java
private void addFile() throws Exception {
    InputStream is = new ByteArrayInputStream(this.outputStream.toByteArray());
    int contentLength = outputStream.size();
    TransferManager transferManager = new TransferManager(amazonS3Client);
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setExpirationTime(DateTime.now().plusDays(365 * 3).toDate());
    metadata.setContentLength(contentLength);
    metadata.addUserMetadata("writer", "org.apache.streams");
    for (String s : metaData.keySet()) {
        metadata.addUserMetadata(s, metaData.get(s));
    }
    String fileNameToWrite = path + fileName;
    Upload upload = transferManager.upload(bucketName, fileNameToWrite, is, metadata);
    try {
        upload.waitForUploadResult();
        is.close();
        transferManager.shutdownNow(false);
        LOGGER.info("S3 File Close[{} kb] - {}", contentLength / 1024, path + fileName);
    } catch (Exception ignored) {
        LOGGER.trace("Ignoring", ignored);
    }
}
```

Example source: com.ibm.stocator/stocator

```java
@Override
public void close() throws IOException {
    if (closed.getAndSet(true)) {
        return;
    }
    mBackupOutputStream.close();
    LOG.debug("OutputStream for key '{}' closed. Now beginning upload", mKey);
    try {
        final ObjectMetadata om = new ObjectMetadata();
        om.setContentLength(mBackupFile.length());
        om.setContentType(mContentType);
        om.setUserMetadata(mMetadata);
        PutObjectRequest putObjectRequest = new PutObjectRequest(mBucketName, mKey, mBackupFile);
        putObjectRequest.setMetadata(om);
        Upload upload = transfers.upload(putObjectRequest);
        upload.waitForUploadResult();
    } catch (InterruptedException e) {
        throw (InterruptedIOException) new InterruptedIOException(e.toString())
            .initCause(e);
    } catch (AmazonClientException e) {
        throw new IOException(String.format("saving output %s %s", mKey, e));
    } finally {
        if (!mBackupFile.delete()) {
            LOG.warn("Could not delete temporary cos file: {}", mBackupOutputStream);
        }
        super.close();
    }
    LOG.debug("OutputStream for key '{}' upload complete", mKey);
}
```


Example source: io.ifar.skid-road/skid-road

```java
@Override
public void put(String uri, File f) throws AmazonClientException {
    LOG.trace("Uploading " + uri);
    String[] parts = pieces(uri);
    ObjectMetadata om = new ObjectMetadata();
    om.setContentLength(f.length());
    if (f.getName().endsWith("gzip")) {
        om.setContentEncoding("gzip");
    }
    uploadsInProgress.incrementAndGet();
    try {
        PutObjectRequest req = new PutObjectRequest(parts[0], parts[1], f);
        req.setMetadata(om);
        UploadResult resp = svc.upload(req).waitForUploadResult();
        LOG.trace("Uploaded " + uri + " with ETag " + resp.getETag());
    } catch (InterruptedException ie) {
        LOG.error("Interrupted while uploading {} to {}.",
            f.getPath(), uri);
        throw Throwables.propagate(ie);
    } finally {
        uploadsInProgress.decrementAndGet();
    }
}
```

Example source: ch.cern.hadoop/hadoop-aws

```java
upload.addProgressListener(listener);
upload.waitForUploadResult();
```

Example source: org.apache.jackrabbit/oak-blob-cloud

```java
    bucket, key, file)));
up.waitForUploadResult();
LOG.debug("synchronous upload to identifier [{}] completed.", identifier);
} catch (Exception e2) {
```

Example source: org.apache.hadoop/hadoop-aws

```java
/**
 * Wait for an upload to complete.
 * If the waiting for completion is interrupted, the upload will be
 * aborted before an {@code InterruptedIOException} is thrown.
 * @param key destination key
 * @param uploadInfo upload to wait for
 * @return the upload result
 * @throws InterruptedIOException if the blocking was interrupted.
 */
UploadResult waitForUploadCompletion(String key, UploadInfo uploadInfo)
        throws InterruptedIOException {
    Upload upload = uploadInfo.getUpload();
    try {
        UploadResult result = upload.waitForUploadResult();
        incrementPutCompletedStatistics(true, uploadInfo.getLength());
        return result;
    } catch (InterruptedException e) {
        LOG.info("Interrupted: aborting upload");
        incrementPutCompletedStatistics(false, uploadInfo.getLength());
        upload.abort();
        throw (InterruptedIOException)
            new InterruptedIOException("Interrupted in PUT to "
                + keyToQualifiedPath(key))
                .initCause(e);
    }
}
```



Example source: com.ibm.stocator/stocator

```java
PutObjectRequest putObjectRequest = new PutObjectRequest(mBucket, objName, im, om);
Upload upload = transfers.upload(putObjectRequest);
upload.waitForUploadResult();
OutputStream fakeStream = new OutputStream() {
```

Example source: io.digdag/digdag-storage-s3

```java
try (InputStream in = payload.open()) {
    PutObjectRequest req = new PutObjectRequest(bucket, key, in, meta);
    UploadResult result = transferManager.upload(req).waitForUploadResult();
```

Example source: ch.cern.hadoop/hadoop-aws

```java
up.addProgressListener(progressListener);
try {
    up.waitForUploadResult();
    statistics.incrementWriteOps(1);
} catch (InterruptedException e) {
```
