Usage of the com.amazonaws.services.s3.transfer.Upload.addProgressListener() method, with code examples

x33g5p2x, reposted 2022-02-01

This article collects Java code examples for the com.amazonaws.services.s3.transfer.Upload.addProgressListener() method and shows how it is used in practice. The examples come mainly from GitHub, Stack Overflow, Maven and similar platforms, extracted from selected projects, and should serve as a useful reference. The details of Upload.addProgressListener() are as follows:
Package path: com.amazonaws.services.s3.transfer.Upload
Class name: Upload
Method name: addProgressListener

About Upload.addProgressListener

addProgressListener(ProgressListener listener) is declared on the Transfer interface that Upload extends. It registers a listener whose progressChanged(ProgressEvent) callback is invoked as the transfer proceeds, covering both byte-count updates and transfer state changes such as TRANSFER_COMPLETED_EVENT. A minimal usage sketch follows.
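
For orientation before the project excerpts, here is a minimal, self-contained sketch of the common pattern with the AWS SDK for Java 1.x TransferManager. The bucket name, object key, and local file path are placeholder assumptions, and the sketch assumes default AWS credentials and region configuration are available.

    import java.io.File;

    import com.amazonaws.event.ProgressEvent;
    import com.amazonaws.event.ProgressListener;
    import com.amazonaws.services.s3.transfer.TransferManager;
    import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
    import com.amazonaws.services.s3.transfer.Upload;

    public class UploadProgressExample {
        public static void main(String[] args) throws InterruptedException {
            // Placeholder bucket, key and file; replace with real values.
            String bucket = "example-bucket";
            String key = "example-key";
            File file = new File("/tmp/example-file");

            TransferManager tm = TransferManagerBuilder.standard().build();
            try {
                final Upload upload = tm.upload(bucket, key, file);

                // Register a listener; progressChanged() is called on every progress event.
                upload.addProgressListener(new ProgressListener() {
                    @Override
                    public void progressChanged(ProgressEvent progressEvent) {
                        // TransferProgress tracks cumulative progress across events.
                        System.out.printf("Transferred: %.1f%%%n",
                                upload.getProgress().getPercentTransferred());
                    }
                });

                // Block until the upload finishes (throws on failure or interruption).
                upload.waitForCompletion();
            } finally {
                tm.shutdownNow();
            }
        }
    }

Note that waitForCompletion() blocks the calling thread until the transfer finishes; if you only need to react to events, the listener alone is sufficient.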

Code examples

Code example source: awsdocs/aws-doc-sdk-examples

    u.addProgressListener(new ProgressListener() {
        public void progressChanged(ProgressEvent e) {
            double pct = e.getBytesTransferred() * 100.0 / e.getBytes();

Code example source: prestodb/presto

    upload.addProgressListener(createProgressListener(upload));

Code example source: apache/cloudstack

    upload.addProgressListener(new ProgressListener() {
        @Override
        public void progressChanged(ProgressEvent progressEvent) {

Code example source: ch.cern.hadoop/hadoop-aws

    upload.addProgressListener(listener);

Code example source: Aloisius/hadoop-s3a

    upload.addProgressListener(listener);

Code example source: jenkinsci/pipeline-aws-plugin

    upload.addProgressListener((ProgressListener) progressEvent -> {
        if (progressEvent.getEventType() == ProgressEventType.TRANSFER_COMPLETED_EVENT) {
            RemoteUploader.this.taskListener.getLogger().println("Finished: " + upload.getDescription());

    fileUpload = mgr.uploadDirectory(this.bucket, this.path, localFile, true, metadatasProvider);
    for (final Upload upload : fileUpload.getSubTransfers()) {
        upload.addProgressListener((ProgressListener) progressEvent -> {
            if (progressEvent.getEventType() == ProgressEventType.TRANSFER_COMPLETED_EVENT) {
                RemoteUploader.this.taskListener.getLogger().println("Finished: " + upload.getDescription());

Code example source: uk.co.nichesolutions.presto/presto-hive

    private void uploadObject()
            throws IOException
    {
        try {
            log.debug("Starting upload for host: %s, key: %s, file: %s, size: %s", host, key, tempFile, tempFile.length());
            STATS.uploadStarted();
            PutObjectRequest request = new PutObjectRequest(host, key, tempFile);
            if (sseEnabled) {
                ObjectMetadata metadata = new ObjectMetadata();
                metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
                request.setMetadata(metadata);
            }
            Upload upload = transferManager.upload(request);
            if (log.isDebugEnabled()) {
                upload.addProgressListener(createProgressListener(upload));
            }
            upload.waitForCompletion();
            STATS.uploadSuccessful();
            log.debug("Completed upload for host: %s, key: %s", host, key);
        }
        catch (AmazonClientException e) {
            STATS.uploadFailed();
            throw new IOException(e);
        }
        catch (InterruptedException e) {
            STATS.uploadFailed();
            Thread.currentThread().interrupt();
            throw new InterruptedIOException();
        }
    }

Code example source: apache/jackrabbit

    up.addProgressListener(new S3UploadProgressListener(up,
        identifier, file, callback));
    LOG.debug(

Code example source: org.apache.jackrabbit/jackrabbit-aws-ext

    up.addProgressListener(new S3UploadProgressListener(up,
        identifier, file, callback));
    LOG.debug(

Code example source: classmethod/gradle-aws-plugin

    upload.addProgressListener(new ProgressListener() {

Code example source: jenkinsci/pipeline-aws-plugin

    fileUpload = mgr.uploadFileList(this.bucket, this.path, localFile, this.fileList, metadatasProvider);
    for (final Upload upload : fileUpload.getSubTransfers()) {
        upload.addProgressListener((ProgressListener) progressEvent -> {
            if (progressEvent.getEventType() == ProgressEventType.TRANSFER_COMPLETED_EVENT) {
                RemoteListUploader.this.taskListener.getLogger().println("Finished: " + upload.getDescription());

Code example source: com.conveyal/r5

    @Override
    public void saveData(String directory, String fileName, PersistenceBuffer persistenceBuffer) {
        try {
            ObjectMetadata metadata = new ObjectMetadata();
            // Set content encoding to gzip. This way browsers will decompress on download using native deflate code.
            // http://www.rightbrainnetworks.com/blog/serving-compressed-gzipped-static-files-from-amazon-s3-or-cloudfront/
            metadata.setContentEncoding("gzip");
            metadata.setContentType(persistenceBuffer.getMimeType());
            // We must setContentLength or the S3 client will re-buffer the InputStream into another memory buffer.
            metadata.setContentLength(persistenceBuffer.getSize());
            // amazonS3.putObject(directory, fileName, persistenceBuffer.getInputStream(), metadata);
            final Upload upload = transferManager.upload(directory, fileName, persistenceBuffer.getInputStream(), metadata);
            upload.addProgressListener(new UploadProgressLogger(upload));
            // Block until upload completes to avoid accumulating unlimited uploads in memory.
            upload.waitForCompletion();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

Code example source: conveyal/r5

    @Override
    public void saveData(String directory, String fileName, PersistenceBuffer persistenceBuffer) {
        try {
            ObjectMetadata metadata = new ObjectMetadata();
            // Set content encoding to gzip. This way browsers will decompress on download using native deflate code.
            // http://www.rightbrainnetworks.com/blog/serving-compressed-gzipped-static-files-from-amazon-s3-or-cloudfront/
            metadata.setContentEncoding("gzip");
            metadata.setContentType(persistenceBuffer.getMimeType());
            // We must setContentLength or the S3 client will re-buffer the InputStream into another memory buffer.
            metadata.setContentLength(persistenceBuffer.getSize());
            // amazonS3.putObject(directory, fileName, persistenceBuffer.getInputStream(), metadata);
            final Upload upload = transferManager.upload(directory, fileName, persistenceBuffer.getInputStream(), metadata);
            upload.addProgressListener(new UploadProgressLogger(upload));
            // Block until upload completes to avoid accumulating unlimited uploads in memory.
            upload.waitForCompletion();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

Code example source: ch.cern.hadoop/hadoop-aws

    up.addProgressListener(progressListener);
    try {
        up.waitForUploadResult();

Code example source: prestosql/presto

    upload.addProgressListener(createProgressListener(upload));

Code example source: org.apache.hadoop/hadoop-aws

    /**
     * Execute a PUT via the transfer manager, blocking for completion,
     * updating the metastore afterwards.
     * If the waiting for completion is interrupted, the upload will be
     * aborted before an {@code InterruptedIOException} is thrown.
     * @param putObjectRequest request
     * @param progress optional progress callback
     * @return the upload result
     * @throws InterruptedIOException if the blocking was interrupted.
     */
    @Retries.OnceRaw("For PUT; post-PUT actions are RetriesExceptionsSwallowed")
    UploadResult executePut(PutObjectRequest putObjectRequest,
        Progressable progress)
        throws InterruptedIOException {
      String key = putObjectRequest.getKey();
      UploadInfo info = putObject(putObjectRequest);
      Upload upload = info.getUpload();
      ProgressableProgressListener listener = new ProgressableProgressListener(
          this, key, upload, progress);
      upload.addProgressListener(listener);
      UploadResult result = waitForUploadCompletion(key, info);
      listener.uploadCompleted();
      // post-write actions
      finishedWrite(key, info.getLength());
      return result;
    }

Code example source: Aloisius/hadoop-s3a

    up.addProgressListener(progressListener);
    try {
        up.waitForUploadResult();
