Azure: why does BlobClient.setHttpHeaders(...) stop further processing?

j1dl9f46 asked on 2023-05-01

I am trying to upload files to an Azure Storage Account as byte[]:

public void uploadFilesInBatch(Map<Document, byte[]> documents) throws InterruptedException, IOException {
//        int numThreads = Runtime.getRuntime().availableProcessors();
    int numThreads = 6;

    ExecutorService executorService = Executors.newFixedThreadPool(numThreads);

    List<Runnable> tasks = new ArrayList<>();

    TikaConfig config = TikaConfig.getDefaultConfig();

    for (Map.Entry<Document, byte[]> entry : documents.entrySet()) {
        MediaType mediaType = config.getMimeRepository().detect(new ByteArrayInputStream(entry.getValue()), new Metadata());
        tasks.add(() -> {
            String fileName = entry.getKey().getFileName();
            BlobClient blobClient = blobContainerClient.getBlobClient(fileName);
            // synchronized (blobClient) {
                blobClient.setHttpHeaders(new BlobHttpHeaders().setContentType(mediaType.getType()));
//                blobClient.setHttpHeaders(new BlobHttpHeaders().setContentType(mediaType.toString()));
                blobClient.upload(new ByteArrayInputStream(entry.getValue()));
            // }
        });
    }

    for (Runnable task : tasks) {
        executorService.submit(task);
    }

    executorService.shutdown();
    executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
}

Setting the HttpHeaders appears to work, but the upload statement right after it is never reached, and I don't understand why. If I set breakpoints on both the blobClient.setHttpHeaders() line and the blobClient.upload() line, IntelliJ only stops at blobClient.setHttpHeaders(). After that, the next task is added to the tasks ArrayList.
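One thing worth checking (an assumption on my part, since no error output is shown): tasks handed to ExecutorService.submit() that throw — for example a BlobStorageException because setHttpHeaders() is called on a blob that does not exist yet — fail silently. The exception is stored in the returned Future and only surfaces when Future.get() is called, which the code above never does, so the upload line simply looks "never reached". A minimal JDK-only sketch of that behaviour:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SwallowedExceptionDemo {
    // Submits a failing task and shows where the failure actually surfaces.
    static String runAndCollect() {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        // submit() stores any exception inside the returned Future;
        // nothing is printed and no thread visibly dies.
        Future<?> f = pool.submit((Runnable) () -> {
            throw new RuntimeException("404 BlobNotFound (simulated)");
            // anything after the throw - like blobClient.upload(...) - never runs
        });
        pool.shutdown();
        try {
            f.get();          // only here does the failure surface
            return "no error";
        } catch (ExecutionException e) {
            return e.getCause().getMessage();
        } catch (InterruptedException e) {
            return "interrupted";
        }
    }

    public static void main(String[] args) {
        System.out.println("task failed with: " + runAndCollect());
    }
}
```

Collecting the Futures returned by submit() and calling get() on each would make such failures visible in the original loop.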
I am using Java 11 and these Azure dependencies:

<azure-core.version>1.36.0</azure-core.version>
<spring-cloud-azure-starter-keyvault-secrets.version>4.3.0</spring-cloud-azure-starter-keyvault-secrets.version> <!-- Version 5.0.0 requires Java major version 61 instead of 55 -->
<azure-storage-blob.version>12.21.0</azure-storage-blob.version>
<azure-storage-blob-batch.version>12.18.0</azure-storage-blob-batch.version>

Azurite is running in a Docker container: "www.example.com"
Perhaps also of interest: the uploadFilesInBatch() method above is called from a method that returns a CompletableFuture:

public CompletableFuture<?> myMethod() {
    return CompletableFuture.supplyAsync(() -> { ... });
}
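That wrapping may be the other half of the puzzle (again an assumption, since the full caller isn't shown): an exception thrown inside CompletableFuture.supplyAsync() completes the future exceptionally and stays invisible unless someone calls join()/get() or attaches an exception handler. A sketch of that, with a throwing supplier standing in for a failing uploadFilesInBatch():

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class SupplyAsyncDemo {
    static String observe() {
        CompletableFuture<String> cf = CompletableFuture.supplyAsync(() -> {
            // stands in for uploadFilesInBatch() failing mid-way
            throw new IllegalStateException("upload failed (simulated)");
        });
        try {
            return cf.join(); // without join()/get(), the failure is never seen
        } catch (CompletionException e) {
            return e.getCause().getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(observe());
    }
}
```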

afdcj2ne (answer #1):

I solved this by using BlobParallelUploadOptions together with BlobClient.uploadWithResponse():

for (Map.Entry<Document, byte[]> entry : documents.entrySet()) {
    MediaType mediaType = config.getMimeRepository().detect(new ByteArrayInputStream(entry.getValue()), new Metadata());
    tasks.add(() -> {
        String fileName = entry.getKey().getFileName();
        BlobClient blobClient = blobContainerClient.getBlobClient(fileName);
        BlobParallelUploadOptions bpuo = new BlobParallelUploadOptions(new ByteArrayInputStream(entry.getValue()))
            .setHeaders(new BlobHttpHeaders().setContentType(mediaType.getType()));
        blobClient.uploadWithResponse(bpuo, null, Context.NONE); // (options, timeout, context); null = no timeout
    });
}
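One more observation, separate from the hang itself: Tika's MediaType.getType() returns only the primary type (e.g. "application"), while MediaType.toString() yields the full "type/subtype" string that a Content-Type header expects — probably why the commented-out toString() variant appears in the question. As a JDK-only illustration of the full form (using URLConnection.guessContentTypeFromStream in place of Tika, since it sniffs magic bytes the same way):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.net.URLConnection;

public class MimeDemo {
    static String sniffPng() {
        // first bytes of any PNG file: \211 P N G \r \n \032 \n
        byte[] pngHeader = {(byte) 0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A, 0, 0, 0, 0};
        try {
            // ByteArrayInputStream supports mark/reset, as the method requires
            return URLConnection.guessContentTypeFromStream(new ByteArrayInputStream(pngHeader));
        } catch (IOException e) {
            return "io error";
        }
    }

    public static void main(String[] args) {
        // prints the full "type/subtype" form that belongs in a Content-Type header
        System.out.println(sniffPng());
    }
}
```

With Tika, that would mean passing mediaType.toString() rather than mediaType.getType() to setContentType().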
