Spring Boot os::commit_memory failed; error='Not enough space' (errno=12)

mrfwxfqh · posted on 2023-05-17 in Spring

I am running a Spring Boot application on an Ubuntu 20 EC2 machine, where I create roughly 200,000 threads to write data to Kafka. It fails repeatedly with the following error:

[138.470s][warning][os,thread] Attempt to protect stack guard pages failed (0x00007f828d055000-0x00007f828d059000).
[138.470s][warning][os,thread] Attempt to deallocate stack guard pages failed.
OpenJDK 64-Bit Server VM warning: [138.472s][warning][os,thread] Failed to start thread - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
INFO: os::commit_memory(0x00007f828cf54000, 16384, 0) failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 16384 bytes for committing reserved memory.

I tried increasing my EC2 instance's memory to 64 GB, but that did not help. I monitor the process with docker stats and htop, and when its memory footprint reaches around 10 GB it fails with the error above.
I also tried increasing the process's heap size and maximum memory:

docker run --rm --name test -e JAVA_OPTS=-Xmx64g -v /workspace/logs/test:/logs -t test:master
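Note that a JAVA_OPTS environment variable only takes effect if the image's entrypoint script actually passes it to the JVM. One way to check the effective heap limit inside the running container (assuming the image ships a full JDK with jcmd and the Java process is PID 1; adjust the container name and PID to your setup) is something like:

    # Show the flags of the running JVM and filter for the max heap size
    docker exec test jcmd 1 VM.flags | tr ' ' '\n' | grep MaxHeapSize

If the value does not match the -Xmx you set, the option never reached the JVM.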

Below is my code:

final int LIMIT = 200000;
// Fixed pool of LIMIT platform threads: one OS thread (and stack) per task
ExecutorService executorService = Executors.newFixedThreadPool(LIMIT);
final CountDownLatch latch = new CountDownLatch(LIMIT);
for (int i = 1; i <= LIMIT; i++) {
    final int counter = i;
    executorService.execute(() -> {
        try {
            kafkaTemplate.send("rf-data", Integer.toString(123), "asdsadsd");
            kafkaTemplate.send("rf-data", Integer.toString(123), "zczxczxczxc");
        } catch (Exception e) {
            logger.error("Error sending data: ", e);
        } finally {
            // Count down even on failure so await() cannot hang forever
            latch.countDown();
        }
    });
}
try {
    latch.await();
} catch (InterruptedException e) {
    logger.error("Error awaiting latch", e);
}
executorService.shutdown();

gab6jxml 1#

You can change the container's memory limit in the docker-compose.yml file. Try raising that limit in the service's configuration, as shown below:

...
    deploy:
      resources:
        limits:
          memory: <memory size>
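For context, here is a minimal sketch of where that limit sits in a complete docker-compose.yml. The service name and image are taken from the docker run command above; the 32g value is only a placeholder to be sized to the host:

    version: "3.8"
    services:
      test:
        image: test:master
        deploy:
          resources:
            limits:
              memory: 32g   # placeholder; size to the host's capacity

Depending on the Compose version, the deploy section may only be applied outside swarm mode when the classic docker-compose binary is run with the --compatibility flag; Compose V2 (docker compose) applies it directly.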

More information here.
