torch distributed.init out of memory


Restrict the visible GPUs via an environment variable:

import os
import torch

# Must be set before CUDA is first initialized in the process;
# the list is comma-separated, with no spaces
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3"

local_rank = 0
torch.cuda.set_device(local_rank)

By default, cuda:0 refers to the first physical GPU (device 0). But once CUDA_VISIBLE_DEVICES is set, cuda:0 refers to the first GPU listed in CUDA_VISIBLE_DEVICES, so in the example above cuda:0 is physical GPU 1.
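
A quick way to confirm the remapping (a minimal sketch, assuming the machine has at least four GPUs so that devices 1-3 exist):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3"  # must happen before the first CUDA call

import torch
print(torch.cuda.device_count())      # 3: only the listed devices are visible
print(torch.cuda.get_device_name(0))  # reports the name of physical GPU 1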

distributed.init reports out of memory
import argparse
import logging
import os
import time

import torch
import torch.distributed as dist
import torch.nn.functional as F
import torch.utils.data.distributed


def main(args):
    try:
        # Multi-process launch: the launcher exports these variables
        world_size = int(os.environ['WORLD_SIZE'])
        rank = int(os.environ['RANK'])
        dist_url = "tcp://{}:{}".format(os.environ["MASTER_ADDR"], os.environ["MASTER_PORT"])
    except KeyError:
        # Single-process fallback; the port was truncated in the original,
        # any free local port works here
        world_size = 1
        rank = 0
        dist_url = "tcp://127.0.0.1:12345"
    # ... (the snippet is cut off here; the function would go on to call
    # dist.init_process_group with the values computed above)
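
The out-of-memory error during process-group initialization typically means that several ranks created their CUDA context on the same default device (cuda:0), filling its memory before training even starts. Pinning each process to its own GPU before initializing the process group avoids this. Below is a minimal sketch, assuming one process per GPU and the LOCAL_RANK/RANK/WORLD_SIZE variables exported by the standard PyTorch launchers; the helper name init_distributed is mine:

import os

import torch
import torch.distributed as dist


def init_distributed():
    # Values provided by torchrun / torch.distributed.launch
    rank = int(os.environ.get("RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    local_rank = int(os.environ.get("LOCAL_RANK", 0))

    # Pin this process to its own GPU *before* creating the process group,
    # so the NCCL context is not allocated on cuda:0 by every rank
    torch.cuda.set_device(local_rank)

    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    return local_rank

Launched with, e.g., torchrun --nproc_per_node=3 train.py, each of the three processes is then pinned to one of the GPUs made visible above.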
