PyTorch: Colab Pro does not provide more than 16 GB of RAM

x3naxklr · posted 2022-12-13 in Other

Today I upgraded my account to Colab Pro. It reports the RAM as:

Your runtime has 27.3 gigabytes of available RAM
    
    You are using a high-RAM runtime!

But when I start training my model, it throws the error below:

RuntimeError: CUDA out of memory. Tried to allocate 88.00 MiB (GPU 0; 15.90 GiB total capacity; 14.75 GiB already allocated; 75.75 MiB free; 14.95 GiB reserved in total by PyTorch)

My model's hyperparameters:

args_dict = dict(
    #data_dir="", # path for data files
    output_dir="", # path to save the checkpoints
    model_name_or_path='t5-large',
    tokenizer_name_or_path='t5-large',
    max_seq_length=600,
    learning_rate=3e-4,
    weight_decay=0.0,
    adam_epsilon=1e-8,
    warmup_steps=0,
    train_batch_size=4,
    eval_batch_size=4,
    num_train_epochs=2,
    gradient_accumulation_steps=16,
    n_gpu=1,
    early_stop_callback=False,
    fp_16=True, # if you want to enable 16-bit training then install apex and set this to true
    opt_level='O1', # you can find out more on optimisation levels here https://nvidia.github.io/apex/amp.html#opt-levels-and-properties
    max_grad_norm=1.0, # if you enable 16-bit training then set this to a sensible value, 0.5 is a good default
    seed=42,
)
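Note that `gradient_accumulation_steps` does not reduce per-step GPU memory; only `train_batch_size` and `max_seq_length` do. A minimal sketch of the arithmetic, using the values from the dict above:

```python
# Per-step GPU memory is driven by train_batch_size, i.e. how many
# sequences sit on the card at once; gradient accumulation only
# changes how often the optimizer steps.
train_batch_size = 4
gradient_accumulation_steps = 16

# Effective (optimizer-level) batch size with the settings above.
effective_batch_size = train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 64
```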

Colab Pro is not giving me all of this memory. My code only works with train_batch_size = 1. What is the reason for this? Any ideas?
Note: I get the same error when I run the code on Kaggle (16 GB). So what am I actually getting with Colab Pro?

mrwjdhj3 1#

Looking at your error, the 16 GB refers to the graphics card's memory, not to system RAM: the 27.3 GB that the high-RAM runtime reports is CPU RAM, while the OOM happens in the GPU's 15.90 GiB of VRAM.
As far as I know, Colab Pro gives you access to graphics cards with up to 16 GB of VRAM.
You can check the VRAM capacity by running the following code:

gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
  print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
  print('and then re-execute this cell.')
else:
  print(gpu_info)

Maybe use a batch size smaller than 4?
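The numbers in the OOM message support this reading: PyTorch's caching allocator has already reserved nearly the entire card before the failing 88 MiB allocation, so the 27.3 GB of CPU RAM never comes into play. A quick back-of-the-envelope check (values copied from the error message):

```python
# Figures taken from the RuntimeError message above (GPU 0).
total_gib = 15.90      # total card capacity
reserved_gib = 14.95   # reserved by PyTorch's caching allocator
free_mib = 75.75       # free GPU memory, in MiB

# Almost the entire card is already held by PyTorch.
reserved_fraction = reserved_gib / total_gib
print(f"reserved: {reserved_fraction:.0%}")  # 94%
print(f"free: {free_mib / 1024:.3f} GiB")    # 0.074 GiB
```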
