I am trying to recreate these: Hugging Face: Question Answering Task and Hugging Face: Question Answering NLP Course.
I am running into this ResourceExhaustedError at the **model.fit()** step.
---------------------------------------------------------------------------
ResourceExhaustedError Traceback (most recent call last)
Cell In[14], line 1
----> 1 model.fit(x=tf_train_set, batch_size=16, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
(a long list of stack frames omitted here)
Node: 'tf_distil_bert_for_question_answering/distilbert/transformer/layer_._4/attention/dropout_14/dropout/random_uniform/RandomUniform'
OOM when allocating tensor with shape[16,12,384,384] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node tf_distil_bert_for_question_answering/distilbert/transformer/layer_._4/attention/dropout_14/dropout/random_uniform/RandomUniform}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[Op:__inference_train_function_9297]
I have already tried lowering the batch_size: `model.fit(x=tf_train_set, batch_size=16, validation_data=tf_validation_set, epochs=3, callbacks=[callback])`
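If `tf_train_set` was built with `model.prepare_tf_dataset(...)` as in the linked tutorials, the dataset is already batched, and the `batch_size` argument to `fit` will not re-batch it; the batch size has to be lowered where the dataset is created. A minimal sketch under that assumption (`tokenized_squad` and `data_collator` follow the tutorial's naming and are assumed here):

```python
# Rebuild the datasets with a smaller batch size. For a pre-batched
# tf.data.Dataset, the effective batch size is fixed here, not in fit().
tf_train_set = model.prepare_tf_dataset(
    tokenized_squad["train"],
    shuffle=True,
    batch_size=4,  # down from 16; activation memory scales linearly with this
    collate_fn=data_collator,
)
tf_validation_set = model.prepare_tf_dataset(
    tokenized_squad["test"],
    shuffle=False,
    batch_size=4,
    collate_fn=data_collator,
)

model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
```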
I have also tried limiting GPU memory growth (Limiting GPU memory growth).
Here are the Colab notebooks: Colab: Question Answering Task and Colab: Question Answering NLP Course.
2 Answers
mgdq6dx11#
This means your GPU's memory cannot accommodate your batch size or your input data size, so try reducing the batch size or the input data size.
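To make that concrete: the tensor that fails to allocate, shape [16,12,384,384], is a per-layer attention score matrix in DistilBERT (batch 16, 12 heads, 384x384 scores per head), so memory grows linearly with batch size and quadratically with sequence length. A minimal sketch of shrinking the input side, assuming the tokenization step from the linked tutorials; the max_length of 256 is illustrative, and the tutorials' answer-span labeling code is omitted:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def preprocess_function(examples):
    # A 256-token window needs roughly (256/384)^2, about 44%, of the
    # attention memory that a 384-token window does.
    return tokenizer(
        examples["question"],
        examples["context"],
        max_length=256,            # the tutorials use 384
        truncation="only_second",  # truncate the context, never the question
        padding="max_length",
    )
```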
v1l68za42#
I added these lines at the beginning:
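Presumably the standard TensorFlow memory-growth setup referenced in the question; this is a sketch of those lines, not the answerer's verified code:

```python
import tensorflow as tf

# Allocate GPU memory on demand instead of reserving it all up front.
# This must run before any GPU has been initialized.
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```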