I just got this message when trying to run a feed-forward torch.nn.Conv2d, with the following stack trace:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-26-04bd4a00565d> in <module>
3
4 # call training function
----> 5 losses = train(D, G, n_epochs=n_epochs)
<ipython-input-24-b539315e0aa0> in train(D, G, n_epochs, print_every)
46 real_images = real_images.cuda()
47
---> 48 D_real = D(real_images)
49 d_real_loss = real_loss(D_real, True) # smoothing label 1 => 0.9
50
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
<ipython-input-14-bf68e57c25ff> in forward(self, x)
48 """
49
---> 50 x = self.leaky_relu(self.conv1(x))
51 x = self.leaky_relu(self.conv2(x))
52 x = self.leaky_relu(self.conv3(x))
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
98 def forward(self, input):
99 for module in self:
--> 100 input = module(input)
101 return input
102
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
347
348 def forward(self, input):
--> 349 return self._conv_forward(input, self.weight)
350
351 class Conv3d(_ConvNd):
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
344 _pair(0), self.dilation, self.groups)
345 return F.conv2d(input, weight, self.bias, self.stride,
--> 346 self.padding, self.dilation, self.groups)
347
348 def forward(self, input):
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
Running nvidia-smi shows:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 770 On | 00000000:01:00.0 N/A | N/A |
| 38% 50C P8 N/A / N/A | 624MiB / 4034MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
I'm using Python 3.7 and PyTorch 1.5; the GPU is an Nvidia GeForce GTX 770, running on Ubuntu 18.04.2. I haven't found this error message discussed anywhere. Does it ring any bells?
Thanks in advance.
7 Answers
0ejtzxu1 · 1#
According to this answer to a similar question for TensorFlow, it can occur because the VRAM limit was hit (which is rather unintuitive given the error message).
For my PyTorch model training, reducing the batch size helped. You can try that, or reduce your model size so it consumes less VRAM.
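As a minimal sketch of that advice: the batch size is set on the DataLoader, so halving it roughly halves the activation memory each forward pass needs. The dataset below is a hypothetical stand-in, since the question's training data is not shown.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the real training set: 256 RGB images of 32x32.
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 2, (256,))
dataset = TensorDataset(images, labels)

# Halving the batch size (e.g. 128 -> 64) roughly halves per-step activation
# memory, which is often enough to get past the cuDNN/VRAM error.
loader = DataLoader(dataset, batch_size=64, shuffle=True)

real_images, _ = next(iter(loader))
print(real_images.shape)  # torch.Size([64, 3, 32, 32])
```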
y3bcpkx1 · 2#
This error can be tricky. In some specific cases, running out of memory is also reported with this error message.
wixjitnu · 3#
I got this error while testing inference speed on different EC2 machine types. Digging through the logs turned up the cause. Lesson learned: do not use g2.XX instance types for PyTorch models; the g3.XX and p-series instances work fine.
vltsax25 · 4#
Check the number of classes you assign in your code. This error appeared for me when I tried to run my code on CIFAR-100 instead of CIFAR-10 but forgot to change num_classes from 10 to 100.
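A small sketch of that failure mode, using a hypothetical classifier head (the answer's actual model is not shown): the output layer must match the dataset, because labels greater than or equal to the head's output size break the loss, and on the GPU that can surface as an opaque CUDA/cuDNN error instead of a clear index error.

```python
import torch
import torch.nn as nn

# Hypothetical classifier head; 512 is an assumed feature width.
# CIFAR-10 has 10 classes, CIFAR-100 has 100: when switching datasets,
# num_classes must change too, or labels >= out_features crash the loss.
num_classes = 100
head = nn.Linear(512, num_classes)

features = torch.randn(8, 512)
labels = torch.randint(0, num_classes, (8,))

logits = head(features)
print(logits.shape)  # torch.Size([8, 100])
loss = nn.functional.cross_entropy(logits, labels)
```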
ljo96ir5 · 5#
A better way to debug this is to run the computation on the CPU, where it will fail with an actual, informative error message.
1. In my case it was a class mismatch
2. I was running a segmentation model, and my masks had a different number of classes than the class indices the model predicted
3. My transforms on the masks were wrong
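The CPU-debugging trick above can be sketched as a small helper; the model below is a stand-in, since the answer's segmentation model is not shown. CUDA errors are reported asynchronously and are often vague, while the same bug on the CPU usually raises a clear Python exception.

```python
import torch

def debug_on_cpu(model, batch):
    """Run one forward pass on the CPU to surface the real error message.

    The same bug that produces an opaque cuDNN error on the GPU typically
    raises a readable exception (e.g. an index error for a class mismatch)
    when run on the CPU.
    """
    model_cpu = model.cpu()
    batch_cpu = batch.cpu()
    with torch.no_grad():
        return model_cpu(batch_cpu)

# Minimal usage with a stand-in model.
model = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
out = debug_on_cpu(model, torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 8, 32, 32])
```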
0s7z1bwu · 6#
The problem is that you are writing your forward pass with torch.nn modules but returning the result of the functional
F.conv2d()
. Change the code to use an
nn.Conv2d()
module instead.
This may help you more - https://pytorch.org/docs/stable/nn.html?highlight=conv2d#torch.nn.Conv2d
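The pattern that answer suggests, sketched with hypothetical layer sizes (the question's actual discriminator is not shown): declare nn.Conv2d layers in __init__ and call them in forward, rather than calling F.conv2d with hand-managed weights.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Hypothetical sketch: nn.Conv2d modules own their weights and bias,
    so forward just calls them, with no manual F.conv2d bookkeeping."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1)
        self.leaky_relu = nn.LeakyReLU(0.2)

    def forward(self, x):
        # Module call; parameters are registered and tracked automatically.
        return self.leaky_relu(self.conv1(x))

D = Discriminator()
out = D(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 64, 16, 16])
```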
juzqafwq · 7#
This has happened to me a few times. It may be a bit basic, but: shutting down the other running kernels helped me a lot. After closing them, GPU memory was almost fully recovered and the problem disappeared.
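After closing the other kernels, you can verify from Python that the memory actually came back before retrying training. A minimal sketch, guarded so it also runs on a machine without a GPU (note that torch.cuda.mem_get_info requires a reasonably recent PyTorch):

```python
import torch

# Check how much GPU memory is actually free before retrying training.
if torch.cuda.is_available():
    torch.cuda.empty_cache()  # release cached blocks back to the driver
    free, total = torch.cuda.mem_get_info()
    print(f"free: {free / 1e9:.2f} GB of {total / 1e9:.2f} GB")
else:
    print("no CUDA device visible")
```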