I am trying to write DropConnect code for Conv2D and TransposeConv2D layers. I created it following the tutorial at https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/nn/weight_drop.html.
import torch
from torch.nn import Parameter


def _weight_drop(module, weights, dropout):
    for name_w in weights:
        w = getattr(module, name_w)
        del module._parameters[name_w]
        module.register_parameter(name_w + '_raw', Parameter(w))

    original_module_forward = module.forward

    def forward(*args, **kwargs):
        for name_w in weights:
            raw_w = getattr(module, name_w + '_raw')
            w = torch.nn.functional.dropout(raw_w, p=dropout, training=module.training)
            setattr(module, name_w, w)
        return original_module_forward(*args, **kwargs)

    setattr(module, 'forward', forward)


class WeightDropConv2d(torch.nn.Conv2d):
    def __init__(self, *args, weight_dropout=0.0, **kwargs):
        super().__init__(*args, **kwargs)
        weights = ['weight']
        _weight_drop(self, weights, weight_dropout)


class WeightDropConvTranspose2d(torch.nn.ConvTranspose2d):
    def __init__(self, *args, weight_dropout=0.0, **kwargs):
        super().__init__(*args, **kwargs)
        weights = ['weight']
        _weight_drop(self, weights, weight_dropout)
The Torch and CUDA versions I am using:
torch.__version__: 1.1.0
torch.version.cuda: 9.0.176
I get the following error in the second epoch:
Traceback (most recent call last):
  File "dropconnect.py", line 110, in <module>
    out = model(image)
  File "/home/sbhand2s/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "dropconnect.py", line 73, in forward
    out = self.c1(x)
  File "/home/sbhand2s/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "dropconnect.py", line 34, in forward
    setattr(module, name_w, w)
  File "/home/sbhand2s/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 558, in __setattr__
    .format(torch.typename(value), name))
TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)
The error occurs in the second epoch, after I switch from .eval() back to .train(). If I never call .eval(), the error does not occur.
Any suggestions as to why this error occurs, or as to a better way to implement DropConnect?
Code to reproduce the issue:
from collections import OrderedDict
import torch
from torch import nn
layers = []
layers.append(("conv_1", WeightDropConv2d(1,3,3,1,1,weight_dropout=0.5)))
layers.append(("conv_2", WeightDropConv2d(3,3,3,1,1,weight_dropout=0.5)))
layers.append(("conv_3", WeightDropConv2d(3,1,3,1,1,weight_dropout=0.5)))
model = nn.Sequential(OrderedDict(layers))
pred = model(torch.randn([1,1,3,3]))
model.eval()
pred = model(torch.randn([1,1,3,3]))
model.train()
pred = model(torch.randn([1,1,3,3]))
3 Answers

Answer 1
I also couldn't get this approach to work (although with a different error), but here is a simpler method that seems to work (tested in PyTorch 1.4):
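A minimal sketch of what such a simpler approach could look like (an illustration rather than the answerer's original snippet; the class name DropConnectConv2d is hypothetical). The idea is to apply dropout to a temporary copy of the weight inside forward and call F.conv2d directly, so self.weight itself is never reassigned:

import torch
import torch.nn.functional as F


class DropConnectConv2d(torch.nn.Conv2d):
    """Sketch: Conv2d whose weights are randomly dropped at train time."""

    def __init__(self, *args, weight_dropout=0.0, **kwargs):
        super().__init__(*args, **kwargs)
        self.weight_dropout = weight_dropout

    def forward(self, x):
        # Mask a temporary copy of the weight; self.weight stays a Parameter,
        # so nothing is ever re-registered on the module.
        w = F.dropout(self.weight, p=self.weight_dropout, training=self.training)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

A transposed-convolution variant can be written the same way by calling F.conv_transpose2d instead of F.conv2d.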
Answer 2
After hitting a similar roadblock, I read this post: https://tomaxent.com/2018/01/15/DropConnect-Implementation-in-Python-and-TensorFlow-Repost/. It points out that the only difference between a Dropout and a DropConnect implementation is whether the masked weights are scaled up (to keep the expected sum unchanged). Although the post talks about TensorFlow, it seems to me that it applies to PyTorch as well. So DropConnect in PyTorch would look something like this:
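As a hedged illustration of that difference (not the answerer's original snippet), a DropConnect-style weight mask could be written as a small helper like the hypothetical drop_connect_weight below, which zeroes weights without the 1/(1-p) rescaling that dropout applies:

import torch


def drop_connect_weight(weight, p, training):
    # DropConnect: zero each individual weight with probability p.
    # Unlike torch.nn.functional.dropout, the surviving weights are NOT
    # rescaled by 1/(1-p) -- that is the difference the linked post highlights.
    if not training or p == 0.0:
        return weight
    mask = (torch.rand_like(weight) > p).to(weight.dtype)
    return weight * mask

For example, w = drop_connect_weight(conv.weight, 0.5, training=True) could then be passed to F.conv2d in a module's forward.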
Answer 3
Via the drop_connect implementation in efficientnet from efficient-pytorch:
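That function is roughly as follows (reconstructed here as a sketch from the EfficientNet-PyTorch repository, so double-check against the source):

import torch


def drop_connect(inputs, p, training):
    """Randomly drop whole samples in the batch and rescale the survivors."""
    if not training:
        return inputs
    batch_size = inputs.shape[0]
    keep_prob = 1 - p
    # One random value per sample; floor() turns it into a 0/1 mask.
    random_tensor = keep_prob + torch.rand([batch_size, 1, 1, 1],
                                           dtype=inputs.dtype, device=inputs.device)
    binary_tensor = torch.floor(random_tensor)
    output = inputs / keep_prob * binary_tensor
    return output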
I think it is more like x = self.dropout(x) / (1 - self.p), because output = inputs / keep_prob * binary_tensor is equal to output = (inputs * binary_tensor) / (1 - self.p); in other words, the code randomly sets values to 0 and then scales the rest up proportionally.
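A quick numerical check of that rearrangement (with arbitrary example values, just to confirm the two expressions are identical):

import torch

p = 0.5
keep_prob = 1 - p
inputs = torch.randn(4, 3, 2, 2)
binary_tensor = torch.floor(keep_prob + torch.rand(4, 1, 1, 1))

a = inputs / keep_prob * binary_tensor
b = (inputs * binary_tensor) / (1 - p)
assert torch.allclose(a, b)  # both orderings give the same result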