DropConnect implementation in PyTorch

ac1kyiln · asked on 2023-06-23

I am trying to write DropConnect code for Conv2d and ConvTranspose2d layers. I created it by following the weight_drop code at https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/nn/weight_drop.html.

import torch
from torch.nn import Parameter

def _weight_drop(module, weights, dropout):
    """Move each listed Parameter to '<name>_raw' and recompute a
    dropped-out copy of it on every forward pass."""
    for name_w in weights:
        w = getattr(module, name_w)
        del module._parameters[name_w]
        module.register_parameter(name_w + '_raw', Parameter(w))
    original_module_forward = module.forward

    def forward(*args, **kwargs):
        for name_w in weights:
            raw_w = getattr(module, name_w + '_raw')
            w = torch.nn.functional.dropout(raw_w, p=dropout, training=module.training)
            setattr(module, name_w, w)
        return original_module_forward(*args, **kwargs)
    setattr(module, 'forward', forward)
        
class WeightDropConv2d(torch.nn.Conv2d):
    def __init__(self, *args, weight_dropout=0.0, **kwargs):
        super().__init__(*args, **kwargs)
        weights = ['weight']
        _weight_drop(self, weights, weight_dropout)
        
class WeightDropConvTranspose2d(torch.nn.ConvTranspose2d):
    def __init__(self, *args, weight_dropout=0.0, **kwargs):
        super().__init__(*args, **kwargs)
        weights = ['weight']
        _weight_drop(self, weights, weight_dropout)

The Torch and CUDA versions I am using:

  • torch.__version__: 1.1.0
  • torch.version.cuda: 9.0.176

I get the following error in the second epoch:

Traceback (most recent call last):
  File "dropconnect.py", line 110, in <module>
    out = model(image)
  File "/home/sbhand2s/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "dropconnect.py", line 73, in forward
    out = self.c1(x)
  File "/home/sbhand2s/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "dropconnect.py", line 34, in forward
    setattr(module, name_w, w)
  File "/home/sbhand2s/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 558, in __setattr__
    .format(torch.typename(value), name))
TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)

The error occurs in the second epoch, when I switch back from .eval() to .train(). If I never call .eval(), the error does not occur.
Any suggestions on why this error happens, or on a better way to implement DropConnect?
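
My suspicion after re-reading the traceback: with training=False, F.dropout returns raw_w unchanged, and since raw_w is still an nn.Parameter, the eval-mode setattr silently re-registers it as the parameter 'weight'; the next train-mode forward then tries to assign a plain tensor to that registered parameter, which matches the TypeError above. If that is right, stripping the Parameter wrapper before the assignment should avoid it (untested sketch of the modified inner forward):

def forward(*args, **kwargs):
    for name_w in weights:
        raw_w = getattr(module, name_w + '_raw')
        w = torch.nn.functional.dropout(raw_w, p=dropout, training=module.training)
        # in eval mode F.dropout returns raw_w itself, an nn.Parameter, and
        # nn.Module.__setattr__ would re-register it under name_w; assigning
        # a plain tensor keeps the attribute from becoming a Parameter again
        if isinstance(w, Parameter):
            w = w.data
        setattr(module, name_w, w)
    return original_module_forward(*args, **kwargs)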
Code to reproduce the issue:

from collections import OrderedDict
import torch
from torch import nn

# WeightDropConv2d as defined above
layers = []
layers.append(("conv_1", WeightDropConv2d(1, 3, 3, 1, 1, weight_dropout=0.5)))
layers.append(("conv_2", WeightDropConv2d(3, 3, 3, 1, 1, weight_dropout=0.5)))
layers.append(("conv_3", WeightDropConv2d(3, 1, 3, 1, 1, weight_dropout=0.5)))

model = nn.Sequential(OrderedDict(layers))

pred = model(torch.randn([1, 1, 3, 3]))  # train mode: works

model.eval()
pred = model(torch.randn([1, 1, 3, 3]))  # works

model.train()
pred = model(torch.randn([1, 1, 3, 3]))  # raises the TypeError above

gfttwv5a1#

I couldn't get this approach to work either (though with a different error), but here is a simpler approach that seems to work:

import torch
import torch.nn as nn
import torch.nn.functional as F

for i in range(num_batches):

    # swap every weight for a dropped-out copy, keeping the originals
    orig_params = []
    with torch.no_grad():  # in-place copy_ on leaf Parameters needs no_grad
        for n, p in model.named_parameters():
            orig_params.append(p.detach().clone())
            # F.dropout scales survivors by 1/(1 - p); multiplying by
            # (1 - drop_prob) undoes that, leaving a pure random mask
            p.copy_(F.dropout(p.data, p=drop_prob) * (1 - drop_prob))

    output = model(input)
    loss = nn.CrossEntropyLoss()(output, label)
    optimizer.zero_grad()
    loss.backward()

    # restore the original weights before applying the update
    with torch.no_grad():
        for orig_p, (n, p) in zip(orig_params, model.named_parameters()):
            p.copy_(orig_p)

    optimizer.step()

(Tested in PyTorch 1.4.)
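
The loop assumes model, input, label, optimizer, drop_prob and num_batches are already defined. A hypothetical setup along these lines makes it run end to end (the model, sizes, and names here are arbitrary placeholders, not part of the original answer):

import torch
import torch.nn as nn

drop_prob, num_batches = 0.5, 10
model = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.Flatten(), nn.Linear(4 * 8 * 8, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
input = torch.randn(2, 1, 8, 8)
label = torch.randint(0, 10, (2,))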


3ks5zfa02#

After running into a similar roadblock, I read this post: https://tomaxent.com/2018/01/15/DropConnect-Implementation-in-Python-and-TensorFlow-Repost/. It points out that the only difference between the Dropout and DropConnect implementations is whether the surviving entries of the masked matrix are scaled up (to preserve the expected sum). Although the post talks about TensorFlow, it seems to me the same idea applies to PyTorch, so DropConnect in PyTorch would look like this:

self.dropout = torch.nn.Dropout(p=self.p)
...
def forward(self, x):
    ...
    # nn.Dropout rescales survivors by 1/(1 - p); multiplying by (1 - p)
    # undoes that, leaving a pure random mask of x
    x = self.dropout(x) * (1 - self.p)
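
Wrapped up as a self-contained module, the same idea might look like this (my sketch; the class name is a placeholder):

import torch

class DropConnectActivation(torch.nn.Module):
    def __init__(self, p=0.5):
        super().__init__()
        self.p = p
        self.dropout = torch.nn.Dropout(p=p)

    def forward(self, x):
        # nn.Dropout zeroes entries and scales survivors by 1/(1 - p);
        # multiplying by (1 - p) cancels that, leaving a pure random mask
        return self.dropout(x) * (1 - self.p)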

x8diyxa73#

From the drop_connect implementation in the efficientnet code of efficient-pytorch:

import torch

def drop_connect(inputs, p, training):
    """Drop connect.

    Args:
        inputs (tensor: BCWH): Input of this structure.
        p (float: 0.0~1.0): Probability of drop connection.
        training (bool): The running mode.

    Returns:
        output: Output after drop connection.
    """
    assert p >= 0 and p <= 1, 'p must be in range of [0,1]'

    if not training:
        return inputs

    batch_size = inputs.shape[0]
    keep_prob = 1 - p

    # generate a per-sample binary mask (p for 0, 1-p for 1)
    random_tensor = keep_prob
    random_tensor += torch.rand([batch_size, 1, 1, 1], dtype=inputs.dtype, device=inputs.device)
    binary_tensor = torch.floor(random_tensor)

    # scale the kept samples up by 1 / keep_prob to preserve the expectation
    output = inputs / keep_prob * binary_tensor
    return output

I think this is closer to plain x = self.dropout(x) (which already scales up) than to x = self.dropout(x) * (1 - self.p): output = inputs / keep_prob * binary_tensor equals output = (inputs * binary_tensor) / (1 - p), i.e. the code randomly sets entries to 0 and then scales the survivors up, which is exactly the 1/(1 - p) rescaling torch.nn.Dropout performs internally. The only difference is that the mask here is per-sample rather than per-element.
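
A quick numeric check of that reading (my own snippet; it assumes the drop_connect function above is in scope):

import torch

x = torch.ones(8, 1, 2, 2)
out = drop_connect(x, p=0.5, training=True)
# the mask has shape [batch, 1, 1, 1], so whole samples are kept or dropped;
# kept samples are scaled by 1 / keep_prob = 2.0, dropped ones are all zero
print(out[:, 0, 0, 0])  # each entry is either 0.0 or 2.0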
