PyTorch RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

wlp8pajw asked on 2022-12-04

I am using a PyTorch UNet model that takes an image as input and the mask of that image as the label, and I am training it on my dataset. The UNet model was obtained from elsewhere, and I am using cross entropy loss as the loss function, but I get this dimension out of range error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-358-fa0ef49a43ae> in <module>()
     16 for epoch in range(0, num_epochs):
     17     # train for one epoch
---> 18     curr_loss = train(train_loader, model, criterion, epoch, num_epochs)
     19 
     20     # store best loss and save a model checkpoint

<ipython-input-356-1bd6c6c281fb> in train(train_loader, model, criterion, epoch, num_epochs)
     16         # measure loss
     17         print (outputs.size(),labels.size())
---> 18         loss = criterion(outputs, labels)
     19         losses.update(loss.data[0], images.size(0))
     20 

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

<ipython-input-355-db66abcdb074> in forward(self, logits, targets)
      9         probs_flat = probs.view(-1)
     10         targets_flat = targets.view(-1)
---> 11         return self.crossEntropy_loss(probs_flat, targets_flat)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    599         _assert_no_grad(target)
    600         return F.cross_entropy(input, target, self.weight, self.size_average,
--> 601                                self.ignore_index, self.reduce)
    602 
    603 

/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce)
   1138         >>> loss.backward()
   1139     """
-> 1140     return nll_loss(log_softmax(input, 1), target, weight, size_average, ignore_index, reduce)
   1141 
   1142 

/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in log_softmax(input, dim, _stacklevel)
    784     if dim is None:
    785         dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
--> 786     return torch._C._nn.log_softmax(input, dim)
    787 
    788 

RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

Part of my code is shown below:

class crossEntropy(nn.Module):
    def __init__(self, weight = None, size_average = True):
        super(crossEntropy, self).__init__()
        self.crossEntropy_loss = nn.CrossEntropyLoss(weight, size_average)
        
    def forward(self, logits, targets):
        probs = F.sigmoid(logits)
        probs_flat = probs.view(-1)
        targets_flat = targets.view(-1)
        return self.crossEntropy_loss(probs_flat, targets_flat)

class UNet(nn.Module):
    def __init__(self, imsize):
        super(UNet, self).__init__()
        self.imsize = imsize

        self.activation = F.relu
        
        self.pool1 = nn.MaxPool2d(2)
        self.pool2 = nn.MaxPool2d(2)
        self.pool3 = nn.MaxPool2d(2)
        self.pool4 = nn.MaxPool2d(2)
        self.conv_block1_64 = UNetConvBlock(4, 64)
        self.conv_block64_128 = UNetConvBlock(64, 128)
        self.conv_block128_256 = UNetConvBlock(128, 256)
        self.conv_block256_512 = UNetConvBlock(256, 512)
        self.conv_block512_1024 = UNetConvBlock(512, 1024)

        self.up_block1024_512 = UNetUpBlock(1024, 512)
        self.up_block512_256 = UNetUpBlock(512, 256)
        self.up_block256_128 = UNetUpBlock(256, 128)
        self.up_block128_64 = UNetUpBlock(128, 64)

        self.last = nn.Conv2d(64, 2, 1)

    def forward(self, x):
        block1 = self.conv_block1_64(x)
        pool1 = self.pool1(block1)

        block2 = self.conv_block64_128(pool1)
        pool2 = self.pool2(block2)

        block3 = self.conv_block128_256(pool2)
        pool3 = self.pool3(block3)

        block4 = self.conv_block256_512(pool3)
        pool4 = self.pool4(block4)

        block5 = self.conv_block512_1024(pool4)

        up1 = self.up_block1024_512(block5, block4)

        up2 = self.up_block512_256(up1, block3)

        up3 = self.up_block256_128(up2, block2)

        up4 = self.up_block128_64(up3, block1)

        return F.log_softmax(self.last(up4))

ubof19bj1#

Based on your code:

probs_flat = probs.view(-1)
targets_flat = targets.view(-1)
return self.crossEntropy_loss(probs_flat, targets_flat)

You are passing two 1D tensors to nn.CrossEntropyLoss, but according to the documentation, it expects:

Input: (N,C) where C = number of classes
Target: (N) where each value is 0 <= targets[i] <= C-1
Output: scalar. If reduce is False, then (N) instead.

I believe this is the cause of the problem you are encountering.
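
To make that shape contract concrete, here is a minimal sketch (with made-up values) showing that the flattened 1D input reproduces the error, while an (N, C) input paired with a 1D class-index target works:

import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()

# A flattened 1D input reproduces the error: cross_entropy calls
# log_softmax(input, dim=1), and a 1D tensor has no dimension 1.
probs_flat = torch.randn(6)                     # shape (6,), like probs.view(-1)
targets_flat = torch.zeros(6, dtype=torch.long)
try:
    loss(probs_flat, targets_flat)
except (RuntimeError, IndexError) as e:
    print(e)  # dimension out of range (expected to be in range of [-1, 0], but got 1)

# An (N, C) input with an (N,) class-index target works.
logits = torch.randn(3, 2)                      # N=3 samples, C=2 classes
targets = torch.tensor([0, 1, 1])               # one class index per sample
print(loss(logits, targets))                    # scalar loss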


p4rjhz4m2#

The problem is that, in your classification problem, you are passing the wrong arguments to torch.nn.CrossEntropyLoss.
Specifically, in this line

---> 18         loss = criterion(outputs, labels)

the argument labels is not what CrossEntropyLoss expects. labels should be a 1D array whose length is the batch size, matching outputs in your code. The value of each element should be the zero-based ID of the target class.
Here is an example.
Suppose your batch size is B=2 and each data instance is assigned one of K=3 classes.
Also suppose that the last layer of your neural network outputs the following raw logits (the values before softmax) for each of the two instances in the batch. These logits and the true label for each data instance are shown below.

Logits (before softmax)
               Class 0  Class 1  Class 2    True class
               -------  -------  -------    ----------
Instance 0:        0.5      1.5      0.1             1
Instance 1:        2.2      1.3      1.7             2

To call CrossEntropyLoss correctly, you need two variables:

  • an input of shape (B, K) containing the logit values
  • a target of shape (B,) containing the indices of the true classes

Here is how to use CrossEntropyLoss correctly with the values above. I am using torch.__version__ 1.9.0.

import torch

yhat = torch.Tensor([[0.5, 1.5, 0.1], [2.2, 1.3, 1.7]])
print(yhat)
# tensor([[0.5000, 1.5000, 0.1000],
#         [2.2000, 1.3000, 1.7000]])

y = torch.Tensor([1, 2]).to(torch.long)
print(y)
# tensor([1, 2])

loss = torch.nn.CrossEntropyLoss()
cel = loss(input=yhat, target=y)
print(cel)
# tensor(0.8393)

My guess is that the error you originally received,

RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

most likely occurred because you were trying to compute the cross entropy loss for a single data instance whose target is one-hot encoded. Your data probably looks like this:
Logits (before softmax)
               Class 0  Class 1  Class 2  True class 0 True class 1 True class 2
               -------  -------  -------  ------------ ------------ ------------
Instance 0:        0.5      1.5      0.1             0            1            0

Here is the code that represents the data above:

import torch

yhat = torch.Tensor([0.5, 1.5, 0.1])
print(yhat)
# tensor([0.5000, 1.5000, 0.1000])

y = torch.Tensor([0, 1, 0]).to(torch.long)
print(y)
# tensor([0, 1, 0])

loss = torch.nn.CrossEntropyLoss()
cel = loss(input=yhat, target=y)
print(cel)

At this point, I get the following error:

---> 10 cel = loss(input=yhat, target=y)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

In my opinion, that error message is incomprehensible and not actionable.
See also this similar question about TensorFlow:
What are logits? What is the difference between softmax and softmax_cross_entropy_with_logits?
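
For the segmentation setup in the original question specifically, it may be worth noting that nn.CrossEntropyLoss also accepts K-dimensional input directly, so no flattening is needed, and no sigmoid either, since the loss applies log_softmax internally. A minimal sketch, assuming 2 classes and made-up 4x4 images:

import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()

# Spatial case: raw logits of shape (N, C, H, W), integer mask of shape (N, H, W)
logits = torch.randn(2, 2, 4, 4)           # batch of 2, C=2 classes, 4x4 maps
masks = torch.randint(0, 2, (2, 4, 4))     # per-pixel class index in {0, 1}, int64
print(loss(logits, masks))                 # scalar loss averaged over all pixels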


0s7z1bwu3#

I had the same problem, and since this thread does not provide any clear answer, I will post my solution despite the age of the post.
In the forward() method, you also need to return x. It needs to look like this:

return F.log_softmax(self.last(up4)), x

busg9geu4#

I replaced

return self.crossEntropy_loss(probs_flat, targets_flat)

with

return self.crossEntropy_loss(torch.unsqueeze(probs_flat, 0), torch.unsqueeze(targets_flat, 0))
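
For reference, a shape-only sketch (M = 12 is a made-up number of flattened elements) of what the unsqueeze(0) calls produce, so it can be checked against the (N, C) input / (N,) target contract described above:

import torch

M = 12
probs_flat = torch.randn(M)                     # shape (M,), as produced by .view(-1)
targets_flat = torch.zeros(M, dtype=torch.long) # shape (M,)

# unsqueeze(0) prepends a dimension of size 1
print(torch.unsqueeze(probs_flat, 0).shape)     # torch.Size([1, 12])
print(torch.unsqueeze(targets_flat, 0).shape)   # torch.Size([1, 12])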
