PyTorch cannot backpropagate two losses in a classification transformer model

Asked by 2ic8powd on 2023-01-26

For my model I am using a RoBERTa transformer model and the Trainer from the Hugging Face Transformers library.
I compute two losses: lloss is a cross-entropy loss, and dloss calculates the loss between the levels of the hierarchy.
The total loss is the sum of lloss and dloss. (Based on this.)
However, when calling total_loss.backward(), I get the error:

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed

Any idea why this happens? Can I force it to call backward only once? Below is the loss calculation part:

dloss = calculate_dloss(prediction, labels, 3)
lloss = calculate_lloss(prediction, labels, 3)
total_loss = lloss + dloss 
total_loss.backward()

def calculate_lloss(predictions, true_labels, total_level):
    '''Calculates the layer loss.
    '''

    loss_fct = nn.CrossEntropyLoss()

    lloss = 0
    for l in range(total_level):

        lloss += loss_fct(predictions[l], true_labels[l])

    return self.alpha * lloss

def calculate_dloss(predictions, true_labels, total_level):
    '''Calculate the dependence loss.
    '''

    dloss = 0
    for l in range(1, total_level):

        current_lvl_pred = torch.argmax(nn.Softmax(dim=1)(predictions[l]), dim=1)
        prev_lvl_pred = torch.argmax(nn.Softmax(dim=1)(predictions[l-1]), dim=1)

        D_l = self.check_hierarchy(current_lvl_pred, prev_lvl_pred, l)  #just a boolean tensor

        l_prev = torch.where(prev_lvl_pred == true_labels[l-1], torch.FloatTensor([0]).to(self.device), torch.FloatTensor([1]).to(self.device))
        l_curr = torch.where(current_lvl_pred == true_labels[l], torch.FloatTensor([0]).to(self.device), torch.FloatTensor([1]).to(self.device))

        dloss += torch.sum(torch.pow(self.p_loss, D_l*l_prev)*torch.pow(self.p_loss, D_l*l_curr) - 1)

    return self.beta * dloss
Answer 1 — hgtggwj0

There is nothing wrong with having one loss that is the sum of two separate losses. Here is a small proof of principle, adapted from the docs:

import torch
import numpy
from sklearn.datasets import make_blobs

class Feedforward(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super(Feedforward, self).__init__()
        self.input_size = input_size
        self.hidden_size  = hidden_size
        self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(self.hidden_size, 1)
        self.sigmoid = torch.nn.Sigmoid()
    def forward(self, x):
        hidden = self.fc1(x)
        relu = self.relu(hidden)
        output = self.fc2(relu)
        output = self.sigmoid(output)
        return output

def blob_label(y, label, loc): # assign labels
    target = numpy.copy(y)
    for l in loc:
        target[y == l] = label
    return target

x_train, y_train = make_blobs(n_samples=40, n_features=2, cluster_std=1.5, shuffle=True)
x_train = torch.FloatTensor(x_train)
y_train = torch.FloatTensor(blob_label(y_train, 0, [0]))
y_train = torch.FloatTensor(blob_label(y_train, 1, [1,2,3]))

x_test, y_test = make_blobs(n_samples=10, n_features=2, cluster_std=1.5, shuffle=True)
x_test = torch.FloatTensor(x_test)
y_test = torch.FloatTensor(blob_label(y_test, 0, [0]))
y_test = torch.FloatTensor(blob_label(y_test, 1, [1,2,3]))

model = Feedforward(2, 10)
criterion = torch.nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr = 0.01)

model.eval()
y_pred = model(x_test)
before_train = criterion(y_pred.squeeze(), y_test)
print('Test loss before training' , before_train.item())

model.train()
epochs = 20
for epoch in range(epochs):
    optimizer.zero_grad()
    # Forward pass
    y_pred = model(x_train)
    # Compute the two losses and sum them
    lossCE = criterion(y_pred.squeeze(), y_train)
    lossSQD = (y_pred.squeeze() - y_train).pow(2).mean()
    loss = lossCE + lossSQD
    print('Epoch {}: train loss: {}'.format(epoch, loss.item()))
    # Backward pass on the summed loss
    loss.backward()
    optimizer.step()

There must really be a second backward somewhere: you call backward, directly or indirectly, on some variable and then traverse your graph again. It is a bit much to ask for the complete code here; only you can check this, or at least reduce it to a minimal example (and while doing so, you may well discover the problem yourself). Apart from that, I would start checking:
1. Does it already happen in the first training iteration? If not: are you reusing any computation result from the previous iteration without detach? (See the sketch after this list.)
1. When you run backward on your losses separately, lloss.backward() followed by dloss.backward() (which has the same effect as summing them first, since gradients accumulate): what happens? This lets you track down which of the two losses triggers the error.
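To make the first point concrete, here is a minimal sketch (a hypothetical toy loop, not the asker's model) that reproduces this exact RuntimeError by carrying a tensor across iterations without detach:

import torch

w = torch.randn(3, requires_grad=True)
hidden = torch.zeros(3)  # state carried from one iteration to the next

for step in range(3):
    # Fix: uncomment the next line so each iteration builds a fresh graph
    # hidden = hidden.detach()
    hidden = torch.tanh(hidden + w)  # still attached to the previous iteration's graph
    loss = hidden.sum()
    loss.backward()  # step 1 raises: "Trying to backward through the graph a second time ..."

The second backward has to traverse the step-0 subgraph again to reach w through the carried hidden state, but its buffers were already freed by the first backward; detaching the carried state cuts that link.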

Answer 2 — q3aa0525

After backward(), your computation graph is freed, so for a second backward you need to create a new graph by feeding the inputs again. If you want to backward through the same graph again (for whatever reason), you need to set the retain_graph flag of backward to True. See retain_graph here.
P.S. Since the sum of tensors is automatically differentiable, summing the losses causes no problem for backward.
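A minimal sketch of that flag (the variables here are illustrative only):

import torch

x = torch.randn(4, requires_grad=True)
y = (x ** 2).sum()

y.backward(retain_graph=True)  # keep the graph buffers alive
y.backward()                   # a second pass over the same graph now works
print(x.grad)                  # gradients from both passes accumulate: 2 * (2 * x)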
