dataset = Data(params)
detector = Detector(params)
optimizer = torch.optim.Adam(detector.parameters(), lr=params['learning_rate'])
Loss = []
criterion = torch.nn.MSELoss()
for epoch in range(params['maxEpoch']):
    y, h_a, h_b, plus, hTy, hTh = dataset.generate()
    x_ = detector(hTy, hTh)
    loss = 0.0
    optimizer.zero_grad()
    for _ in range(1, params['DetNet_layer']):
        loss += criterion(x_[:, :, _], torch.from_numpy(plus).to(torch.double)) * math.log(_)
    loss.backward(retain_graph=True)
    optimizer.step()
    Loss.append(loss.item())
The above is my main.py file; the error occurs in loss.backward(). The forward method of the Detector class is:
def forward(self, HTy, HTH):
    HTy_torch = torch.from_numpy(HTy).unsqueeze(1)
    HTH_torch = torch.from_numpy(HTH).unsqueeze(1)
    x_torch = torch.from_numpy(np.zeros((self.batch_size, 1, self.L)))
    v_torch = torch.from_numpy(np.zeros((self.batch_size, 1, self.L)))
    for i in range(1, self.L):
        x_tmp, v_tmp = self.layers[i](HTy_torch, HTH_torch, x_torch[:, :, i-1], v_torch[:, :, i-1])
        x_torch[:, :, i] = x_tmp
        v_torch[:, :, i] = v_tmp
    return x_torch
Does the slice assignment [:, :, i] count as an in-place operation?
If so, how should I modify it?
I have already tried 'retain_graph=True' in loss.backward().
1 answer
uklbhaso1#
This may be caused by your use of `loss +=`.

Also, you probably want to use float rather than double there.
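Beyond that, the slice assignments `x_torch[:, :, i] = x_tmp` in `forward` are in-place writes, which is a common cause of this kind of backward error. One way to avoid them is to collect each layer's output in a Python list and `torch.stack` the results at the end. Below is a minimal sketch of that pattern; `ToyLayer` and `forward_no_inplace` are illustrative names (the real DetNet layers also take `HTy`/`HTH`, omitted here), and float32 is used throughout, as suggested above:

```python
import torch
import torch.nn as nn

class ToyLayer(nn.Module):
    """Stand-in for one DetNet layer: maps (x, v) -> (x, v)."""
    def __init__(self, L):
        super().__init__()
        self.lin = nn.Linear(L, L)

    def forward(self, x, v):
        x = torch.tanh(self.lin(x) + v)
        return x, v

def forward_no_inplace(layers, x0, v0):
    # Collect each layer's output in a list instead of assigning into a
    # preallocated tensor with x_torch[:, :, i] = x_tmp (an in-place op).
    x, v = x0, v0
    outs = [x]
    for layer in layers:
        x, v = layer(x, v)
        outs.append(x)
    # Stack along a new trailing dim -> (batch, L, n_layers + 1),
    # analogous to the original x_torch layout.
    return torch.stack(outs, dim=-1)

batch, L, n_layers = 4, 8, 3
layers = nn.ModuleList(ToyLayer(L) for _ in range(n_layers))
x0 = torch.zeros(batch, L)   # float32, matching the layer parameters
v0 = torch.zeros(batch, L)
out = forward_no_inplace(layers, x0, v0)
loss = out[..., -1].pow(2).mean()
loss.backward()              # no in-place error, no retain_graph needed
```

Since no tensor in the graph is modified in place, autograd can backpropagate through every layer, and `retain_graph=True` is no longer needed as a workaround.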