PyTorch runtime error: expected scalar type Double but found Float

Asked by bjg7j2ky on 2022-12-18

I am using a GCNN. My input data is float64, but whenever I run my code this error appears. I tried converting all tensors to double, without success. My data starts out as numpy arrays, which I then convert to PyTorch tensors.
Here is how I turn the numpy arrays into tensors and wrap them in a geometric Data object so the GCNN can run on them.

e_index1 = torch.tensor(edge_index)
x1 = torch.tensor(x)
y1 = torch.tensor(y)

print(x.dtype)
print(y.dtype)
print(edge_index.dtype)

from torch_geometric.data import Data
data = Data(x=x1, edge_index=e_index1, y=y1)

Output:

float64
float64
int64

Below is the code for my GCNN class and the rest of my code.
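The class itself is not shown in the post; the following is only a minimal sketch that is consistent with the fragments visible in the traceback below (the hidden size, the second convolution layer, the optimizer, and the train_mask are assumptions, not the original code):

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCNN(torch.nn.Module):
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.conv1 = GCNConv(num_features, 16)   # hidden size 16 is a placeholder
        self.conv2 = GCNConv(16, num_classes)

    def forward(self, data):
        # This cast promotes the input to float64 while the GCNConv weights
        # stay float32, which is where the dtype mismatch originates.
        x, edge_index = data.x.type(torch.DoubleTensor), data.edge_index

        x = self.conv1(x, edge_index)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index)
        return F.log_softmax(x, dim=1)

# Training loop mirroring the traceback (optimizer setup and class count are assumptions;
# data is assumed to also carry a train_mask, as implied by the traceback).
num_classes = int(data.y.max().item()) + 1
model = GCNN(data.num_node_features, num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(10):
    optimizer.zero_grad()
    out = model(data)
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()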
Error log

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-148-e816c251670b> in <module>
      7 for epoch in range(10):
      8     optimizer.zero_grad()
----> 9     out = model(data)
     10     loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
     11     loss.backward()

5 frames
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

<ipython-input-147-c1bfee724570> in forward(self, data)
     13         x, edge_index = data.x.type(torch.DoubleTensor), data.edge_index
     14 
---> 15         x = self.conv1(x, edge_index)
     16         x = F.relu(x)
     17         x = F.dropout(x, training=self.training)

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.8/dist-packages/torch_geometric/nn/conv/gcn_conv.py in forward(self, x, edge_index, edge_weight)
    193                     edge_index = cache
    194 
--> 195         x = self.lin(x)
    196 
    197         # propagate_type: (x: Tensor, edge_weight: OptTensor)

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.8/dist-packages/torch_geometric/nn/dense/linear.py in forward(self, x)
    134             x (Tensor): The features.
    135         """
--> 136         return F.linear(x, self.weight, self.bias)
    137 
    138     @torch.no_grad()

RuntimeError: expected scalar type Double but found Float

I also tried the solution from a Stack Overflow post, but it did not work; the same error keeps coming back.

Answer 1, by xytpbqjk

You can convert all of the model's parameters to double precision with model.double(). If your input data is double, that gives you a compatible model. Keep in mind that double precision is usually slower than single precision.
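A minimal sketch of that fix, plus the usual alternative of keeping everything in float32 (the GCNN class and the numpy arrays x, y, edge_index are the ones from the question; everything else is illustrative):

import torch
from torch_geometric.data import Data

# Option 1: cast all model parameters and buffers to float64 so they match the data.
model = GCNN(data.num_node_features, num_classes)
model = model.double()

# Option 2 (usually faster): keep the model in float32 and cast the data instead.
x1 = torch.tensor(x, dtype=torch.float32)               # node features as float32
e_index1 = torch.tensor(edge_index, dtype=torch.long)   # edge indices must stay integer
y1 = torch.tensor(y, dtype=torch.long)                  # nll_loss expects integer class labels
data = Data(x=x1, edge_index=e_index1, y=y1)

Either way, the dtype of the model weights and of data.x must match; mixing float32 weights with float64 inputs is exactly what triggers "expected scalar type Double but found Float" in the traceback. If you go with the float32 route, also remove the .type(torch.DoubleTensor) cast inside forward(), otherwise the inputs are promoted back to double; float32 is the default dtype for torch.nn layers and is generally faster.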
