PyTorch expects more than 1 value per channel when training with BatchNorm

Asked by 7hiiyaii on 2022-12-13

I wrote the following code:

import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

inputDim = 10
n = 1000
X = np.random.rand(n,inputDim)
y = np.random.randint(0, 2, n)  # binary targets for BCELoss

tensor_x = torch.Tensor(X)
tensor_y = torch.Tensor(y).view(-1, 1)  # reshape to (n, 1) to match the model's output for BCELoss
Xy = TensorDataset(tensor_x, tensor_y)
XyLoader = DataLoader(Xy, batch_size = 16, shuffle = True, drop_last = True)

model = torch.nn.Sequential(
  torch.nn.Linear(inputDim, 200),
  torch.nn.ReLU(),
  torch.nn.BatchNorm1d(num_features=200),
  torch.nn.Linear(200,100),
  torch.nn.Tanh(),
  torch.nn.BatchNorm1d(num_features=100),
  torch.nn.Linear(100,1),
  torch.nn.Sigmoid()
)

optimizer = torch.optim.Adam(model.parameters(), lr= 0.001)
loss_fn = torch.nn.BCELoss()

nepochs = 1000
for epochs in range(nepochs):
  for X,y in XyLoader:
    batch_size = X.shape[0]
    y_hat = model(X.view(batch_size,-1))
    loss = loss_fn(y_hat, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
  xt = torch.tensor(np.random.rand(1,inputDim))
  y2 = model(xt.float())
  print(y2.detach().numpy()[0][0])

What am I doing wrong with torch.nn.BatchNorm1d? If I run the code without the two BatchNorm1d lines everything works "fine", so what could the problem be?

kfgdxczn #1

In your case, PyTorch is simply complaining about the shape of the input to nn.BatchNorm1d: the input here is expected to have shape (B, C, L), where C is the embedding (channel) dimension and L is the length of the input sequence, i.e. the number of time steps. In PyTorch, "1d" usually refers to a sequence, e.g. a tokenized sentence in which each of the L tokens is represented as a C-dimensional vector, with the channel dimension C sitting on dim 1. To fix the error you can do the following:

tensor_x = torch.as_tensor(X, dtype=torch.float32).unsqueeze(1)    # add a channel dim: (n, 10) -> (n, 1, 10)
tensor_y = torch.as_tensor(y, dtype=torch.float32).view(-1, 1, 1)  # reshape labels to match the model's (B, 1, 1) output
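
To make the shape convention concrete, a minimal sketch: nn.BatchNorm1d(num_features=C) accepts input of shape (B, C) or (B, C, L), and num_features must equal the size of dim 1:

import torch

bn = torch.nn.BatchNorm1d(num_features=1)
x = torch.randn(16, 1, 10)   # (B, C, L): batch of 16, 1 channel, sequence length 10
print(bn(x).shape)           # torch.Size([16, 1, 10])

bn200 = torch.nn.BatchNorm1d(num_features=200)
x2 = torch.randn(16, 200)    # a 2D (B, C) input is also accepted
print(bn200(x2).shape)       # torch.Size([16, 200])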

The complete modified code:

import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

inputDim = 10
n = 1000
X = np.random.rand(n, inputDim)
y = np.random.randint(0, 2, n)  # binary targets for BCELoss

tensor_x = torch.as_tensor(X, dtype=torch.float32).unsqueeze(1)    # (n, 1, 10)
tensor_y = torch.as_tensor(y, dtype=torch.float32).view(-1, 1, 1)  # (n, 1, 1), same shape as y_hat
Xy = TensorDataset(tensor_x, tensor_y)
XyLoader = DataLoader(Xy, batch_size = 16, shuffle = True, drop_last = True)

model = torch.nn.Sequential(
  torch.nn.Linear(inputDim, 200),
  torch.nn.ReLU(),
  torch.nn.BatchNorm1d(num_features=1),
  torch.nn.Linear(200, 100),
  torch.nn.Tanh(),
  torch.nn.BatchNorm1d(num_features=1),
  torch.nn.Linear(100,1),
  torch.nn.Sigmoid()
)

optimizer = torch.optim.Adam(model.parameters(), lr= 0.001)
loss_fn = torch.nn.BCELoss()

nepochs = 1000
for epochs in range(nepochs):
    for X, y in XyLoader:
        y_hat = model(X)  # keep the (B, 1, L) shape; flattening it would break BatchNorm1d(num_features=1)
        loss = loss_fn(y_hat, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

with torch.no_grad():
    xt = torch.as_tensor(np.random.rand(1, inputDim), dtype=torch.float32).unsqueeze(1)
    y2 = model(xt)
    print(y2.item())
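
Note that the error in the title, "Expected more than 1 value per channel when training", is raised whenever a BatchNorm layer in training mode sees only a single value per channel, which is exactly what the original code's final one-sample prediction does: the (1, 10) input becomes (1, 200) at the first BatchNorm1d(200). An alternative worth knowing, as a minimal sketch against the modified model above: put the model in evaluation mode before predicting, so BatchNorm normalizes with its running statistics and a batch of one is fine:

model.eval()  # BatchNorm now uses running mean/var instead of per-batch statistics
with torch.no_grad():
    xt = torch.as_tensor(np.random.rand(1, inputDim), dtype=torch.float32).unsqueeze(1)
    print(model(xt).item())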
