PyTorch infinite loop in the training and validation step

Asked by ee7vknir on 2023-05-17

The Dataset and DataLoader parts are fine (I recycled them from another piece of code I wrote), but this part of my code hangs in an infinite loop:

def train(train_loader, MLP, epoch, criterion, optimizer):
    MLP.train()
    epoch_loss = []

    for batch in train_loader:
        optimizer.zero_grad()
        sample, label = batch

        # Forward
        pred = MLP(sample)
        loss = criterion(pred, label)
        epoch_loss.append(loss.item())  # .item() extracts the scalar; avoids keeping tensors around

        # Backward
        loss.backward()
        optimizer.step()

    epoch_loss = np.asarray(epoch_loss)
    print('Epoch: {}, Loss: {:.4f} +/- {:.4f}'.format(
        epoch + 1, epoch_loss.mean(), epoch_loss.std()))

def test(test_loader, MLP, epoch, criterion):
    MLP.eval()
    with torch.no_grad():
        epoch_loss = []

        for batch in test_loader:  # note: iterate test_loader here, not train_loader
            sample, label = batch

            # Forward
            pred = MLP(sample)
            loss = criterion(pred, label)
            epoch_loss.append(loss.item())

        epoch_loss = np.asarray(epoch_loss)
        print('Epoch: {}, Loss: {:.4f} +/- {:.4f}'.format(
            epoch + 1, epoch_loss.mean(), epoch_loss.std()))

Then I call them inside the epoch loop:

for epoch in range(args['num_epochs']):
    train(train_loader, MLP, epoch, criterion, optimizer)
    test(test_loader, MLP, epoch, criterion)
    print('-----------------------')

Since it doesn't even print the first loss, I believe the logic error is in the train function, but I can't tell where it is.
Edit: here is my MLP class, in case the problem is there:

class BikeRegressor(nn.Module):

    def __init__(self, input_size, hidden_size, out_size):
        super(BikeRegressor, self).__init__()

        self.features = nn.Sequential(nn.Linear(input_size, hidden_size),
                                      nn.ReLU(),
                                      nn.Linear(hidden_size, hidden_size),
                                      nn.ReLU())

        self.out = nn.Sequential(nn.Linear(hidden_size, out_size),
                                 nn.ReLU())

    def forward(self, X):
        hidden = self.features(X)
        output = self.out(hidden)
        return output

Edit 2: Dataset and DataLoader:

class Bikes(Dataset):
    def __init__(self, data):  # data is a pandas DataFrame
        self.datas = data.to_numpy()

    def __getitem__(self, idx):
        sample = self.datas[idx][2:14]
        label = self.datas[idx][-1:]

        sample = torch.from_numpy(sample.astype(np.float32))
        label = torch.from_numpy(label.astype(np.float32))

        return sample, label

    def __len__(self):
        return len(self.datas)


train_set = Bikes(ds_train)
test_set = Bikes(ds_test)


train_loader = DataLoader(train_set, batch_size=args['batch_size'], shuffle=True, num_workers=args['num_workers'])
test_loader = DataLoader(test_set, batch_size=args['batch_size'], shuffle=True, num_workers=args['num_workers'])

ctzwtxfj #1

I ran into the same problem. The issue is that Jupyter Notebook may not handle multiprocessing properly, as described here:

Note: Functionality within this package requires that the __main__ module be importable by the children. This is covered in Programming guidelines however it is worth pointing out here. This means that some examples, such as the multiprocessing.Pool examples will not work in the interactive interpreter.
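In practice this means that when num_workers > 0, the spawned DataLoader workers must be able to re-import your entry module, which a notebook cell cannot guarantee. A minimal, self-contained sketch of the script-based workaround (the TensorDataset below is a hypothetical stand-in for the Bikes dataset, not the asker's actual data):

import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # Toy data standing in for the Bikes dataset: 100 samples, 12 features, 1 target.
    data = TensorDataset(torch.randn(100, 12), torch.randn(100, 1))
    loader = DataLoader(data, batch_size=16, shuffle=True, num_workers=2)
    for sample, label in loader:
        print(sample.shape, label.shape)

if __name__ == '__main__':
    # Without this guard, spawn-based worker processes re-execute the module
    # body when they import it, which can hang or crash the run.
    main()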

You have three options to solve the problem:

  • Set num_workers = 0 in train_loader and test_loader (the easiest fix; see the sketch after this list).
  • Move your code to Google Colab. It worked for me with num_workers = 6, but I think it depends on how much memory your program uses, so try increasing num_workers gradually until the program crashes and tells you it is out of memory.
  • Adapt your program to support multiprocessing inside Jupyter; these resources 1, 2 may help.
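For the first option, a sketch of the DataLoader construction from the question with the workers disabled, reusing the train_set, test_set, and args already defined above:

from torch.utils.data import DataLoader

# Option 1 sketch: num_workers=0 makes the main process load batches itself,
# sidestepping the notebook/multiprocessing hang entirely.
train_loader = DataLoader(train_set, batch_size=args['batch_size'],
                          shuffle=True, num_workers=0)
test_loader = DataLoader(test_set, batch_size=args['batch_size'],
                         shuffle=True, num_workers=0)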
