PyTorch: a very simple model does not train

Asked by enxuqcxy on 2023-10-20 in Other

I watched a YouTube tutorial called PyTorch for Deep Learning & Machine Learning.
I tried to build a very simple linear regression model based on the video. Below is the code for the model and the training loop, together with its output.
For some reason the model does not train. I hand the parameters to an optimiser, create a loss function, backpropagate, and finally update the parameters with step(). As you can see from the output, the loss values are very strange. I don't understand why it isn't working.

# Imports
import torch
from torch import nn
from torch import optim

# Create model class
class LinearRegressionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear_layer = nn.Linear(in_features=1, out_features=1)
        
    # Forward method to define the computation in the model
    def forward(self, X: torch.Tensor) -> torch.Tensor:
        return self.linear_layer(X)

    
# Set manual seed
torch.manual_seed(42)

# Create data
weight, bias = 0.7, 0.3
start, end, step = 0, 1, 0.02
X = torch.arange(start, end, step).unsqueeze(dim=1)
y = weight*X + bias + 0.035*torch.randn_like(X) 

# Create train/test split
train_split = int(0.8*len(X))
X_train, y_train = X[:train_split], y[:train_split]
X_test,  y_test  = X[train_split:], y[train_split:]

# Create model
model = LinearRegressionModel()
print(model, end="\n\n")
print(model.state_dict(), end="\n\n")

# Create loss function
loss_fn = nn.L1Loss()

# Create optimiser
optimiser = optim.SGD(params=model.parameters(), lr=1e2)

# Training loop 
epochs = 200
for epoch in range(1, epochs+1):
    # Set model to training mode
    model.train()
    
    # Forward pass
    y_pred = model(X_train)
    
    # Calculate loss
    loss = loss_fn(y_pred, y_train)
    
    # Zero gradients in optimiser
    optimiser.zero_grad()
    
    # Backpropagate
    loss.backward()
    
    # Update the model's parameters
    optimiser.step()
    
    ### Evaluate the current state
    model.eval()
    with torch.inference_mode():
        test_pred = model(X_test)
        test_loss = loss_fn(test_pred, y_test)
    
    # Print the current state
    if epoch == 1 or epoch % 10 == 0:
        print("Epoch: {:3} | Loss: {:.2f} | Test loss {:.2f}".format(epoch,loss,test_loss))

print()
print(model.state_dict())

Output:

LinearRegressionModel(
  (linear_layer): Linear(in_features=1, out_features=1, bias=True)
)

OrderedDict([('linear_layer.weight', tensor([[0.8294]])), ('linear_layer.bias', tensor([-0.5927]))])

Epoch:   1 | Loss: 0.85 | Test loss: 133.93
Epoch:  10 | Loss: 114.36 | Test loss: 0.78
Epoch:  20 | Loss: 114.36 | Test loss: 0.78
Epoch:  30 | Loss: 114.36 | Test loss: 0.78
Epoch:  40 | Loss: 114.36 | Test loss: 0.78
Epoch:  50 | Loss: 114.36 | Test loss: 0.78
Epoch:  60 | Loss: 114.36 | Test loss: 0.78
Epoch:  70 | Loss: 114.36 | Test loss: 0.78
Epoch:  80 | Loss: 114.36 | Test loss: 0.78
Epoch:  90 | Loss: 114.36 | Test loss: 0.78
Epoch: 100 | Loss: 114.36 | Test loss: 0.78
Epoch: 110 | Loss: 114.36 | Test loss: 0.78
Epoch: 120 | Loss: 114.36 | Test loss: 0.78
Epoch: 130 | Loss: 114.36 | Test loss: 0.78
Epoch: 140 | Loss: 114.36 | Test loss: 0.78
Epoch: 150 | Loss: 114.36 | Test loss: 0.78
Epoch: 160 | Loss: 114.36 | Test loss: 0.78
Epoch: 170 | Loss: 114.36 | Test loss: 0.78
Epoch: 180 | Loss: 114.36 | Test loss: 0.78
Epoch: 190 | Loss: 114.36 | Test loss: 0.78
Epoch: 200 | Loss: 114.36 | Test loss: 0.78

OrderedDict([('linear_layer.weight', tensor([[0.8294]])), ('linear_layer.bias', tensor([-0.5927]))])
Answer from noj0wjuj:

I think the code is fine; you only need to change one thing:
1. Learning rate: your learning rate is far too high, so replace it with a smaller value; here I used lr=1e-2:

optimiser = optim.SGD(params=model.parameters(), lr=1e-2)

With that change I get the following output:

LinearRegressionModel(
  (linear_layer): Linear(in_features=1, out_features=1, bias=True)
)

OrderedDict([('linear_layer.weight', tensor([[0.8294]])), ('linear_layer.bias', tensor([-0.5927]))])

Epoch:   1 | Loss: 0.85 | Test loss 0.77
Epoch:  10 | Loss: 0.74 | Test loss 0.64
Epoch:  20 | Loss: 0.63 | Test loss 0.51
Epoch:  30 | Loss: 0.51 | Test loss 0.37
Epoch:  40 | Loss: 0.40 | Test loss 0.24
Epoch:  50 | Loss: 0.28 | Test loss 0.11
Epoch:  60 | Loss: 0.17 | Test loss 0.04
Epoch:  70 | Loss: 0.10 | Test loss 0.12
Epoch:  80 | Loss: 0.09 | Test loss 0.16
Epoch:  90 | Loss: 0.08 | Test loss 0.18
Epoch: 100 | Loss: 0.07 | Test loss 0.19
Epoch: 110 | Loss: 0.07 | Test loss 0.18
Epoch: 120 | Loss: 0.07 | Test loss 0.18
Epoch: 130 | Loss: 0.06 | Test loss 0.17
Epoch: 140 | Loss: 0.06 | Test loss 0.16
Epoch: 150 | Loss: 0.06 | Test loss 0.15
Epoch: 160 | Loss: 0.06 | Test loss 0.14
Epoch: 170 | Loss: 0.05 | Test loss 0.14
Epoch: 180 | Loss: 0.05 | Test loss 0.13
Epoch: 190 | Loss: 0.05 | Test loss 0.12
Epoch: 200 | Loss: 0.05 | Test loss 0.12

OrderedDict([('linear_layer.weight', tensor([[0.9241]])), ('linear_layer.bias', tensor([0.2178]))])
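To see how sensitive this setup is to the learning rate, one can sweep a few values and compare where the loss ends up. Below is a minimal sketch, assuming the same synthetic data as in the question; the `final_loss` helper is illustrative, not part of the original code:

```python
import torch
from torch import nn, optim

torch.manual_seed(42)

# Same synthetic data as in the question (full set, no train/test split,
# since we only want to compare learning rates)
X = torch.arange(0, 1, 0.02).unsqueeze(dim=1)
y = 0.7 * X + 0.3 + 0.035 * torch.randn_like(X)

def final_loss(lr: float, epochs: int = 200) -> float:
    """Train a fresh 1-in/1-out linear layer with SGD;
    return the mean L1 loss over the last 10 epochs."""
    torch.manual_seed(42)  # identical initial weights for a fair comparison
    model = nn.Linear(1, 1)
    loss_fn = nn.L1Loss()
    optimiser = optim.SGD(model.parameters(), lr=lr)
    history = []
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimiser.step()
        history.append(loss.item())
    # Average the tail so an oscillating run is not sampled at a lucky epoch
    return sum(history[-10:]) / 10

for lr in (1e2, 1e-1, 1e-2, 1e-3):
    print(f"lr={lr:g}: final L1 loss = {final_loss(lr):.4f}")
```

Only the moderate learning rates actually shrink the loss; lr=1e2 leaves it stuck at a large value.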

A high learning rate tends to overshoot the optimum, which is why your run did not work.
I hope this helps.
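As an aside on why the question's loss freezes at exactly 114.36 rather than growing without bound: with L1Loss the gradient with respect to the bias is mean(sign(pred - y)), which is bounded by 1 in magnitude, so each SGD step moves the bias by at most lr. Once the error has a uniform sign, the parameters just flip between two states. A minimal sketch (initial weight and bias set to the values shown in the question's output) that records the bias for a few steps:

```python
import torch
from torch import nn, optim

torch.manual_seed(42)

# Same synthetic data as in the question
X = torch.arange(0, 1, 0.02).unsqueeze(dim=1)
y = 0.7 * X + 0.3 + 0.035 * torch.randn_like(X)

model = nn.Linear(1, 1)
with torch.no_grad():
    model.weight.fill_(0.8294)  # initial values taken from the question's output
    model.bias.fill_(-0.5927)

loss_fn = nn.L1Loss()
optimiser = optim.SGD(model.parameters(), lr=1e2)  # the problematic learning rate

# Because the L1 gradient is sign-based and bounded, the update size is fixed
# by the learning rate: the bias jumps by roughly +/- lr each step and the
# parameters flip between two states instead of diverging to infinity.
biases = []
for step in range(6):
    optimiser.zero_grad()
    loss_fn(model(X), y).backward()
    optimiser.step()
    biases.append(model.bias.item())
    print(f"step {step}: weight={model.weight.item():.4f}, bias={model.bias.item():.4f}")
```

This period-2 oscillation is why the printed loss in the question is identical at every sampled epoch.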
