keras: Why is my loss function increasing every epoch?

yiytaume · posted 2023-03-02 in: Other
Follow (0) | Answers (2) | Views (169)

I'm new to ML, so I apologize if this is a silly question with an obvious answer. I'm using TensorFlow and Keras here.
Here is my code:

import tensorflow as tf
import numpy as np
from tensorflow import keras
model = keras.Sequential([
    keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer="sgd", loss="mean_squared_error")
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0], dtype=float)
ys = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)
model.fit(xs, ys, epochs=500)
print(model.predict([25.0]))

The output I get looks like this (I'm not showing all 500 lines, only the first 20 epochs):

Epoch 1/500
1/1 [==============================] - 0s 210ms/step - loss: 450.9794
Epoch 2/500
1/1 [==============================] - 0s 4ms/step - loss: 1603.0852
Epoch 3/500
1/1 [==============================] - 0s 10ms/step - loss: 5698.4731
Epoch 4/500
1/1 [==============================] - 0s 7ms/step - loss: 20256.3398
Epoch 5/500
1/1 [==============================] - 0s 10ms/step - loss: 72005.1719
Epoch 6/500
1/1 [==============================] - 0s 4ms/step - loss: 255956.5938
Epoch 7/500
1/1 [==============================] - 0s 3ms/step - loss: 909848.5000
Epoch 8/500
1/1 [==============================] - 0s 5ms/step - loss: 3234236.0000
Epoch 9/500
1/1 [==============================] - 0s 3ms/step - loss: 11496730.0000
Epoch 10/500
1/1 [==============================] - 0s 3ms/step - loss: 40867392.0000
Epoch 11/500
1/1 [==============================] - 0s 3ms/step - loss: 145271264.0000
Epoch 12/500
1/1 [==============================] - 0s 3ms/step - loss: 516395584.0000
Epoch 13/500
1/1 [==============================] - 0s 4ms/step - loss: 1835629312.0000
Epoch 14/500
1/1 [==============================] - 0s 3ms/step - loss: 6525110272.0000
Epoch 15/500
1/1 [==============================] - 0s 3ms/step - loss: 23194802176.0000
Epoch 16/500
1/1 [==============================] - 0s 3ms/step - loss: 82450513920.0000
Epoch 17/500
1/1 [==============================] - 0s 3ms/step - loss: 293086593024.0000
Epoch 18/500
1/1 [==============================] - 0s 5ms/step - loss: 1041834835968.0000
Epoch 19/500
1/1 [==============================] - 0s 3ms/step - loss: 3703408164864.0000
Epoch 20/500
1/1 [==============================] - 0s 3ms/step - loss: 13164500484096.0000

As you can see, the loss grows exponentially. Soon (around epoch 64) the numbers become inf, and shortly after that they turn into NaN (not a number). I thought the model would get better at figuring out the pattern over time, so what is going on?
One thing I noticed: if I reduce the length of xs and ys from 20 to 10, the loss decreases and reaches 7.9193e-05. Once I increase the length of the two numpy arrays to 18, the loss starts to grow out of control; below that it is fine. I used 20 values because I thought the model would do better if I gave it more data.


ngynwnxp #1

Your alpha/learning rate seems to be too large.
Try a lower learning rate, like this:

import tensorflow as tf
import numpy as np
from tensorflow import keras
model = keras.Sequential([
    keras.layers.Dense(units=1, input_shape=[1])
])
# manually set the optimizer, default learning_rate=0.01
opt = keras.optimizers.SGD(learning_rate=0.0001)

model.compile(optimizer=opt, loss="mean_squared_error")
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0], dtype=float)
ys = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)
model.fit(xs, ys, epochs=500)
print(model.predict([25.0]))

...and it will converge.
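As a back-of-the-envelope check of why the default rate blows up on this particular data (a rough sketch under simplifying assumptions: full-batch gradient descent on the MSE, bias term ignored), plain GD on a single weight w is only stable when learning_rate < 1/mean(x**2), and that threshold drops below the Keras SGD default of 0.01 right around 18 data points:

import numpy as np

# For loss = mean((w*x - y)**2) the update is
#   w <- w * (1 - 2*lr*mean(x**2)) + 2*lr*mean(x*y),
# which only converges when |1 - 2*lr*mean(x**2)| < 1,
# i.e. when lr < 1/mean(x**2).
for n in (10, 18, 20):
    xs = np.arange(1.0, n + 1.0)
    print(n, "points -> critical lr ~", 1.0 / np.mean(xs ** 2))

# 10 points -> critical lr ~ 0.026   (default 0.01 still converges)
# 18 points -> critical lr ~ 0.0085  (default 0.01 already diverges)
# 20 points -> critical lr ~ 0.0070  (default 0.01 diverges)

That matches the observation in the question that 10 points converge while 18 or 20 diverge.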
One reason Adam may work better here is that it adaptively estimates the learning rate; I believe the "A" in Adam stands for adaptive ;)

Edit: it does indeed!

From https://arxiv.org/pdf/1412.6980.pdf:
"The method computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients; the name Adam is derived from adaptive moment estimation."

Epoch 1/500
1/1 [==============================] - 0s 129ms/step - loss: 1.2133
Epoch 2/500
1/1 [==============================] - 0s 990us/step - loss: 1.1442
Epoch 3/500
1/1 [==============================] - 0s 0s/step - loss: 1.0792
Epoch 4/500
1/1 [==============================] - 0s 1ms/step - loss: 1.0178
Epoch 5/500
1/1 [==============================] - 0s 1ms/step - loss: 0.9599
Epoch 6/500
1/1 [==============================] - 0s 1ms/step - loss: 0.9053
Epoch 7/500
1/1 [==============================] - 0s 0s/step - loss: 0.8538
Epoch 8/500
1/1 [==============================] - 0s 1ms/step - loss: 0.8053
Epoch 9/500
1/1 [==============================] - 0s 999us/step - loss: 0.7595
Epoch 10/500
1/1 [==============================] - 0s 1ms/step - loss: 0.7163
...
Epoch 499/500
1/1 [==============================] - 0s 1ms/step - loss: 9.9431e-06
Epoch 500/500
1/1 [==============================] - 0s 999us/step - loss: 9.9420e-06
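For intuition about what "adaptive moment estimation" means in code, here is a minimal NumPy sketch of a single Adam update as described in the paper (an illustration only, not the Keras internals; the hyperparameter defaults follow the paper):

import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the zero initialization of m and v.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Each parameter gets its own effective step size lr / sqrt(v_hat):
    # large recent gradients shrink the step, which is what keeps Adam
    # from overshooting the way plain SGD does above.
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v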
Edit 2:

With true/"vanilla" gradient descent (as opposed to stochastic GD), you should see convergence at every step. If it starts to diverge, that is usually because the alpha/learning rate/step size is too large, which means the search "overshoots" in one, several, or all dimensions.
Consider a loss function whose partial derivatives/gradient form a very narrow valley in one or more dimensions; a single "small step too far" can suddenly produce a huge error.
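You can reproduce that overshooting without Keras at all. This tiny sketch (my own illustration, assuming full-batch GD on the question's data with a single weight and no bias) runs the exact same update loop and only changes the step size:

import numpy as np

xs = np.arange(1.0, 21.0)   # the question's inputs 1..20
ys = xs / 2.0               # the target relationship y = x/2

for lr in (0.01, 0.0001):   # Keras SGD default vs. the suggested rate
    w = 0.0
    for _ in range(20):
        grad = 2.0 * np.mean(xs * (w * xs - ys))  # d/dw of mean((w*x - y)**2)
        w -= lr * grad
    print(f"lr={lr}: w = {w:.4f}")

# lr=0.01   oscillates with growing amplitude (|w| explodes),
# lr=0.0001 moves steadily toward the true slope 0.5.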


hujrc8aj #2

It seems the SGD optimizer does not perform well on your dataset. If you replace the optimizer with "adam", you should get the expected result.

model.compile(optimizer="adam", loss="mean_squared_error")

The prediction should then be what you expect:

print(model.predict([25.0]))
# [[12.487587]]

I'm not 100% sure why the SGD optimizer performs so poorly here.
Edit:
@MortenJensen (see the other answer) gives a good explanation of why the adam optimizer does better. In short: sgd performed poorly because it needs a smaller learning rate on this data, whereas adam uses an adaptive learning rate.
