Keras: high training and validation accuracy, but poor predictions

Asked by tuwxkamq on 2022-11-24, category: Other

I implemented a bidirectional LSTM in Keras. During training, both the training accuracy and the validation accuracy sit at about 0.83, and the loss plateaus at about 0.45:

Epoch 1/50
32000/32000 [==============================] - 597s 19ms/step - loss: 0.4611 - accuracy: 0.8285 - val_loss: 0.4515 - val_accuracy: 0.8316
Epoch 2/50
32000/32000 [==============================] - 589s 18ms/step - loss: 0.4563 - accuracy: 0.8299 - val_loss: 0.4514 - val_accuracy: 0.8320
Epoch 3/50
32000/32000 [==============================] - 584s 18ms/step - loss: 0.4561 - accuracy: 0.8299 - val_loss: 0.4513 - val_accuracy: 0.8318
Epoch 4/50
32000/32000 [==============================] - 612s 19ms/step - loss: 0.4560 - accuracy: 0.8300 - val_loss: 0.4513 - val_accuracy: 0.8319
Epoch 5/50
32000/32000 [==============================] - 572s 18ms/step - loss: 0.4559 - accuracy: 0.8299 - val_loss: 0.4512 - val_accuracy: 0.8318

Here is my model:

model = tf.keras.Sequential()
model.add(Masking(mask_value=0., input_shape=(timesteps, features)))
model.add(Bidirectional(LSTM(units=100, return_sequences=True), input_shape=(timesteps, features)))
model.add(Dropout(0.7))
model.add(Dense(1, activation='sigmoid'))

I normalized the dataset with scikit-learn's StandardScaler.
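One thing worth double-checking here (a minimal sketch with toy data, not the original pipeline): a StandardScaler should be fitted on the training data only, and the same fitted instance reused to transform the test set, so that both sets are scaled with identical statistics.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy matrices standing in for the real feature data.
train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
test = np.array([[2.0, 25.0]])

scaler = StandardScaler()
scaler.fit(train)                     # learn mean/std from the training data only
train_scaled = scaler.transform(train)
test_scaled = scaler.transform(test)  # reuse the SAME statistics at test time
```

Fitting a second scaler on the test set instead (as in the test phase further down) gives the test data statistics different from those the network saw during training.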
I use a custom loss:

def get_top_one_probability(vector):
  return (K.exp(vector) / K.sum(K.exp(vector)))

def listnet_loss(real_labels, predicted_labels):
  return -K.sum(get_top_one_probability(real_labels) * tf.math.log(get_top_one_probability(predicted_labels)))

My model.compile and model.fit calls are set up as follows:

model.compile(loss=listnet_loss, optimizer=keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95), metrics=["accuracy"])

model.fit(training_dataset, training_dataset_labels, validation_split=0.2, batch_size=1, 
            epochs=number_of_epochs, workers=10, verbose=1, 
            callbacks=[SaveModelCallback(), keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)])

Here is my test phase:

scaler = StandardScaler()
scaler.fit(test_dataset)
test_dataset = scaler.transform(test_dataset)

test_dataset = test_dataset.reshape((int(test_dataset.shape[0]/20), 20, test_dataset.shape[1]))
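As a side note, the reshape above assumes the number of test rows is an exact multiple of the sequence length (20 timesteps). A quick sanity check avoids silent shape errors (the array and sizes below are toy values for illustration):

```python
import numpy as np

timesteps = 20
flat = np.zeros((60, 5))  # 60 rows, 5 features (toy data)
assert flat.shape[0] % timesteps == 0, "rows must be a multiple of timesteps"
seqs = flat.reshape(flat.shape[0] // timesteps, timesteps, flat.shape[1])
```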

# Read model
json_model_file = open('/content/drive/My Drive/Tesi_magistrale/LSTM/models_padded_2/model_11.json', 'r')
loaded_model_json = json_model_file.read()
json_model_file.close()
model = model_from_json(loaded_model_json)
model.load_weights("/content/drive/My Drive/Tesi_magistrale/LSTM/models_weights_padded_2/model_11_weights.h5")

with open("/content/drive/My Drive/Tesi_magistrale/LSTM/predictions/padded/en_ewt-padded.H.pred", "w+") as predictions_file:
  predictions = model.predict(test_dataset)

I also rescale the test set, and after the predictions = model.predict(test_dataset) line I apply some business logic to process the predictions (the same logic is also used in the training phase).
I get very poor results on the test set, even though the results during training look good. What am I doing wrong?


0kjbasz6 (answer 1#)

Somehow, the Keras image generator works fine when combined with the fit() or fit_generator() functions, but fails miserably when combined with the predict_generator() or predict() functions.
When using the Plaid-ML Keras backend with an AMD processor, I would rather loop over all the test images one by one and get the prediction for each image in each iteration.

import os
from PIL import Image
import keras
import numpy

# code for creating and training the model is not included

print("Prediction result:")
dir = "/path/to/test/images"
files = os.listdir(dir)
correct = 0
total = 0
# dictionary mapping each class index to its label
classes = {
    0:'This is Cat',
    1:'This is Dog',
}
for file_name in files:
    total += 1
    image = Image.open(dir + "/" + file_name).convert('RGB')
    image = image.resize((100,100))
    image = numpy.array(image)
    image = numpy.expand_dims(image, axis=0)  # add batch dimension
    image = image/255
    pred = model.predict_classes(image)[0]
    animals_category = classes[pred]
    if ("cat" in file_name) and ("cat" in animals_category.lower()):
        correct += 1
        print(correct, ". ", file_name, animals_category)
    elif ("dog" in file_name) and ("dog" in animals_category.lower()):
        correct += 1
        print(correct, ". ", file_name, animals_category)
print("accuracy: ", (correct/total))
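An alternative to the per-image loop, when the backend allows it, is to stack all preprocessed images and run a single model.predict call. Note that predict_classes was removed in newer TensorFlow/Keras versions, so the class is derived from the probabilities instead; the probs array below is a placeholder standing in for real model output:

```python
import numpy as np

# Suppose each image was already resized, converted and scaled to [0, 1].
images = [np.zeros((100, 100, 3)), np.ones((100, 100, 3))]
batch = np.stack(images)          # shape (N, 100, 100, 3)

# probs = model.predict(batch)    # one forward pass for the whole batch
probs = np.array([[0.2], [0.9]])  # placeholder for sigmoid model output
pred_classes = (probs > 0.5).astype(int).ravel()  # class 1 if prob > 0.5
```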
