Training accuracy stays at 0 while loss decreases during training of an LSTM model in Keras

ao218c7q · posted 2021-07-14 · Java

I am training a small LSTM model with one layer of 7 LSTM cells followed by a one-unit sigmoid layer. The loss decreases during training, but the accuracy stays at 0 and never changes. Can you tell me why this happens? If you need the data files, please leave a comment.

import numpy
import tensorflow as tf
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD

additional_metrics = ['accuracy']
loss_function = BinaryCrossentropy()
number_of_epochs = 500
optimizer = SGD()
validation_split = 0.20
verbosity_mode = 1
mini = 0
maxi  = 0
mean = 0

def myfunc(arg): # mean normalization
    global mini, maxi, mean
    return (arg - mean) / (maxi - mini)

cgm = numpy.load('cgm_train_new.npy')
labels = numpy.load('labels_train_new.npy')
labs = list()
cgm_flat = cgm.flatten()
mini = min(cgm_flat)
maxi = max(cgm_flat)
mean = sum(cgm_flat) / len(cgm_flat)
cgm = numpy.apply_along_axis(myfunc, 0, cgm)

for each in labels:
    if each[-1] == 1: labs.append(.99)
    else: labs.append(.01)

RNNmodel = Sequential()
RNNmodel.add(LSTM(7, activation='tanh'))
RNNmodel.add(Dense(1, activation='sigmoid'))
RNNmodel.compile(optimizer=optimizer, loss=loss_function, metrics=additional_metrics)
cgm_rs = tf.reshape(cgm, [len(cgm), 7, 1])
history = RNNmodel.fit(
    cgm_rs,
    tf.reshape(labs, [len(labs), 1, 1]),
    batch_size=len(labs),
    epochs=number_of_epochs,
    verbose=verbosity_mode)

answers = RNNmodel.predict(cgm_rs)
for each in answers:
    print(each)
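A likely explanation, offered here as an assumption rather than a confirmed diagnosis: with a one-unit sigmoid output, Keras resolves the `'accuracy'` metric to `binary_accuracy`, which thresholds predictions at 0.5 and then compares the result to the labels with exact equality. Soft labels such as 0.99 and 0.01 never equal the thresholded values 1.0 and 0.0, so the metric reports 0 even when every prediction is on the correct side of 0.5. A minimal sketch of the effect:

```python
import tensorflow as tf

# Two predictions, both on the correct side of the 0.5 threshold.
y_pred = tf.constant([[0.9], [0.1]])

# binary_accuracy casts (y_pred > 0.5) to 1.0/0.0, then checks
# exact equality against y_true.
soft = tf.keras.metrics.binary_accuracy(
    tf.constant([[0.99], [0.01]]), y_pred)  # soft labels, as in the question
hard = tf.keras.metrics.binary_accuracy(
    tf.constant([[1.0], [0.0]]), y_pred)    # hard 0/1 labels

print(float(tf.reduce_mean(soft)))  # 0.0 — 0.99 never equals 1.0
print(float(tf.reduce_mean(hard)))  # 1.0
```

If that is the cause, mapping the labels to plain 0/1 (and reshaping the targets to `[len(labs), 1]` so they match the model's `[batch, 1]` output) should let the accuracy move; `BinaryCrossentropy` itself is indifferent to the soft labels, which is consistent with the loss decreasing while the accuracy stays frozen.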

cgm file
labels
