Keras LSTM model isn't improving on sentiment analysis — what am I doing wrong?

Asked by hm2xizp9 on 2023-04-21

I have sentiment data with 3 labels (positive, negative, neutral) and 3233 rows. I have already tested Naive Bayes and SVM models on this data and got 90% accuracy with Naive Bayes and 92% with SVM.
Here is my model:
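For comparison, the classical baselines described above might look roughly like this (a sketch with toy data, assuming scikit-learn; the texts and labels here are illustrative, not the actual dataset):

```python
# Hypothetical TF-IDF baselines comparable to the 90% / 92% Naive Bayes
# and SVM results mentioned above (toy data, scikit-learn assumed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["great product", "terrible service", "it was okay",
         "love it", "hate it", "nothing special"]
labels = ["positive", "negative", "neutral",
          "positive", "negative", "neutral"]

# Fit both baselines on the same bag-of-words features.
nb = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)
svm = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, labels)

print(nb.predict(["great service"]))
print(svm.predict(["terrible product"]))
```

On a small dataset (3233 rows), strong classical baselines like these are often hard for an LSTM to beat.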

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SpatialDropout1D, LSTM, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint   # save model

EMBED_DIM = 16
LSTM_OUT = 32

model = Sequential()
model.add(Embedding(total_words, EMBED_DIM, input_length=x.shape[1]))
model.add(SpatialDropout1D(0.4))
model.add(LSTM(LSTM_OUT, dropout=0.2, recurrent_dropout=0.2, return_sequences=True))
model.add(LSTM(32, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(3, activation='softmax'))
optimizer = Adam(learning_rate=0.0001)   # `lr` is deprecated in recent Keras
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

print(model.summary())

checkpoint = ModelCheckpoint(
    'models/LSTM.h5',
    monitor='accuracy',
    save_best_only=True,
    verbose=1
)

model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=[checkpoint])

The accuracy stays around 30% and never improves. Thanks!
I have already tried changing the model to a single LSTM layer, removing the SpatialDropout1D, changing the LSTM units and embedding dimension, and changing the dropout values.
I have also tried a sigmoid activation with binary cross-entropy.
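One sanity check worth adding (my suggestion, not from the original post): accuracy stuck near 1/3 with three classes usually means the model never learns at all, and a common cause is a label/loss mismatch. `categorical_crossentropy` expects one-hot labels; plain integer labels need `sparse_categorical_crossentropy` instead. A minimal check:

```python
# Verify label shape matches the loss: categorical_crossentropy needs
# one-hot rows, not integer class ids (toy labels below).
import numpy as np

y_int = np.array([0, 2, 1, 0])       # toy integer labels for 3 classes
y_onehot = np.eye(3)[y_int]          # same result as keras.utils.to_categorical

print(y_onehot.shape)                # must be (num_samples, 3) for this loss
```

If `y_train` is 1-D integers, either convert it as above or switch the loss to `sparse_categorical_crossentropy`.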

Answer from nszi6y05:

I used a Flatten layer, and my target was a binary outcome. With this model I was able to reach about 80% accuracy:

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SpatialDropout1D, LSTM, Flatten, Dense

word_count = [len(x.split()) for x in df['Text'].tolist()]
max_length = np.max(word_count)   # longest document sets the input length
vocab_size = 70

model = Sequential()
model.add(Embedding(vocab_size, 1, input_length=max_length))
model.add(SpatialDropout1D(0.4))
model.add(LSTM(32, return_sequences=True))
model.add(LSTM(32, dropout=0.2, recurrent_dropout=0.2))
model.add(Flatten())
model.add(Dense(1, activation="sigmoid"))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)   # `lr` is deprecated
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])

model.fit(padded_docs, target, epochs=3, verbose=1)

model.summary()

loss, accuracy = model.evaluate(padded_docs, target, verbose=0)
print('Accuracy: %f' % (accuracy * 100))
