tensorflow: How to improve the accuracy of a CNN model? [closed]

lkaoscv7  asked on 2023-06-30  in  Other
Follow (0) | Answers (1) | Views (137)

**Closed.** This question is not about programming or software development. It is not currently accepting answers.

This question does not appear to be about a specific programming problem, a software algorithm, or a software tool primarily used by programmers. If you believe the question is on-topic on another Stack Exchange site, you can leave a comment explaining where it could be answered.

Closed 4 days ago.
I am building a CNN model to classify the 100-class Quick Draw dataset. I managed to get the training accuracy up to 0.96 and val_accuracy up to 0.84. I have tried increasing the model's complexity and using data augmentation, but no matter what I try, val_accuracy stays at 0.83 or below. How can I improve it? I am new to deep learning and machine learning, so please forgive me if I get something wrong.
Here is my model, without data augmentation:

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Conv2D, BatchNormalization, MaxPool2D,
                                     Flatten, Dense, Dropout)
from tensorflow.keras.optimizers import Adam

model_15 = Sequential([
    Conv2D(128, 3, input_shape=(28, 28, 1), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPool2D(),
    Conv2D(256, 3, activation='relu', padding='same'),
    BatchNormalization(),
    MaxPool2D(),
    Conv2D(512, 3, activation='relu', padding='same'),
    BatchNormalization(),
    MaxPool2D(),
    Conv2D(1024, 3, activation='relu', padding='same'),
    BatchNormalization(),
    MaxPool2D(),
    Flatten(),
    Dense(1024, activation='relu'),
    Dropout(0.5),
    Dense(1024, activation='relu'),
    Dropout(0.5),
    Dense(num_classes, activation='softmax')
])

model_15.compile(loss='categorical_crossentropy',
                 optimizer=Adam(),
                 metrics=['accuracy'])

rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=1)
estop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=4, restore_best_weights=True)

history_15 = model_15.fit(x=x_train, y=y_train_onehot, epochs=50, validation_split=0.1, batch_size=64, callbacks=[rlronp, estop])

Here is the output:
Epoch 1/50  5625/5625 [==============================] - 126s 21ms/step - loss: 1.5990 - accuracy: 0.6039 - val_loss: 1.0083 - val_accuracy: 0.7563 - lr: 0.0010
Epoch 2/50  5625/5625 [==============================] - 118s 21ms/step - loss: 0.9874 - accuracy: 0.7558 - val_loss: 0.8360 - val_accuracy: 0.7886 - lr: 0.0010
Epoch 3/50  5625/5625 [==============================] - 120s 21ms/step - loss: 0.8319 - accuracy: 0.7943 - val_loss: 0.7624 - val_accuracy: 0.8137 - lr: 0.0010
Epoch 4/50  5625/5625 [==============================] - 118s 21ms/step - loss: 0.7374 - accuracy: 0.8160 - val_loss: 0.7364 - val_accuracy: 0.8161 - lr: 0.0010
Epoch 5/50  5625/5625 [==============================] - 118s 21ms/step - loss: 0.6751 - accuracy: 0.8309 - val_loss: 0.7469 - val_accuracy: 0.8193 - lr: 0.0010
Epoch 6/50  5625/5625 [==============================] - 120s 21ms/step - loss: 0.5142 - accuracy: 0.8676 - val_loss: 0.6842 - val_accuracy: 0.8342 - lr: 5.0000e-04
Epoch 7/50  5625/5625 [==============================] - 120s 21ms/step - loss: 0.4408 - accuracy: 0.8839 - val_loss: 0.7071 - val_accuracy: 0.8389 - lr: 5.0000e-04
Epoch 8/50  5625/5625 [==============================] - 118s 21ms/step - loss: 0.3405 - accuracy: 0.9074 - val_loss: 0.7326 - val_accuracy: 0.8416 - lr: 2.5000e-04
Epoch 9/50  5625/5625 [==============================] - 118s 21ms/step - loss: 0.2729 - accuracy: 0.9237 - val_loss: 0.7882 - val_accuracy: 0.8434 - lr: 1.2500e-04
Epoch 10/50 5625/5625 [==============================] - 120s 21ms/step - loss: 0.2359 - accuracy: 0.9333 - val_loss: 0.8250 - val_accuracy: 0.8420 - lr: 6.2500e-05


rhfm7lfc1#

1) Increase depth/width and use regularization:

I suggest increasing the model's depth by adding more convolutional layers, or its width by using more filters per layer. This often improves accuracy. However, it can also make the model more prone to **overfitting**, so consider balancing it with regularization. Here is an example:

from tensorflow.keras import regularizers

model_15 = Sequential([
    Conv2D(128, 3, input_shape=(28, 28, 1), activation='relu', padding='same', kernel_regularizer=regularizers.l2(0.001)),
    BatchNormalization(),
    MaxPool2D(),
    Conv2D(256, 3, activation='relu', padding='same', kernel_regularizer=regularizers.l2(0.001)),
    BatchNormalization(),
    MaxPool2D(),
    Conv2D(512, 3, activation='relu', padding='same', kernel_regularizer=regularizers.l2(0.001)),
    BatchNormalization(),
    MaxPool2D(),
    Conv2D(1024, 3, activation='relu', padding='same', kernel_regularizer=regularizers.l2(0.001)),
    BatchNormalization(),
    MaxPool2D(),
    Conv2D(2048, 3, activation='relu', padding='same', kernel_regularizer=regularizers.l2(0.001)),
    BatchNormalization(),
    # No fifth MaxPool2D here: after four pooling stages the 28x28 input is
    # already down to 1x1, and another 2x2 pool would raise a dimension error.
    Flatten(),
    Dense(1024, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
    Dropout(0.5),
    Dense(1024, activation='relu', kernel_regularizer=regularizers.l2(0.001)),
    Dropout(0.5),
    Dense(num_classes, activation='softmax')
])
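Since you mentioned data augmentation, one way to pair it with the regularized model above is Keras preprocessing layers. This is a minimal sketch; the rotation/translation/zoom factors are assumptions to tune, and flips are left out because many Quick Draw classes are orientation-sensitive:

```python
import tensorflow as tf
from tensorflow.keras import Sequential, layers

# Minimal augmentation pipeline (the factors are assumptions -- tune them).
# Flips are omitted: many Quick Draw classes are orientation-sensitive.
augment = Sequential([
    layers.RandomRotation(0.05),          # rotate up to ~18 degrees either way
    layers.RandomTranslation(0.1, 0.1),   # shift up to 10% of height/width
    layers.RandomZoom(0.1),               # zoom in/out by up to 10%
])

x = tf.zeros((4, 28, 28, 1))
x_aug = augment(x, training=True)   # augmentation is only active in training mode
print(x_aug.shape)                  # (4, 28, 28, 1)
```

You can prepend `augment` as the first element of the model's layer list so it runs on the GPU and is skipped automatically at inference time.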

2) Use a pre-trained model:

If the above does not give the results you want, consider taking a pre-trained model and fine-tuning it on the Quick Draw dataset. Pre-trained models (e.g. ResNet, Inception, or EfficientNet trained on the ImageNet or COCO datasets) can benefit your task. You could modify your code as follows:

from tensorflow.keras import Sequential
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.layers import Flatten, Dense, Dropout
# Set the number of classes
num_classes = 100

# Load pre-trained EfficientNet model without the top layers
efficient_net = EfficientNetB0(weights="imagenet", include_top=False, input_shape=(28, 28, 3))

# NOTE: see the remark below about input_shape=(28, 28, 3)!

# Make the EfficientNet model non-trainable
for layer in efficient_net.layers:
    layer.trainable = False

model_15 = Sequential([
    efficient_net,
    Flatten(),
    Dense(1024, activation='relu'),
    Dropout(0.5),
    Dense(1024, activation='relu'),
    Dropout(0.5),
    Dense(num_classes, activation='softmax')
])

Important note: your original input shape is (28, 28, 1), but EfficientNet expects 3 channels, i.e. (28, 28, 3). You may need to preprocess your grayscale image data into RGB format, or use a different pre-trained model that accepts a (28, 28, 1) input shape. The Keras EfficientNet variants also expect spatial dimensions of at least 32×32 when loading ImageNet weights, so you will likely need to upsample the images as well.
The code I have provided is just a starting point for you to modify further, so keep that in mind :)
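To illustrate that note, one way to adapt the grayscale 28×28 data for EfficientNet is to replicate the channel and upsample. A minimal sketch (the 32×32 target is an assumption based on the smallest size the Keras EfficientNet models accept; larger sizes often work better):

```python
import tensorflow as tf

# Dummy grayscale batch standing in for the Quick Draw data: (N, 28, 28, 1)
x = tf.random.uniform((4, 28, 28, 1), maxval=255.0)

# Replicate the single channel three times -> (N, 28, 28, 3)
x_rgb = tf.image.grayscale_to_rgb(x)

# Upsample to 32x32 so the spatial dimensions meet the model's minimum
x_resized = tf.image.resize(x_rgb, (32, 32))
print(x_resized.shape)  # (4, 32, 32, 3)
```

The same two ops can be wrapped in a `tf.data` `map` call so the conversion happens on the fly during training.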
