I can't get above 50% test accuracy with this simple CNN TensorFlow Keras model for image classification

Asked by 5vf7fwbs on 2022-12-04

The code is as follows. I have a highly imbalanced dataset of chest x-rays with heart enlargement. The images are separated into a training folder with subfolders for positive and negative cardiomegaly (467 positive images, ~20,000 negative), and a testing folder with two subfolders (300 positive, 300 negative). Every time I evaluate with the call below I get exactly 50% accuracy. When I look at the predictions, the model always assigns every image to one class (normally negative); if I give the positive class a very high weight (1000+ versus 1 for the negative class), the model flips and predicts everything as positive instead. This leads me to believe it is overfitting, but all my attempts to resolve it have run into issues.

import tensorflow as tf
from tensorflow import keras
classNames = ["trainpos","trainneg"]
testclassNames = ["testpos", "test"]
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/trainup/',
    labels='inferred',
    label_mode='categorical',
    class_names=classNames,
    color_mode='grayscale',
    batch_size=32,
    image_size=(256, 256),
    shuffle=True,
    seed=123,
    validation_split=0.2,
    subset="training",
    interpolation='gaussian',
    follow_links=False,
)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/trainup/',
    labels='inferred',
    label_mode='categorical',
    class_names=classNames,
    color_mode='grayscale',
    batch_size=32,
    image_size=(256, 256),
    shuffle=True,
    seed=23,
    validation_split=0.2,
    subset="validation",
    interpolation='gaussian',
    follow_links=False,
)

test_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/testup/',
    labels='inferred',
    label_mode='categorical',
    class_names=testclassNames,
    color_mode='grayscale',
    batch_size=32,
    image_size=(256, 256),
    shuffle=True,
    interpolation='gaussian',
    follow_links=False,
)

AUTOTUNE = tf.data.experimental.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

model = tf.keras.Sequential([
  tf.keras.layers.experimental.preprocessing.Rescaling(1./255, input_shape=(256, 256, 1)),
  tf.keras.layers.Conv2D(16, 4, padding='same', activation='relu'),
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.Conv2D(32, 4, padding='same', activation='relu'),
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(2)
])

opt = keras.optimizers.Adam(learning_rate=0.0001)
model.compile(optimizer=opt,
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

class_weight = {0: 29, 1: 1}
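# note: with label_mode='categorical' and the class_names above, index 0 is 'trainpos' and
# index 1 is 'trainneg', so this weights the rare positive class 29x; a common balanced
# heuristic is weight_i = n_total / (n_classes * n_i), which here gives roughly {0: 21.9, 1: 0.51}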

history = model.fit(
  train_ds,
  validation_data=val_ds,
  epochs=5,
   class_weight=class_weight
)
test_loss, test_accuracy = model.evaluate(test_ds)
print("Test Loss: ", test_loss)
print("Test Accuracy: ", test_accuracy)

19/19 [==============================] - 7s 376ms/step - loss: 3.4121 - accuracy: 0.5000
Test Loss:  3.4121198654174805
Test Accuracy:  0.5
I have tried learning rates between 0.1 and 0.00001, adding and removing epochs, and switching the optimizer to SGD. Attempting to subscript test_ds gave me an error that a BatchDataset can't be subscripted; iterating it instead showed me ~19 tensors of 32 images each (the last with about 25). I then wanted to predict each of these images individually, because it looked like the model was grouping all 32 (or 25) together and predicting based on that, but that led me down rabbit holes I haven't come out of with results. I've tried many other things I can't fully remember, mostly tweaking the model itself or adding data augmentation (I am using TensorFlow 2.3, as this is for a class with a repeating assignment, so the data augmentation is limited to what the 2.3 docs offer — mostly just vertical and horizontal flips from what I can tell).
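For reference, here is a minimal sketch of getting a per-image prediction out of the batched test_ds (assuming the model and test_ds defined above; a BatchDataset is iterated rather than subscripted, and the model still scores each image in a batch independently):

import numpy as np
from sklearn.metrics import confusion_matrix

y_true, y_pred = [], []
for images, labels in test_ds:                       # each element is one (images, labels) batch
    scores = model.predict(images)                   # shape (batch_size, 2); raw logits here
    y_pred.extend(np.argmax(scores, axis=1))         # argmax of logits == argmax of softmax
    y_true.extend(np.argmax(labels.numpy(), axis=1)) # one-hot labels -> class indices

print(confusion_matrix(y_true, y_pred))              # rows = true class, columns = predicted class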


kuarbcqp1#

Please share the training loss reported for each epoch; accuracy would help too. The training loss would show whether the model is learning anything at all.
As it stands, both the training and test losses are high at the last epoch, which suggests the model is underfitting. Overfitting usually shows up as a high test loss together with a low training loss.
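As a concrete sketch (using the history object that model.fit already returned in the question), the per-epoch numbers can be printed like this:

# history.history maps each metric name to a list with one value per epoch
for epoch, (tl, ta, vl, va) in enumerate(zip(history.history['loss'],
                                             history.history['accuracy'],
                                             history.history['val_loss'],
                                             history.history['val_accuracy']), start=1):
    print(f"epoch {epoch}: train loss {tl:.4f}, train acc {ta:.4f}, val loss {vl:.4f}, val acc {va:.4f}")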


ioekq8ef2#

The best approach is to deal with the imbalance from the start. You have 467 positive images, which is enough for a model to learn from, so randomly select 467 negative images from the ~20,000 available. This is called undersampling, and it works quite well. Another approach is to combine undersampling with image augmentation. Sample code for this is shown below: it limits the number of images in the negative class to 1000, then creates 533 augmented images and adds them to the positive class directory. Note that the code deletes images from the negative class directory and adds augmented images to the positive class directory, so you may want to back up both directories before running it, so the original data remains recoverable. In the demo code I had 1263 images in the negative class directory and 467 images in the positive class directory; I tested the code and it works as expected. Also, if you are running a notebook on Kaggle, the code below will not work as-is, because you cannot change the data in the input directories. In that case, first copy the input directories into the Kaggle working directory and point these paths at the copies.

!pip install -U albumentations
import os
import numpy as np
import cv2
import albumentations as A
from tqdm import tqdm

def get_augmented_image(image):  # returns an augmented version of the input image
    # see the albumentations documentation at https://albumentations.ai/docs/getting_started/image_augmentation/
    # for the various augmentations available; the ones below are examples
    width = int(image.shape[1] * .8)
    height = int(image.shape[0] * .8)
    transform = A.Compose([
        A.HorizontalFlip(p=.5),
        A.RandomBrightnessContrast(p=.5),
        A.RandomGamma(p=.5),
        A.RandomCrop(width=width, height=height, p=.25)])
    return transform(image=image)['image']

negative_limit = 1000
negative_dir_path = r'C:\Temp\data\trainup\negative'  # path to directory holding the negative images
positive_dir_path = r'C:\Temp\data\trainup\positive'  # path to directory holding the positive images
negative_file_list = os.listdir(negative_dir_path)
positive_file_list = os.listdir(positive_dir_path)
sampled_negative_file_list = np.random.choice(negative_file_list, size=negative_limit, replace=False)
for f in tqdm(negative_file_list, ncols=120, unit='files', colour='blue', desc='deleting excess neg files'):  # leaves only 1000 images in the negative directory
    if f not in sampled_negative_file_list:
        fpath = os.path.join(negative_dir_path, f)
        os.remove(fpath)
# now create augmented images
delta = negative_limit - len(os.listdir(positive_dir_path))  # number of augmented images needed to balance the dataset
sampled_positive_image_list = np.random.choice(positive_file_list, delta, replace=True)  # replace=True because delta > number of positive images
i = 0
for f in tqdm(sampled_positive_image_list, ncols=120, unit='files', colour='blue', desc='creating augmented images'):  # creates augmented images and stores them in the positive image directory
    fpath = os.path.join(positive_dir_path, f)
    img = cv2.imread(fpath)
    dest_file_name = 'aug' + str(i) + '-' + f  # filename with a unique numeric prefix
    dest_path = os.path.join(positive_dir_path, dest_file_name)  # store augmented images with a numeric prefix in the filename
    augmented_image = get_augmented_image(img)
    cv2.imwrite(dest_path, augmented_image)
    i += 1
# when these loops are done, the negative image directory will have 1000 images
# and the positive image directory will also have 1000 images, 533 of which are augmented

In your code, change

tf.keras.layers.Dense(2)

to

tf.keras.layers.Dense(2, activation='softmax')

and remove from_logits=True from the loss in model.compile.
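Putting the suggested change together with the model from the question, the result would look like the sketch below. Use either a softmax activation with the default loss, or a linear output with from_logits=True, never both at once:

import tensorflow as tf
from tensorflow import keras

model = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.Rescaling(1./255, input_shape=(256, 256, 1)),
    tf.keras.layers.Conv2D(16, 4, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 4, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax')   # now outputs probabilities, not logits
])

model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0001),
              loss=tf.keras.losses.CategoricalCrossentropy(),  # from_logits defaults to False
              metrics=['accuracy'])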
