How to use K-Fold cross-validation with a DenseNet121 model in Keras

omhiaaxx · posted 2023-04-06 in Other

I am using the pretrained DenseNet121 model to classify breast-cancer images. I split the dataset into training, test, and validation sets, and I want to apply k-fold cross-validation. I used cross_validation from the sklearn library, but when I run the code I get the error below. I have tried to fix it without success. Does anyone know how to solve this?

import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

# Pretrained DenseNet121 backbone, frozen for feature extraction
in_model = tf.keras.applications.DenseNet121(input_shape=(224, 224, 3),
                                             include_top=False,
                                             weights='imagenet', classes=2)
in_model.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))
x = in_model(inputs)
flat = Flatten()(x)
dense_1 = Dense(1024, activation='relu')(flat)
dense_2 = Dense(1024, activation='relu')(dense_1)
prediction = Dense(2, activation='softmax')(dense_2)
in_pred = Model(inputs=inputs, outputs=prediction)

validation_data = (valid_data, valid_labels)

in_pred.summary()
in_pred.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.0002),
                loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
                metrics=['accuracy'])
history = in_pred.fit(train_data, train_labels, epochs=3, batch_size=32,
                      validation_data=validation_data)

# This is the call that raises the TypeError below
model_result = cross_validation(in_pred, train_data, train_labels, 5)

Error:

TypeError: Cannot clone object '<keras.engine.functional.Functional object at 0x000001F82E17E3A0>'
(type <class 'keras.engine.functional.Functional'>): 
it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' method.

ijnw1ujt:

Since your model is not a scikit-learn estimator, you cannot use sklearn's built-in cross_validate on it directly. (An alternative that makes the model compatible with sklearn's API is sketched after the code below.)
However, you can use KFold to split the data into k folds yourself and compute metrics for each fold, either with TF's built-in model.evaluate or with sklearn's metric functions:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model
from sklearn.model_selection import KFold


def build_model():
    # Rebuild and recompile the model from scratch so each fold starts from
    # fresh weights; otherwise weights trained on one fold leak into the next.
    in_model = tf.keras.applications.DenseNet121(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet", classes=2
    )
    in_model.trainable = False

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = in_model(inputs)
    flat = Flatten()(x)
    dense_1 = Dense(1024, activation="relu")(flat)
    dense_2 = Dense(1024, activation="relu")(dense_1)
    prediction = Dense(2, activation="softmax")(dense_2)
    model = Model(inputs=inputs, outputs=prediction)

    model.compile(
        optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.0002),
        loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
        metrics=["accuracy"],
    )
    return model


validation_data = (valid_data, valid_labels)

kf = KFold(n_splits=2)  # increase n_splits to 5 for 5-fold CV
fold_scores = []

for i, (train_index, test_index) in enumerate(kf.split(train_data)):
    print(f"Fold {i}:")
    print(f"  Train: index={train_index}")
    print(f"  Test:  index={test_index}")

    # Fresh model for every fold
    in_pred = build_model()
    history = in_pred.fit(
        train_data[train_index],
        train_labels[train_index],
        epochs=3,
        batch_size=32,
        validation_data=validation_data,
    )

    # Evaluate on the held-out fold and keep the [loss, accuracy] pair
    fold_scores.append(in_pred.evaluate(train_data[test_index],
                                        train_labels[test_index]))

print("Mean metrics across folds:", np.mean(fold_scores, axis=0))
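
If you would rather keep sklearn's cross_validate / cross_val_score workflow, another option is to wrap the Keras model in a scikit-learn compatible estimator, for example with the SciKeras package. The following is only a minimal sketch under a few assumptions that are not part of the original question: SciKeras is installed (pip install scikeras), train_data / train_labels are NumPy arrays, and the hypothetical build_uncompiled_model helper below returns the same architecture but leaves compilation to SciKeras.

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model
from scikeras.wrappers import KerasClassifier
from sklearn.model_selection import cross_val_score


def build_uncompiled_model():
    # Hypothetical helper: same frozen DenseNet121 backbone and dense head,
    # but returned uncompiled so that SciKeras handles compilation itself.
    backbone = tf.keras.applications.DenseNet121(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet"
    )
    backbone.trainable = False
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = Flatten()(backbone(inputs))
    x = Dense(1024, activation="relu")(x)
    x = Dense(1024, activation="relu")(x)
    outputs = Dense(2, activation="softmax")(x)
    return Model(inputs=inputs, outputs=outputs)


clf = KerasClassifier(
    model=build_uncompiled_model,
    loss="sparse_categorical_crossentropy",  # expects integer class labels
    optimizer=tf.keras.optimizers.Adagrad,
    optimizer__learning_rate=0.0002,
    epochs=3,
    batch_size=32,
    verbose=0,
)

# If the labels are one-hot encoded, convert them to integer class ids first
int_labels = np.argmax(train_labels, axis=1)

scores = cross_val_score(clf, train_data, int_labels, cv=5)
print("Accuracy per fold:", scores, "mean:", scores.mean())

Because cross_val_score clones the estimator and SciKeras rebuilds the Keras model for every fold, each fold starts from fresh weights, which is the same effect the manual loop above achieves with build_model().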
