I have checked that the number of channels in my model matches the ImageDataGenerator, and RGB images are being passed into the model. I know this question is similar to another one, but I have tried those solutions and still cannot get past this error.
import os
import json
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense

# Build model
num_channels = 1
image_size = 720
num_labels = 49
model1 = Sequential()
model1.add(Conv2D(32, (3,3), input_shape = (image_size, image_size, num_channels)))
model1.add(Activation('relu'))
model1.add(Conv2D(32, (3,3)))
model1.add(Activation('relu'))
model1.add(MaxPooling2D(pool_size=(2,2)))
model1.add(Conv2D(64, (3,3)))
model1.add(Activation('relu'))
model1.add(Conv2D(64, (3,3)))
model1.add(Activation('relu'))
model1.add(MaxPooling2D(pool_size=(2,2)))
model1.add(Flatten())
model1.add(Dense(200))
model1.add(Activation('relu'))
model1.add(Dense(200))
model1.add(Activation('relu'))
model1.add(Dense(num_labels))
model1.save_weights("ckpt")
model1.load_weights("ckpt")
model1.summary()
# Load data into ImageDataGenerator for on-the-fly augmented images and fit the model
CWD = os.getcwd()
# print(train_dir_file_list[0])
TRAINING_DATA_PATH = os.path.join(CWD, 'campaign_data/data/')
print(os.path.join(CWD, 'campaign_data/data/'))
IMAGE_SIZE = 720
IMG_SHAPE = (IMAGE_SIZE, IMAGE_SIZE, 1)
TRAIN_BATCH_SIZE = 32
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    # rescale=1.0/127.5,  # from various posts, if resnet50.preprocess_input is used, do not rescale
    rotation_range=90.,
    shear_range=0.2,
    # for image data rescale as such
    # rescale=1.0/255,
    zoom_range=[0.8, 1.2],
    horizontal_flip=True,
    validation_split=0.2,
    preprocessing_function=tf.keras.applications.resnet50.preprocess_input)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# test_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
# preprocessing_function=tf.keras.applications.resnet50.preprocess_input)
# Next we take the images from our directory in batches with categorical class labels:
# flow_from_directory
train_generator = train_datagen.flow_from_directory(
    TRAINING_DATA_PATH,
    target_size=(IMAGE_SIZE, IMAGE_SIZE),
    color_mode='rgb',
    batch_size=TRAIN_BATCH_SIZE,
    class_mode='categorical',
    shuffle=True,
    subset='training',
    seed=42)
validation_generator = train_datagen.flow_from_directory(
    TRAINING_DATA_PATH,
    target_size=(IMAGE_SIZE, IMAGE_SIZE),
    color_mode='rgb',
    batch_size=TRAIN_BATCH_SIZE,
    class_mode='categorical',
    shuffle=True,
    subset='validation',
    seed=42)
# confirm the scaling works
batchX, batchY = train_generator.next()
print('Batch shape=%s, min=%.3f, max=%.3f' % (batchX.shape, batchX.min(), batchX.max()))
labels = train_generator.class_indices
print('\nclass_indices = ', labels)
labels_dict = dict((v,k) for k, v in labels.items())
print('\nlabels_dict = ', labels_dict)
print(train_generator.filenames[0:5])
with open('labels_dict.json', 'w') as f:
    json.dump(labels_dict, f)
METRICS = [
    tf.keras.metrics.TruePositives(name='tp'),
    tf.keras.metrics.FalsePositives(name='fp'),
    tf.keras.metrics.TrueNegatives(name='tn'),
    tf.keras.metrics.FalseNegatives(name='fn'),
    tf.keras.metrics.BinaryAccuracy(name='accuracy'),
    tf.keras.metrics.Precision(name='precision'),
    tf.keras.metrics.Recall(name='recall'),
    tf.keras.metrics.AUC(name='auc'),
    tf.keras.metrics.AUC(name='prc', curve='PR')  # precision-recall curve
]
model1.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
               loss='categorical_crossentropy',
               metrics=METRICS)
callbacks = [tf.keras.callbacks.TensorBoard(log_dir='./log/transer_learning_model', update_freq='batch'),
             tf.keras.callbacks.EarlyStopping(patience=4)]
print('Training model...')
# print(train_generator)
history = model1.fit(train_generator,
                     steps_per_epoch=train_generator.samples // TRAIN_BATCH_SIZE,
                     epochs=25,
                     validation_data=validation_generator,
                     validation_steps=validation_generator.samples // TRAIN_BATCH_SIZE,
                     callbacks=callbacks)
Here is the exact error [1]: https://i.stack.imgur.com/cF2Sa.png
At the bottom of the error is this text:
Node: 'sequential_8/conv2d_34/Relu'
Fused conv implementation does not support grouped convolutions for now.
[[{{node sequential_8/conv2d_34/Relu}}]] [Op:__inference_train_function_70418]
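One illustrative way to see the mismatch behind this error (assuming the code above has already run) is to compare the model's expected input shape with what the generator actually yields:

print(model1.input_shape)   # (None, 720, 720, 1) because num_channels = 1
batchX, batchY = train_generator.next()
print(batchX.shape)         # (32, 720, 720, 3) because color_mode='rgb'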
1 Answer
I think you need to change num_channels to 3, since your images are 720 x 720 x 3.
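A minimal sketch of that change, assuming the same image_size and layer stack as in the question (only the first layer is shown):

num_channels = 3  # the generators yield (720, 720, 3) batches because color_mode='rgb'
model1 = Sequential()
model1.add(Conv2D(32, (3, 3), input_shape=(image_size, image_size, num_channels)))
# ... remaining layers unchanged ...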
Also, you can train without specifying input_shape in the first layer, since it is not required; it will just create {input_channel_size} * {filter_size} kernels.
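A minimal sketch of that variant (model2 and the explicit build() call are illustrative, not from the question; the layer stack is assumed to be the same as above):

model2 = Sequential()
model2.add(Conv2D(32, (3, 3)))  # no input_shape: the channel count is inferred from the data
model2.add(Activation('relu'))
# ... remaining layers as in the question ...
# weights are only created once the model sees an input shape, e.g. via:
model2.build(input_shape=(None, 720, 720, 3))
model2.summary()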