TensorFlow InvalidArgumentError: required broadcastable shapes at loc(unknown)

ocebsuys · posted 2023-02-05 in: Other
**Background**

I am new to Python and machine learning. I just tried to set up a UNet from code I found on the internet and wanted to adapt it to the case I am working on. When trying to .fit the UNet to the training data, I received the following error:

InvalidArgumentError:  required broadcastable shapes at loc(unknown)
     [[node Equal (defined at <ipython-input-68-f1422c6f17bb>:1) ]] [Op:__inference_train_function_3847]

When I searched for it, I got plenty of results, but mostly for different errors.
What does this mean, and, more importantly, how can I fix it?

**Code which caused the error**

The context of this error is as follows: I want to segment images and label the different classes. I set up the directories "trn", "tst" and "val" for training, test and validation data. The dir_dat() function applies os.path.join() to get the full path to the respective data set. Each of the three folders has subdirectories for each class, labeled with integers. Within each folder, there are some .tif images for the respective class.
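A minimal sketch of such a helper, under the assumption that the data folders live below a common root (DATA_ROOT and its value are hypothetical, not from the original post):

import os

DATA_ROOT = "data"  # assumed project layout, adjust to the actual root

def dir_dat(subset):
    # join the data root with the subset folder name, e.g. dir_dat("trn")
    return os.path.join(DATA_ROOT, subset)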
I defined the following image data generators (training data is sparse, hence the augmentation):

import numpy as np
from tensorflow import keras as ks  # imports implied by the snippets below

classes = np.array([ 0,  2,  4,  6,  8, 11, 16, 21, 29, 30, 38, 39, 51])
bs = 15 # batch size

augGen = ks.preprocessing.image.ImageDataGenerator(rotation_range = 365,
                                                   width_shift_range = 0.05,
                                                   height_shift_range = 0.05,
                                                   horizontal_flip = True,
                                                   vertical_flip = True,
                                                   fill_mode = "nearest") \
    .flow_from_directory(directory = dir_dat("trn"),
                         classes = [str(x) for x in classes.tolist()],
                         class_mode = "categorical",
                         batch_size = bs, seed = 42)
    
tst_batches = ks.preprocessing.image.ImageDataGenerator() \
    .flow_from_directory(directory = dir_dat("tst"),
                         classes = [str(x) for x in classes.tolist()],
                         class_mode = "categorical",
                         batch_size = bs, shuffle = False)

val_batches = ks.preprocessing.image.ImageDataGenerator() \
    .flow_from_directory(directory = dir_dat("val"),
                         classes = [str(x) for x in classes.tolist()],
                         class_mode = "categorical",
                         batch_size = bs)

Then I set up the UNet based on this example, modifying some parameters to adapt it to this situation (multiple classes), namely the activation of the last layer and the loss function:

layer_in = ks.layers.Input(shape = (imgr, imgc, imgdim))
# convert pixel integer values to float
inVals = ks.layers.Lambda(lambda x: x / 255)(layer_in)

# Contraction path
c1 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(inVals)
c1 = ks.layers.Dropout(0.1)(c1)
c1 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c1)
p1 = ks.layers.MaxPooling2D((2, 2))(c1)

c2 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(p1)
c2 = ks.layers.Dropout(0.1)(c2)
c2 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c2)
p2 = ks.layers.MaxPooling2D((2, 2))(c2)
 
c3 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(p2)
c3 = ks.layers.Dropout(0.2)(c3)
c3 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c3)
p3 = ks.layers.MaxPooling2D((2, 2))(c3)
 
c4 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(p3)
c4 = ks.layers.Dropout(0.2)(c4)
c4 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c4)
p4 = ks.layers.MaxPooling2D(pool_size = (2, 2))(c4)
 
c5 = ks.layers.Conv2D(256, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(p4)
c5 = ks.layers.Dropout(0.3)(c5)
c5 = ks.layers.Conv2D(256, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c5)

# Expansive path 
u6 = ks.layers.Conv2DTranspose(128, (2, 2), strides = (2, 2), padding = "same")(c5)
u6 = ks.layers.concatenate([u6, c4])
c6 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(u6)
c6 = ks.layers.Dropout(0.2)(c6)
c6 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c6)
 
u7 = ks.layers.Conv2DTranspose(64, (2, 2), strides = (2, 2), padding = "same")(c6)
u7 = ks.layers.concatenate([u7, c3])
c7 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(u7)
c7 = ks.layers.Dropout(0.2)(c7)
c7 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c7)
 
u8 = ks.layers.Conv2DTranspose(32, (2, 2), strides = (2, 2), padding = "same")(c7)
u8 = ks.layers.concatenate([u8, c2])
c8 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(u8)
c8 = ks.layers.Dropout(0.1)(c8)
c8 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c8)
 
u9 = ks.layers.Conv2DTranspose(16, (2, 2), strides = (2, 2), padding = "same")(c8)
u9 = ks.layers.concatenate([u9, c1], axis = 3)
c9 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(u9)
c9 = ks.layers.Dropout(0.1)(c9)
c9 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
                            kernel_initializer = "he_normal", padding = "same")(c9)
 
out = ks.layers.Conv2D(1, (1, 1), activation = "softmax")(c9)
 
model = ks.Model(inputs = layer_in, outputs = out)
model.compile(optimizer = "adam", loss = "sparse_categorical_crossentropy", metrics = ["accuracy"])
model.summary()

Finally, I defined the callbacks and ran the training, which produced the error:

cllbs = [
    ks.callbacks.EarlyStopping(patience = 4),
    ks.callbacks.ModelCheckpoint(dir_out("Checkpoint.h5"), save_best_only = True),
    ks.callbacks.TensorBoard(log_dir = './logs'),# log events for TensorBoard
    ]

model.fit(augGen, epochs = 5, validation_data = val_batches, callbacks = cllbs)
**Full console output**

This is the full output when running the last line (in case it helps solve the issue):

trained = model.fit(augGen, epochs = 5, validation_data = val_batches, callbacks = cllbs)
Epoch 1/5
Traceback (most recent call last):

  File "<ipython-input-68-f1422c6f17bb>", line 1, in <module>
    trained = model.fit(augGen, epochs = 5, validation_data = val_batches, callbacks = cllbs)

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1183, in fit
    tmp_logs = self.train_function(iterator)

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\def_function.py", line 889, in __call__
    result = self._call(*args, **kwds)

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\def_function.py", line 950, in _call
    return self._stateless_fn(*args, **kwds)

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\function.py", line 3023, in __call__
    return graph_function._call_flat(

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\function.py", line 1960, in _call_flat
    return self._build_call_outputs(self._inference_function.call(

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\function.py", line 591, in call
    outputs = execute.execute(

  File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,

InvalidArgumentError:  required broadcastable shapes at loc(unknown)
     [[node Equal (defined at <ipython-input-68-f1422c6f17bb>:1) ]] [Op:__inference_train_function_3847]

Function call stack:
train_function

2admgd59 1#

I ran into this problem when the number of class labels did not match the output shape of the output layer.
For example, if there are 10 class labels and we define the output layer as:

output = tf.keras.layers.Conv2D(5, (1, 1), activation = "softmax")(c9)

then we get this error, since the number of class labels (10) does not equal the output shape (5).

Make sure the number of class labels matches the output shape of the output layer.
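A minimal sketch of a matching output layer for the 13 classes in the question (n_classes is an illustrative variable, not part of the original code):

n_classes = len(classes)  # 13 in the question's setup
out = ks.layers.Conv2D(n_classes, (1, 1), activation = "softmax")(c9)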


k2fxgqgv 2#

I found several problems here. This model was meant to be used for semantic segmentation on multiple classes (which is why I changed the output layer activation to "softmax" and set the "sparse_categorical_crossentropy" loss). Consequently, in the ImageDataGenerators, class_mode must be set to None and classes must not be provided. Instead, I need to feed the manually classified images in as y. I guess beginners make lots of beginner mistakes.
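A minimal sketch of this fix, assuming the ground-truth masks live in a parallel folder mirroring "trn" (the name "trn_msk" is hypothetical), with a shared seed so images and masks are augmented identically:

aug_args = dict(rotation_range = 365, width_shift_range = 0.05,
                height_shift_range = 0.05, horizontal_flip = True,
                vertical_flip = True, fill_mode = "nearest")

# class_mode = None makes the generators yield only the raw batches
img_gen = ks.preprocessing.image.ImageDataGenerator(**aug_args) \
    .flow_from_directory(directory = dir_dat("trn"), class_mode = None,
                         batch_size = bs, seed = 42)
msk_gen = ks.preprocessing.image.ImageDataGenerator(**aug_args) \
    .flow_from_directory(directory = dir_dat("trn_msk"), class_mode = None,
                         color_mode = "grayscale", batch_size = bs, seed = 42)

# pair each image batch with its mask batch as (x, y)
trn_batches = zip(img_gen, msk_gen)
model.fit(trn_batches, steps_per_epoch = len(img_gen), epochs = 5)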


mf98qq94 3#

Just add a Flatten() layer before the fully connected layer.
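A minimal sketch of what this suggestion looks like for a classification-style head (the Dense layer and n_classes are illustrative; the question's model is a segmentation UNet, so this may not apply directly):

x = ks.layers.Flatten()(c9)   # collapse the feature maps into one vector
out = ks.layers.Dense(n_classes, activation = "softmax")(x)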


zpjtge22 4#

I ran into the same problem because the number of classes (n_classes) I used in the model for the output layer differed from the actual number of classes in the labels/masks array. You have 13 classes, but your output layer outputs only 1. The best approach is to avoid hard-coding the number of classes and to pass a variable (such as n_classes) to the model instead, declaring this variable before calling the model, e.g. n_classes = y_Train.shape[-1] or n_classes = len(np.unique(y_Train)).
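A minimal sketch of that pattern (build_unet is a hypothetical wrapper around the model definition from the question; y_Train stands for the training masks):

n_classes = len(np.unique(y_Train))   # or y_Train.shape[-1] for one-hot masks
model = build_unet(input_shape = (imgr, imgc, imgdim), n_classes = n_classes)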


bybem2ql 5#

Try checking whether the inputs to the ks.layers.concatenate layers have matching dimensions. For example, for ks.layers.concatenate([u7, c3]), check that the tensors u7 and c3 have the same shape on every axis except the axis passed to ks.layers.concatenate (axis = -1, i.e. the last dimension, by default). To illustrate: if you call ks.layers.concatenate([u7, c3], axis = 0), then the dimensions of all axes except the first must match exactly, e.g. u7.shape = [3, 4, 5], c3.shape = [6, 4, 5].
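A small runnable illustration of that rule, using toy tensors rather than the question's actual layers:

import tensorflow as tf
from tensorflow import keras as ks

u7 = tf.zeros([3, 4, 5])
c3 = tf.zeros([6, 4, 5])

# every axis except axis 0 matches, so concatenating along axis 0 works
print(ks.layers.concatenate([u7, c3], axis = 0).shape)   # (9, 4, 5)
# with the default axis = -1, the first two axes would have to match instead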


wz8daaqr 6#

I faced the same error. Check all your concatenate and multiply layers, or any layer where a similar operation takes place.
This error shows up when the dimensions do not match and the model cannot perform the operation.
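One quick way to hunt for such mismatches is to print every layer's output shape (a generic debugging sketch, not part of the original answer):

for layer in model.layers:
    print(layer.name, layer.output_shape)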
