TensorFlow Keras input error: "ValueError: The truth value of an array with more than one element is ambiguous. Use .any() or .all()"

Asked by zy1mlcev on 2023-02-13

I am trying to train and fine-tune a VGG16 model. My training dataset contains 35,990 images and my test dataset contains 3,720 images. During training I get the following error: "ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()"
Can anyone tell me what this means and how to fix it?

    • Shape of train1_data: (35990, 224, 224, 3); shape of test1_data: (3720, 224, 224, 3)
    • Code:
from keras.applications import VGG16
from keras.layers import Flatten, Dense, Dropout
from keras.models import Model
vgg_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

for layer in vgg_model.layers[:15]:
    layer.trainable = False
# Make sure you have frozen the correct layers
for i, layer in enumerate(vgg_model.layers):
    print(i, layer.name, layer.trainable)

x = vgg_model.output
x = Flatten()(x) 
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x) 
x = Dense(264, activation='relu')(x)
x = Dense(372, activation='softmax')(x) 
transfer_model = Model(inputs=vgg_model.input, outputs=x)

from keras.callbacks import ReduceLROnPlateau
from keras.callbacks import ModelCheckpoint
lr_reduce = ReduceLROnPlateau(monitor='val_accuracy', factor=0.6, patience=8, verbose=1, mode='max', min_lr=5e-5)
checkpoint = ModelCheckpoint('vgg16_finetune.h5', monitor='val_accuracy', mode='max', save_best_only=True, verbose=1)

from keras.optimizers import Adam
from tensorflow.keras import layers, models, Model, optimizers
learning_rate= 5e-5
transfer_model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=learning_rate), metrics=["accuracy"])
history = transfer_model.fit(train1_data, batch_size = 15, epochs=5, validation_data=test1_data, callbacks=[lr_reduce,checkpoint])
    • Here is the full traceback:
ValueError                                Traceback (most recent call last)
<ipython-input-26-8fee2cdfeca2> in <module>
      3 learning_rate= 5e-5
      4 transfer_model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=learning_rate), metrics=["accuracy"])
----> 5 history = transfer_model.fit(train1_data, batch_size = 15, epochs=5, validation_data=test1_data, callbacks=[lr_reduce,checkpoint])

~\Anaconda3\envs\OCR_ENV\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
   1156         # Prepare validation data.
   1157         do_validation = False
-> 1158         if validation_data:
   1159             do_validation = True
   1160             if len(validation_data) == 2:

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
    • How train1_data and test1_data are created:
import os
import cv2
import numpy as np

train_data = []
label = []

IMG_SIZE=32

minRange = np.array([0,138,67],np.uint8)  
maxRange = np.array([255,173,133],np.uint8) 

def create_testing_data():
    # CLASSES and DATADIR are defined elsewhere in the notebook
    for classs in CLASSES:
        path = os.path.join(DATADIR, classs)
        class_num = str(CLASSES.index(classs))  # NOTE: str labels cause the AttributeError further down
        for img in os.listdir(path):
            img_array = cv2.imread(os.path.join(path, img))
            new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
            new_array = np.array(new_array).astype('float32')
            new_array /= 255  # scale pixel values to [0, 1]

            train_data.append(new_array)
            label.append(class_num)


create_testing_data()
print(len(train_data))
print(np.shape(train_data))
    • Traceback after adding the train and test labels to fit():
from keras.optimizers import Adam
from tensorflow.keras import layers, models, Model, optimizers
learning_rate= 5e-5
transfer_model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=learning_rate), metrics=["accuracy"])
history = transfer_model.fit(train1_data, label, batch_size = 15, epochs=5, validation_data=(test1_data,label1), callbacks=[lr_reduce,checkpoint])



AttributeError                            Traceback (most recent call last)
<ipython-input-41-c850ac7c8c6d> in <module>
      3 learning_rate= 5e-5
      4 transfer_model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=learning_rate), metrics=["accuracy"])
----> 5 history = transfer_model.fit(train1_data, label, batch_size = 15, epochs=5, validation_data=(test1_data,label1), callbacks=[lr_reduce,checkpoint])

~\Anaconda3\envs\OCR_ENV\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
   1152             sample_weight=sample_weight,
   1153             class_weight=class_weight,
-> 1154             batch_size=batch_size)
   1155 
   1156         # Prepare validation data.

~\Anaconda3\envs\OCR_ENV\lib\site-packages\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    619                 feed_output_shapes,
    620                 check_batch_axis=False,  # Don't enforce the batch size.
--> 621                 exception_prefix='target')
    622 
    623             # Generate sample-wise weight values given the `sample_weight` and

~\Anaconda3\envs\OCR_ENV\lib\site-packages\keras\engine\training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
     97         data = data.values if data.__class__.__name__ == 'DataFrame' else data
     98         data = [data]
---> 99     data = [standardize_single_array(x) for x in data]
    100 
    101     if len(data) != len(names):

~\Anaconda3\envs\OCR_ENV\lib\site-packages\keras\engine\training_utils.py in <listcomp>(.0)
     97         data = data.values if data.__class__.__name__ == 'DataFrame' else data
     98         data = [data]
---> 99     data = [standardize_single_array(x) for x in data]
    100 
    101     if len(data) != len(names):

~\Anaconda3\envs\OCR_ENV\lib\site-packages\keras\engine\training_utils.py in standardize_single_array(x)
     32                 'Got tensor with shape: %s' % str(shape))
     33         return x
---> 34     elif x.ndim == 1:
     35         x = np.expand_dims(x, 1)
     36     return x

AttributeError: 'str' object has no attribute 'ndim'
    • Wrong target shape for the Dense layer. I removed the str from the train and test label inputs and converted them to NumPy arrays. Code:
from keras.optimizers import Adam
from tensorflow.keras import layers, models, Model, optimizers
learning_rate= 5e-5
transfer_model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=learning_rate), metrics=["accuracy"])
history = transfer_model.fit(train1_data, train1_label, batch_size = 15, epochs=5, validation_data=(test1_data,test1_label), callbacks=[lr_reduce,checkpoint])
    • Traceback:
ValueError                                Traceback (most recent call last)
<ipython-input-58-4564124fa363> in <module>
      3 learning_rate= 5e-5
      4 transfer_model.compile(loss="categorical_crossentropy", optimizer=Adam(lr=learning_rate), metrics=["accuracy"])
----> 5 history = transfer_model.fit(train1_data, train1_label, batch_size = 15, epochs=5, validation_data=(test1_data,test1_label), callbacks=[lr_reduce,checkpoint])

~\Anaconda3\envs\OCR_ENV\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
   1152             sample_weight=sample_weight,
   1153             class_weight=class_weight,
-> 1154             batch_size=batch_size)
   1155 
   1156         # Prepare validation data.

~\Anaconda3\envs\OCR_ENV\lib\site-packages\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    619                 feed_output_shapes,
    620                 check_batch_axis=False,  # Don't enforce the batch size.
--> 621                 exception_prefix='target')
    622 
    623             # Generate sample-wise weight values given the `sample_weight` and

~\Anaconda3\envs\OCR_ENV\lib\site-packages\keras\engine\training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    133                         ': expected ' + names[i] + ' to have ' +
    134                         str(len(shape)) + ' dimensions, but got array '
--> 135                         'with shape ' + str(data_shape))
    136                 if not check_batch_axis:
    137                     data_shape = data_shape[1:]

ValueError: Error when checking target: expected dense_10 to have 4 dimensions, but got array with shape (35990, 1)
    • Model summary:
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 25088)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               12845568  
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 264)               135432    
_________________________________________________________________
dense_3 (Dense)              (None, 372)               98580     
=================================================================
Total params: 27,794,268
Trainable params: 20,159,004
Non-trainable params: 7,635,264

gev0vcfq1#

According to the documentation, the fit() method accepts a tuple of the form (data, labels) for validation_data, but you are passing a bare NumPy array, so when Keras checks the validation data the truth-value comparison raises the exception. You are also not passing training labels to fit().
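For illustration, here is a minimal standalone reproduction of that check (a sketch of mine, not code from your notebook):

import numpy as np

arr = np.zeros((3, 224, 224, 3))  # stand-in for test1_data
try:
    if arr:  # effectively what `if validation_data:` does inside Keras
        pass
except ValueError as e:
    print(e)  # "The truth value of an array with more than one element is ambiguous..."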
Try changing the last line to

history = transfer_model.fit(train1_data, train1_labels, batch_size = 15, epochs=5, validation_data=(test1_data, test1_labels), callbacks=[lr_reduce,checkpoint])

and see whether that avoids the error you are getting.
In general, I recommend using tf.data.Dataset to handle datasets and feed fit(), as I mentioned in an answer to another question.
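A minimal sketch of that approach, assuming train1_labels/test1_labels are integer label arrays matching the image arrays above:

import tensorflow as tf

# batching is done on the Dataset, so batch_size is not passed to fit()
train_ds = (tf.data.Dataset.from_tensor_slices((train1_data, train1_labels))
            .shuffle(1024)
            .batch(15))
val_ds = tf.data.Dataset.from_tensor_slices((test1_data, test1_labels)).batch(15)

history = transfer_model.fit(train_ds, epochs=5, validation_data=val_ds,
                             callbacks=[lr_reduce, checkpoint])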

**EDIT:** As discussed in the comments, two more things need to change: use int labels instead of str

class_num = CLASSES.index(classs)

and use the sparse_categorical_crossentropy loss instead of categorical_crossentropy:

transfer_model.compile(loss="sparse_categorical_crossentropy", optimizer=Adam(lr=learning_rate), metrics=["accuracy"])
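Alternatively, if you want to keep loss="categorical_crossentropy", you can one-hot encode the integer labels instead; a sketch assuming label is the list built in create_testing_data() and 372 classes:

import numpy as np
from tensorflow.keras.utils import to_categorical

train1_labels = to_categorical(np.array(label, dtype="int32"), num_classes=372)
print(train1_labels.shape)  # (35990, 372), matching the 372-unit softmax output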

jogvjijk2#

Your final Dense layer has 372 nodes, so your output is 372 wide, and your loss is "categorical_crossentropy". Is your label1 one-hot encoded? Are you trying to do classification? How many classes do you have?
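A quick way to check which loss matches your labels (assuming train1_label is the array you pass to fit()):

import numpy as np

print(np.shape(train1_label))
# (35990, 372) -> one-hot labels: use loss="categorical_crossentropy"
# (35990,) or (35990, 1) -> integer labels: use loss="sparse_categorical_crossentropy"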
