TensorFlow InternalError: Failed copying input tensor from CPU:0 to GPU:0 in order to run _EagerConst: Dst tensor is not initialized

v7pvogib · posted 2022-11-16

I am running TensorFlow cross-validation training code with 10 folds. The code runs inside a for loop, and in each iteration I have to call model.fit. The first fold runs fine, but after that the GPU memory fills up. Here is my for loop:

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

# kfold (10 splits), x_train and y_train are defined earlier in the notebook
acc_per_fold = []
loss_per_fold = []
fold_no = 1
for train, test in kfold.split(x_train, y_train):
    # Define the model architecture
    model = Sequential()
    model.add(Conv2D(32, kernel_size=(3,3), input_shape = x_train[0].shape, activation = "relu"))
    model.add(MaxPooling2D(2,2))
    model.add(Conv2D(32, kernel_size=(3,3), activation = "relu"))
    model.add(MaxPooling2D(2,2))

    model.add(Flatten())
    model.add(Dense(64, activation = "relu"))
    model.add(Dropout(0.1))
    model.add(Dense(32, activation = "tanh"))
    model.add(Dense(1, activation = "sigmoid"))

    # Compile the model
    model.compile(loss = "binary_crossentropy", 
              optimizer = tf.keras.optimizers.Adam(learning_rate = 0.001), 
              metrics = ["accuracy"])

    # Generate a print
    print('------------------------------------------------------------------------')
    print(f'Training for fold {fold_no} ...')
    # Fit data to model
    history = model.fit(np.array(x_train)[train], np.array(y_train)[train],
              batch_size=32,
              epochs=10,
              verbose=1)

    # Generate generalization metrics
    scores = model.evaluate(np.array(x_train)[test], np.array(y_train)[test], verbose=0)
    print(f"Score for fold {fold_no}: {model.metrics_names[0]} of {scores[0]}; {model.metrics_names[1]} of {scores[1]*100}%")
    acc_per_fold.append(scores[1] * 100)
    loss_per_fold.append(scores[0])

    # Increase fold number
    fold_no += 1

I also searched and found that using the numba library to release GPU memory is an option. It does work, but the Jupyter notebook kernel dies and I have to restart it, so that solution will not work in my case.
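For reference, this is roughly the numba approach I tried (a minimal sketch, assuming a single GPU at device index 0); closing the device destroys the CUDA context, so TensorFlow in the same kernel can no longer use the GPU, which is why the kernel effectively has to restart:

from numba import cuda

# Release the GPU memory held by the current CUDA context.
# Warning: this tears down the context, so the running notebook
# kernel cannot keep using the GPU afterwards.
cuda.select_device(0)  # assumption: GPU 0
cuda.close()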


j0pj023g #1

I ran into this problem a long time ago, and it did not go away even after reducing the batch size. My GPU is an RTX 3060 with 12 GB of RAM, while the same code worked on Google Colab Pro. There is, however, a workaround: you can use the gc library to clean up GPU memory after each iteration.

import gc

You can then place this statement inside the loop:

gc.collect()

Hopefully it will clean up the RAM after each iteration of the loop.
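As a rough illustration of where the call would go in the loop above (a sketch, not tested on the asker's setup; the del model and tf.keras.backend.clear_session() lines are common companions to gc.collect() and are not part of the original suggestion):

import gc

for train, test in kfold.split(x_train, y_train):
    model = Sequential()
    # ... build, compile, fit and evaluate as in the question ...

    # At the end of every fold, drop Python-side references and
    # trigger garbage collection so GPU memory can be reclaimed.
    del model                          # assumption: release the reference first
    tf.keras.backend.clear_session()   # assumption: also reset Keras graph state
    gc.collect()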
