My U-Net predicts CT images from MRI scans. I have trained and validated the U-Net on concatenated 3D MRI/CT pairs as follows:
import csv
import tensorflow as tf

def train_valid_model(X, Y, x_val, y_val):
    s = X.shape
    model = UNet_model_2D(s[1], s[2], 1)  # s[1]=256, s[2]=256
    callbacks = [
        tf.keras.callbacks.TensorBoard(
            log_dir='logs',
            histogram_freq=1,
            write_graph=True,
            write_images=True,
        )
    ]
    # model.fit returns a History object whose '.history' attribute records
    # the training loss, metrics, validation loss, and validation metrics
    results = model.fit(
        x=X,                             # concatenated 3D MRIs
        y=Y,                             # concatenated 3D reference CTs
        batch_size=16,
        epochs=200,
        verbose=1,
        callbacks=callbacks,
        validation_data=(x_val, y_val),  # concatenated 3D MRIs/CTs
    )
    # look up the losses by name rather than by position in the dict
    train_loss = results.history['loss']
    val_loss = results.history['val_loss']
    # write/append the CSV log files
    with open('log_train_loss_TF_CT.csv', 'a') as f:
        csv.writer(f).writerow(train_loss)
    with open('log_val_loss_TF_CT.csv', 'a') as f:
        csv.writer(f).writerow(val_loss)
    model.save('pCT_2D_deep_large_batch16', save_format='tf')
Looking at the loss curves in TensorBoard, I see a good trade-off between further convergence and overfitting after about 60 epochs. I would therefore now like to predict the CTs using the model parameters/weights from epoch 60. How can I do that?
So far, I have the following approach:
import nibabel as nib
import tensorflow as tf

# load the trained & validated model
model_name = 'pCT_2D_deep_large_batch16'
model = tf.keras.models.load_model(model_name, compile=False)
# load the concatenated test MRIs
X_test = nib.load('test_MRIs.nii.gz').get_fdata()
# predict the sCTs
predicted_data = model.predict(X_test, verbose=1)
# save the predicted sCTs as a concatenated NIfTI file
image = nib.Nifti1Image(predicted_data, affine=None)
nib.save(image, 'predicted_sCTs.nii.gz')
In the Spyder console, the following appeared:
Is there a way to stop at epoch 60 of 200? Can anyone help me?
1 Answer
If the only saved model is the one from after 200 epochs, there is no direct way to retrieve the weights at epoch 60.
If retraining is an option, the simplest solution is to retrain with
model.fit(..., epochs=60, ...)
and check that the loss roughly matches the earlier run. In addition, Keras's ModelCheckpoint callback can save checkpoints during training, so that intermediate model weights are not lost.
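A minimal sketch of the ModelCheckpoint approach, using a tiny stand-in model in place of UNet_model_2D just to show the callback wiring (the filename pattern and the 3-epoch run are illustrative):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; in the real pipeline this would be UNet_model_2D(...).
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')

# Save the weights at the end of every epoch; the epoch number goes into the
# filename, so the epoch-60 weights remain available even if training runs to 200.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath='ckpt_epoch_{epoch:03d}.weights.h5',
    save_weights_only=True,
    save_freq='epoch',
)

X = np.random.rand(8, 4).astype('float32')
Y = np.random.rand(8, 1).astype('float32')
model.fit(X, Y, epochs=3, callbacks=[checkpoint_cb], verbose=0)

# Later, restore the weights from any intermediate epoch, e.g. epoch 2 here
# (epoch 60 in the original setting).
model.load_weights('ckpt_epoch_002.weights.h5')
```

Passing `save_best_only=True` together with `monitor='val_loss'` is a common variant that keeps only the checkpoint with the best validation loss instead of one file per epoch.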