After training for a while, the model was evaluated once; CPU usage suddenly spiked, and then the process crashed with an out-of-memory error. I'm running on Baidu's CodeLab platform.
The crash log:

```
terminate called without an active exception
C++ Traceback (most recent call last):
No stack trace in paddle, may be caused by external reasons.

Error Message Summary:
FatalError: Process abort signal is detected by the operating system.
  [TimeInfo: *** Aborted at 1648746204 (unix time) try "date -d @1648746204" if you are using GNU date ***]
  [SignalInfo: *** SIGABRT (@0x3e800000db5) received by PID 3509 (TID 0x7fbd57fff700) from PID 3509 ***]
```
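The `[TimeInfo]` line encodes the crash time as a unix timestamp; as the message itself suggests, GNU `date` can decode it (assuming a GNU/Linux environment such as CodeLab's):

```shell
# Decode the abort timestamp from the Paddle crash log.
# -u prints UTC so the result is reproducible across machines:
date -u -d @1648746204
# Without -u the same instant is shown in the local timezone:
date -d @1648746204
```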
Training code:

```python
model_G = init_model(args.num_classes)    # generator
model_D = init_model_D(args.num_classes)  # discriminator
iters = args.iters
semi_val_step = args.semi_val_step
total_times = args.iters * len(train_loader)  # total number of updates
semi_start = args.semi_start
# the training schedule can be configured here
scheduler_G = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=args.lr, T_max=total_times)
optimizer_G = paddle.optimizer.Adam(learning_rate=scheduler_G, parameters=model_G.parameters())
scheduler_D = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=args.lr, T_max=total_times)
optimizer_D = paddle.optimizer.Adam(learning_rate=scheduler_D, parameters=model_D.parameters())
be_loss = BCEWithLogitsLoss2d()
ce_loss = CrossEntropyLoss()
train_loader_iter_G = enumerate(train_loader)  # labeled data for training G
train_loader_iter_D = enumerate(train_loader)  # labeled data for training D
semi_loader_iter = enumerate(semi_loader)      # iterator over the unlabeled dataset
real_label = 1
fake_label = 0
optimizer_G.clear_grad()
optimizer_D.clear_grad()
with LogWriter(logdir="./log") as writer:
    for epoch in range(iters):
        # train the generator first, freezing the discriminator's gradients
        semi_cro_loss = 0  # unsupervised cross-entropy loss
        semi_gan_loss = 0  # unsupervised adversarial loss
        cro_loss = 0       # supervised cross-entropy loss
        gan_loss = 0       # supervised adversarial loss
        for param in model_D.parameters():
            param.stop_gradient = True
        if epoch > 1000:
            # Unsupervised stage: up to semi_start, push the generator to fool the
            # discriminator on unlabeled data; afterwards, update it from the
            # discriminator's confidence map.
            try:
                _, (inputs, labels) = next(semi_loader_iter)  # fetch unlabeled data
            except StopIteration:
                semi_loader_iter = enumerate(semi_loader)  # restart the unlabeled iterator
                _, (inputs, labels) = next(semi_loader_iter)
            b, c, h, w = inputs.shape
            pred = model_G(inputs)[0]    # [batch, num_classes, h, w]
            semi_G_pred = pred.detach()  # detached from model_G; used later to train the discriminator
            if epoch < args.semi_start:
                semi_ignore_mask = (paddle.ones([b, h, w]) != 1)
                semi_labels = make_gan_label(1, semi_ignore_mask)    # [batch, h, w]
                semi_gan_loss = be_loss(model_D(pred), semi_labels)  # adversarial loss on unlabeled data
            else:
                pred = model_G(inputs)[0]             # [batch, num_classes, h, w]
                labels = paddle.argmax(pred, axis=1)  # [batch, h, w]
                b, _, h, w = pred.shape
                G_pred = model_D(pred)  # [batch, 1, h, w], pseudo-labels from the discriminator
                G_pred = nn.functional.sigmoid(G_pred)  # probability map
                g_ignore_mask = (G_pred > args.mask_T).squeeze(axis=1)
                ignore_255 = paddle.ones(g_ignore_mask.shape, dtype='int64') * 255
                t_labels = paddle.where(g_ignore_mask, ignore_255, labels)
                semi_cro_loss = ce_loss(pred, t_labels) * args.lambda_semi
        # supervised stage
        try:
            _, (inputs, labels) = next(train_loader_iter_G)  # fetch labeled data
        except StopIteration:
            train_loader_iter_G = enumerate(train_loader)
            _, (inputs, labels) = next(train_loader_iter_G)
        b, c, h, w = inputs.shape
        pred = model_G(inputs)[0]  # [batch, num_classes, h, w]
        G_pred = pred.detach()
        cro_loss = ce_loss(pred, labels.astype('int64'))  # supervised loss
        t_labels = paddle.ones(labels.shape, dtype='int64')  # all-"real" targets, [batch, h, w]
        gan_loss = be_loss(model_D(pred), t_labels) * args.lambda_adv
        loss_seg = semi_cro_loss + semi_gan_loss + cro_loss + gan_loss
        writer.add_scalar(tag="supervised adversarial loss", step=epoch, value=gan_loss.numpy()[0])
        writer.add_scalar(tag="supervised cross-entropy loss", step=epoch, value=cro_loss.numpy()[0])
        writer.add_scalar(tag="total_loss", step=epoch, value=loss_seg.numpy()[0])
        loss_seg.backward()
        # unfreeze the discriminator and train it
        for param in model_D.parameters():
            param.stop_gradient = False
        loss_D = 0
        f_labels = paddle.zeros([b, h, w], dtype='int64')
        t_labels = paddle.ones([b, h, w], dtype='int64')
        # fake-label loss first
        if args.use_semi:  # optionally use the generator's unlabeled predictions to update the discriminator
            semi_D_pred = model_D(semi_G_pred).squeeze(axis=1)  # [batch, h, w]
            loss_D += be_loss(semi_D_pred, f_labels)
        D_pred = model_D(G_pred).squeeze(axis=1)
        loss_D += be_loss(D_pred, f_labels)
        # real-label loss
        try:
            _, (_, inputs) = next(train_loader_iter_D)  # fetch labeled data
        except StopIteration:
            train_loader_iter_D = enumerate(train_loader)
            _, (_, inputs) = next(train_loader_iter_D)
        D_pred = model_D(one_hot(inputs, args))
        loss_D += be_loss(D_pred, t_labels)
        writer.add_scalar(tag="discriminator total_loss", step=epoch, value=loss_D.numpy()[0])
        # update both networks
        optimizer_G.step()
        optimizer_G.clear_grad()
        scheduler_G.step()
        loss_D.backward()
        optimizer_D.step()
        optimizer_D.clear_grad()
        scheduler_D.step()
        writer.add_scalar(tag="lr", step=epoch, value=scheduler_D.get_lr())
        if epoch % semi_val_step == 0 and epoch != 0:
            print("{} times, start eval......".format(epoch // semi_val_step))
            miou = eval(args, val_loader, model_G)
            # model_G.train()
            writer.add_scalar(tag="miou", step=epoch // semi_val_step, value=miou)
            print("{} times, eval done, miou is: {}".format(epoch // semi_val_step, miou))
```
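The try/except pattern used above to restart an exhausted loader appears three times in the training loop; it can be factored into a small generator helper. This is a sketch — the `cycle_loader` name and the plain list standing in for a Paddle `DataLoader` are illustrative, not from the original code:

```python
def cycle_loader(loader):
    """Yield (index, batch) forever, restarting the loader when it is exhausted."""
    while True:
        for item in enumerate(loader):
            yield item

# Usage with a plain list standing in for a DataLoader:
loader = [("img0", "lbl0"), ("img1", "lbl1")]
it = cycle_loader(loader)
batches = [next(it) for _ in range(5)]  # wraps around after two batches
```

With this helper, each `next(...)` site in the loop shrinks to a single line and the rebinding bugs (restarting the wrong iterator in an `except` branch) cannot occur.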
Evaluation code:

```python
def eval(args, dataloader, model):
    metric = SegmentationMetric(args.num_classes)
    with paddle.no_grad():
        for i, (inputs, labels) in enumerate(dataloader):
            pred = model(inputs)[0]                 # [batch, num_classes, h, w]
            pred = np.argmax(pred.numpy(), axis=1)  # [batch, h, w]
            pred = pred.reshape([-1])
            labels = labels.numpy().reshape([-1])
            metric.addBatch(pred, labels)
    miou = metric.meanIntersectionOverUnion()
    return miou
```
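`SegmentationMetric` is not included in the post. For reference, here is a minimal numpy sketch of the confusion-matrix mIoU it presumably computes; the class and method names are chosen to mirror the call sites above, and the actual implementation may differ:

```python
import numpy as np

class SegmentationMetricSketch:
    def __init__(self, num_classes):
        self.num_classes = num_classes
        self.confusion = np.zeros((num_classes, num_classes), dtype=np.int64)

    def addBatch(self, pred, label):
        # Accumulate a confusion matrix from flattened prediction/label arrays,
        # skipping out-of-range labels (e.g. the ignore index 255).
        mask = (label >= 0) & (label < self.num_classes)
        idx = self.num_classes * label[mask].astype(np.int64) + pred[mask]
        counts = np.bincount(idx, minlength=self.num_classes ** 2)
        self.confusion += counts.reshape(self.num_classes, self.num_classes)

    def meanIntersectionOverUnion(self):
        inter = np.diag(self.confusion)                                        # per-class TP
        union = self.confusion.sum(axis=0) + self.confusion.sum(axis=1) - inter
        iou = inter / np.maximum(union, 1)  # clamp to avoid division by zero
        return float(iou.mean())
```

For example, with two classes, `pred = [0, 1, 1, 0]` and `label = [0, 1, 0, 0]` give per-class IoUs of 2/3 and 1/2, hence an mIoU of 7/12.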
3 Answers
j0pj023g1#
Hi! We've received your issue; please be patient while we respond. We will arrange for technicians to answer your questions as soon as possible. Please make sure that you have posted enough information to demonstrate your request (a clear problem description, reproduction code, environment & version, and the error message). You may also check the API docs, FAQ, GitHub issues, and the AI community to get an answer. Have a nice day!
9nvpjoqh2#
Hello, the issue has been received!
Further triage or a fix requires us to reproduce the problem. Could you provide the complete runnable code (the dataset part seems to be missing; if the dataset can't be shared, try replacing it with random tensors of the same shape, e.g. paddle.ones(), and check whether the problem still occurs) along with the steps to reproduce it?
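The suggestion above — swap the real dataset for constant tensors of the same shape — might be sketched as follows. The shapes and the `fake_loader` helper are assumptions; in the actual Paddle script one would build batches with `paddle.ones` instead of numpy:

```python
import numpy as np

def fake_loader(num_batches, batch_size=2, num_classes=19, h=64, w=64):
    """Stand-in dataset: constant images/labels with the same shapes as the real data."""
    for _ in range(num_batches):
        inputs = np.ones((batch_size, 3, h, w), dtype=np.float32)  # paddle.ones([b, 3, h, w]) in the real script
        labels = np.zeros((batch_size, h, w), dtype=np.int64)      # dummy segmentation labels
        yield inputs, labels

# Drive the training loop with this in place of train_loader / semi_loader:
batches = list(fake_loader(3))
```

If the crash still reproduces with constant data, the dataset pipeline can be ruled out as the cause.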
x759pob23#
Hello, it runs fine on my local machine, but on the cloud GPU it crashed after 200 epochs.
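To narrow down where the memory grows on the cloud machine, the process's peak resident memory can be logged around each evaluation using only the standard library. This is a diagnostic sketch, not part of the original script; note that on Linux `ru_maxrss` is reported in kilobytes (on macOS it is bytes):

```python
import resource

def peak_rss_mb():
    """Peak resident set size of the current process, in MB (Linux: ru_maxrss is in KB)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

# e.g. around the eval call in the training loop:
# print("before eval: peak RSS {:.1f} MB".format(peak_rss_mb()))
# miou = eval(args, val_loader, model_G)
# print("after eval:  peak RSS {:.1f} MB".format(peak_rss_mb()))
```

If the logged peak jumps sharply at every evaluation, that points at the eval path rather than the training step.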