Paddle Cannot find gradient of variable generated_tensor_15@GRAD

wfauudbj · posted 2022-11-05 in: Other
  • Version & environment info:

   1) PaddlePaddle version: 2.2.1
   2) CPU: AI Studio platform
   3) GPU: AI Studio platform
   4) System environment: AI Studio platform

  • Error:

Traceback (most recent call last):
  File "train.py", line 366, in <module>
    main()
  File "train.py", line 339, in main
    p=p)  # encoder_optimizer=encoder_optimizer,
  File "train.py", line 90, in train
    loss.backward()
  File "", line 2, in backward
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
    return wrapped_func(*args, **kwargs)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py", line 229, in __impl__
    return func(*args, **kwargs)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/varbase_patch_methods.py", line 249, in backward
    framework._dygraph_tracer())
RuntimeError: (NotFound) Cannot find gradient of variable generated_tensor_15@GRAD
  [Hint: Expected iter != accumulators_.end() == true, but received iter != accumulators_.end():0 != true:1.] (at /paddle/paddle/fluid/imperative/basic_engine.cc:445)

ljsrvy3e #1

Hi! We've received your issue; please be patient while we respond. We will arrange for technicians to answer your questions as soon as possible. Please make sure you have posted enough information to demonstrate your request. You may also check out the API docs, FAQ, GitHub issues, and the AI community to get an answer. Have a nice day!

yzxexxkh #2

loss.backward(retain_graph=True)
I also tried setting it like this; it still fails.
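For context, retain_graph=True only matters when backward() runs more than once over the same graph. A minimal sketch of that pattern, using a made-up toy model rather than the poster's code:

import paddle

# Toy model purely for illustration; any differentiable graph works.
x = paddle.randn([4, 8])
linear = paddle.nn.Linear(8, 1)
loss = linear(x).mean()

loss.backward(retain_graph=True)  # keep the graph alive for a second pass
loss.backward()                   # without retain_graph above, this would normally fail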

nxagd54h #3

Hi, could you provide a minimal reproducible example?

bnlyeluc #5

Hi, I looked at your code. It seems you want the encoder and decoder to update their parameters at different learning rates, so you use two optimizers to apply the gradients separately. Try updating all parameters with a single optimizer first and see whether the error still occurs (a sketch of that setup follows the example below). As for giving different parameters different learning rates, you can pass weight_attr when constructing the network to specify a per-parameter learning rate. The specified lr is multiplied by the optimizer's base lr to give that parameter's effective lr, e.g.:

self.conv = nn.Conv2D(1, 64, 3, stride=1, padding=1, weight_attr=paddle.ParamAttr(learning_rate=lr))
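A minimal sketch of the single-optimizer suggestion; the encoder/decoder layers below are placeholders, not the code from the report:

import paddle
import paddle.nn as nn

# Placeholder modules standing in for the actual encoder/decoder.
encoder = nn.Linear(16, 8)
decoder = nn.Linear(8, 16)

# One optimizer owning the combined parameter list, instead of two optimizers.
optimizer = paddle.optimizer.Adam(
    learning_rate=1e-3,
    parameters=list(encoder.parameters()) + list(decoder.parameters()))

x = paddle.randn([4, 16])
loss = ((decoder(encoder(x)) - x) ** 2).mean()
loss.backward()
optimizer.step()
optimizer.clear_grad()

Per-parameter learning rates then come from weight_attr as above: with a base lr of 1e-3 and paddle.ParamAttr(learning_rate=0.1), that parameter's effective lr is 1e-4.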
ippsafx7 #6

Using the same optimizer for both the encoder and decoder did not solve it either; see this issue.
The error is raised when calling d_loss_cov.backward().
After downgrading from 2.2.1 to 2.1.2, the code runs fine.
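For reference, the failing two-optimizer pattern under discussion presumably looks something like the sketch below; the layer shapes, learning rates, and loss are guesses, not the actual train.py:

import paddle
import paddle.nn as nn

encoder = nn.Linear(16, 8)
decoder = nn.Linear(8, 16)

# Two optimizers with different learning rates, each owning a parameter subset.
enc_opt = paddle.optimizer.Adam(learning_rate=1e-3, parameters=encoder.parameters())
dec_opt = paddle.optimizer.Adam(learning_rate=1e-4, parameters=decoder.parameters())

x = paddle.randn([4, 16])
loss = ((decoder(encoder(x)) - x) ** 2).mean()
loss.backward()      # the call reported to raise NotFound on 2.2.1
enc_opt.step()
dec_opt.step()
enc_opt.clear_grad()
dec_opt.clear_grad()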

dfddblmv #7

@d2623587501 Hi, could you try 2.2.2 and check whether the code runs? In my testing, 2.2.2 no longer hits this backward bug.
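A quick way to confirm which wheel is active after upgrading (assuming a pip-managed environment):

# Upgrade first, e.g.: pip install paddlepaddle==2.2.2
import paddle
print(paddle.__version__)  # should print 2.2.2 after the upgrade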
