Paddle: problem running paddle.Model under paddle 2.1

izkcnapc posted on 2021-11-30 in Java
Follow (0) | Answers (6) | Views (614)

AI Studio notebook environment.
Under paddle 2.0.2, using the paddle.Model API, paddle.Model.evaluate runs normally with batch_size=128;
but under paddle 2.1, the same call with batch_size=128 raises an error, even though GPU usage is not very high.
Code file: https://aistudio.baidu.com/aistudio/projectdetail/2020923
Runs normally under paddle 2.0.2 (version 7 of the notebook);
fails under paddle 2.1.0 (version 8 of the notebook) with batch_size=128.
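
For reference, a minimal runnable sketch of the kind of call that triggers this; the real network and dataset live in the AI Studio project linked above, so the toy dataset and network below are only stand-ins:

```python
# Minimal sketch, not the notebook's actual code: a toy dataset and network
# wrapped in paddle.Model, then evaluated the same way as in the linked project.
import numpy as np
import paddle
from paddle.io import Dataset


class RandomDataset(Dataset):
    """Stand-in for the real val_dataset used in the notebook."""
    def __init__(self, num_samples=512):
        self.num_samples = num_samples

    def __getitem__(self, idx):
        image = np.random.rand(3, 32, 32).astype('float32')
        label = np.random.randint(0, 10, (1,)).astype('int64')
        return image, label

    def __len__(self):
        return self.num_samples


val_dataset = RandomDataset()
net = paddle.nn.Sequential(
    paddle.nn.Flatten(),
    paddle.nn.Linear(3 * 32 * 32, 10),
)
model = paddle.Model(net)
model.prepare(metrics=paddle.metric.Accuracy(topk=(1, 5)))

# Runs under paddle 2.0.2; raises the error below under paddle 2.1.0.
model.evaluate(val_dataset, batch_size=128, num_workers=1)
```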

Error message:
——————————————————————————————
Eval begin...
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/tensor/creation.py:125: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
if data.dtype == np.object:
ERROR:root:DataLoader reader thread raised an exception!
Exception in thread Thread-6:
Traceback (most recent call last):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dataloader/dataloader_iter.py", line 482, in _get_data
data = self._data_queue.get(timeout=self._timeout)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/multiprocessing/queues.py", line 105, in get
raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dataloader/dataloader_iter.py", line 411, in _thread_loop
batch = self._get_data()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dataloader/dataloader_iter.py", line 498, in _get_data
"pids: {}".format(len(failed_workers), pids))
RuntimeError: DataLoader 1 workers exit unexpectedly, pids: 468

---------------------------------------------------------------------------
SystemError                               Traceback (most recent call last)
in
1 # Model evaluation
----> 2 model.evaluate(val_dataset, batch_size=128, num_workers=1)
3 # alt_gvt_large {'acc_top1': 0.8368, 'acc_top5': 0.96588}
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model.py in evaluate(self, eval_data, batch_size, log_freq, verbose, num_workers, callbacks)
1815 'metrics': self._metrics_name()})
1816
-> 1817 logs = self._run_one_epoch(eval_loader, cbks, 'eval')
1818
1819 cbks.on_end('eval', logs)
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model.py in _run_one_epoch(self, data_loader, callbacks, mode, logs)
1997 def _run_one_epoch(self, data_loader, callbacks, mode, logs={}):
1998 outputs = []
-> 1999 for step, data in enumerate(data_loader):
2000 # data might come from different types of data_loader and have
2001 # different format, as following:
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dataloader/dataloader_iter.py in __next__(self)
583
584 if in_dygraph_mode():
--> 585 data = self._reader.read_next_var_list()
586 data = _restore_batch(data, self._structure_infos.pop(0))
587 else:
SystemError: (Fatal) Blocking queue is killed because the data reader raises an exception.
[Hint: Expected killed_ != true, but received killed_:1 == true:1.] (at /paddle/paddle/fluid/operators/reader/blocking_queue.h:166)

bfnvny8b1#

Hi! We've received your issue; please be patient while we arrange technicians to answer your questions as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment & version, and the error message. You may also check the official API docs, FAQ, Github Issues, and the AI community to look for an answer. Have a nice day!

w8ntj3qf2#

This is a known issue. As a workaround, you can set the number of workers to a value greater than 0 for now. It will be fixed in the next release.
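
A one-line sketch of the suggested workaround, reusing the model and val_dataset names from the sketch in the question (num_workers is the paddle.Model.evaluate parameter visible in the traceback):

```python
# Keep everything else unchanged and pass a worker count greater than 0,
# as suggested above, to bypass the known issue.
model.evaluate(val_dataset, batch_size=128, num_workers=2)
```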

wbrvyc0a3#

That's not my only problem; setting num_workers=1 was just a temporary workaround. My other problem is that under paddle 2.1, batch_size=128 will not run at all, while it runs fine under paddle 2.0.2; under paddle 2.1 I have to drop it to 32 before it works. I also noticed that the data type returned by the DataLoader has changed to paddle.Tensor, and I'd like to know whether that matters as well.
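
If it helps narrow this down, here is a small sketch for checking the element types yielded under the current version (it reuses val_dataset from the earlier sketch; building a paddle.io.DataLoader directly here is an assumption, meant to mimic the loader that model.evaluate constructs internally):

```python
# Sketch: inspect what a DataLoader yields, to see the type change described above.
from paddle.io import DataLoader

loader = DataLoader(val_dataset, batch_size=128, num_workers=1)
images, labels = next(iter(loader))
print(type(images), type(labels))  # observed as paddle.Tensor under paddle 2.1
```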

xtupzzrd4#

With the same number of workers, does the batch_size still have to differ for it to run?

5vf7fwbs5#

Under paddle 2.0 I did not pass the workers argument, so it used the default (0, presumably?). Under paddle 2.1 it would not run with 0 workers, so I set it to 1. Tested that way, batch_size=128 runs under paddle 2.0, but under 2.1 only 32 works; anything larger fails with the error above. The two configurations are sketched below.
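
In other words, the two configurations described above look roughly like this (same model and val_dataset as in the earlier sketch):

```python
# paddle 2.0.2: num_workers left at its default, batch_size=128 runs fine.
model.evaluate(val_dataset, batch_size=128)

# paddle 2.1.0: num_workers set to 1 (0 did not work here), and only
# batch_size values up to 32 succeed; larger values hit the error above.
model.evaluate(val_dataset, batch_size=32, num_workers=1)
```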

mwecs4sa6#

Then it may well be the difference in the number of workers that is causing it.
