paddle 2: I have fine-tuned a pretrained model and obtained my own model. Why do I still have to load the original pretrained model when predicting? That takes a lot of GPU memory. Can I skip loading the original pretrained model and load only the model I trained myself?
import paddle
import paddlehub as hub
if __name__ == '__main__':
    model = hub.Module(name='resnet50_vd_imagenet_ssld',
                       label_list=["roses", "tulips", "daisy", "sunflowers", "dandelion"],
                       load_checkpoint='/PATH/TO/CHECKPOINT')
    result = model.predict(['flower.jpg'])
From: https://github.com/PaddlePaddle/PaddleHub/tree/release/v2.1/demo/image_classification
136M model.pdparams — the model I trained is only 136 MB, but the actual GPU usage is 1081.
3 answers
wnavrhmk1#
Hi! We've received your issue; please be patient while we arrange for a technician to answer it as soon as possible. Please make sure you have provided a clear problem description, reproduction code, environment & version, and error messages. You may also check the official API documentation, FAQ, historical GitHub issues, and the AI community for an answer. Have a nice day!
hof1towb2#
From the documentation you linked, you should only need to load the model obtained from fine-tuning in order to run prediction.
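A minimal sanity check, assuming Paddle 2.x and that the fine-tuned checkpoint was saved as a model.pdparams file (the path and file name below are placeholders): the checkpoint already contains every parameter of the network, so nothing from the original pretrained weights needs to be restored separately at predict time.

import paddle

# Placeholder path; point this at the .pdparams saved by your finetune run.
state_dict = paddle.load('/PATH/TO/CHECKPOINT/model.pdparams')
# Every layer's weights should already be present in the fine-tuned checkpoint.
print(len(state_dict), 'parameter tensors in the fine-tuned checkpoint')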
yrefmtwq3#
Also, the GPU memory actually used during prediction will not match the size of the parameter file. Much of it goes to storing intermediate computation results (Tensors), and the handles of third-party dependencies such as CUDA and cuDNN also take up a fair amount of memory.
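To see where the gap between the 136 MB parameter file and the reported usage comes from, a sketch like the following can help. It assumes Paddle >= 2.2, which provides paddle.device.cuda.memory_allocated / memory_reserved; the checkpoint path and the 'model.pdparams' file name are placeholders.

import os
import paddle
import paddlehub as hub

if __name__ == '__main__':
    model = hub.Module(name='resnet50_vd_imagenet_ssld',
                       label_list=["roses", "tulips", "daisy", "sunflowers", "dandelion"],
                       load_checkpoint='/PATH/TO/CHECKPOINT')
    model.predict(['flower.jpg'])

    # Size of the fine-tuned parameter file on disk (placeholder file name).
    params_mb = os.path.getsize('/PATH/TO/CHECKPOINT/model.pdparams') / 1024 ** 2
    print('parameter file on disk : %.0f MB' % params_mb)

    # GPU memory tracked by Paddle's allocator (Paddle >= 2.2). The difference
    # versus the file size is mostly intermediate activation Tensors plus workspace.
    print('allocated on GPU : %.0f MB' % (paddle.device.cuda.memory_allocated() / 1024 ** 2))
    print('reserved by allocator : %.0f MB' % (paddle.device.cuda.memory_reserved() / 1024 ** 2))

Note that nvidia-smi will typically report even more than memory_reserved, because the CUDA context and cuDNN handles are allocated outside Paddle's allocator.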