Problem releasing Paddle models

cwdobuhd  posted on 2021-12-07  in Java
Follow (0) | Answers (5) | Views (955)

I am using the Paddle Inference C++ version.

I am currently facing the following problem: my scenario requires repeatedly loading and releasing different models, but I have found that even after calling the interfaces below, the model resources are not released as long as the process does not exit.

if (m_Predictor)
    {
        // Release intermediate tensors
        m_Predictor->ClearIntermediateTensor();

        // Release all temporary tensors in the memory pool
        m_Predictor->TryShrinkMemory();
    }

Is there an example I could refer to that would meet my need to load and unload models quickly?

wr98u20j 1#

Hi! We've received your issue; please be patient while waiting for a response. We will arrange for technicians to answer your questions as soon as possible. Please make sure you have provided a clear problem description, reproduction code, environment & version, and error messages. You may also check the API documentation, FAQ, historical GitHub issues, and the AI community to find an answer. Have a nice day!

1cosmwyk 2#

Hi, TryShrinkMemory is an API for releasing temporary variables; the model itself is released in m_Predictor's destructor.
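
For reference, here is a minimal sketch of that pattern, assuming the predictor is held in a std::shared_ptr<paddle_infer::Predictor> as in the snippet above (the LoadModel helper and the file paths are only illustrative): dropping the last reference runs the Predictor's destructor and releases the model, after which a different model can be loaded with a fresh Config.

#include "paddle/include/paddle_inference_api.h"
#include <memory>
#include <string>

// Illustrative helper: build a Config and create a Predictor for one model.
std::shared_ptr<paddle_infer::Predictor> LoadModel(const std::string& model_file,
                                                   const std::string& params_file)
{
    paddle_infer::Config config;
    config.SetModel(model_file, params_file);
    config.EnableUseGpu(100, 0);
    return paddle_infer::CreatePredictor(config);
}

void ReloadExample()
{
    // Hypothetical model files, used only for illustration.
    auto predictor = LoadModel("model_a.pdmodel", "model_a.pdiparams");
    // ... run inference with predictor ...

    // Dropping the last shared_ptr reference invokes the Predictor's
    // destructor, which is where the model resources are released.
    predictor.reset();

    // A different model can then be loaded with a fresh Config/Predictor.
    predictor = LoadModel("model_b.pdmodel", "model_b.pdiparams");
}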

dtcbnfnu 3#

My test code is as follows:
#include "paddle/include/paddle_inference_api.h"
#include
#include <gflags/gflags.h>
#include <glog/logging.h>
#include
#include
#include
#include
using namespace std;

namespace paddle_infer
{

void PrepareConfig(Config* config, int num) 
{
    config->EnableUseGpu(100, 0);   // GPU 0, 100 MB initial memory pool
    config->DisableGlogInfo();
    config->EnableMemoryOptim();
    config->EnableGpuMultiStream();
    // TensorRT: 1 GB workspace, max batch size 4096, min subgraph size 3, FP32
    config->EnableTensorRtEngine(1 << 30, 4096, 3, paddle::AnalysisConfig::Precision::kFloat32, true, false);
    if (num == 0)
    {
        config->SetModel("F:/L1/MobileNetV3_small_x0_35/inference.pdmodel", "F:/L1/MobileNetV3_small_x0_35/inference.pdiparams");
        config->SetOptimCacheDir("F:/L1/MobileNetV3_small_x0_35/");
    }

    if (num == 1)
    {
        config->SetModel("F:/L2/MobileNetV3_small_x0_35/inference.pdmodel", "F:/L2/MobileNetV3_small_x0_35/inference.pdiparams");
        config->SetOptimCacheDir("F:/L2/MobileNetV3_small_x0_35/");
    }

    if (num == 2)
    {
        config->SetModel("F:/L4/MobileNetV3_small_x0_35/inference.pdmodel", "F:/L4/MobileNetV3_small_x0_35/inference.pdiparams");
        config->SetOptimCacheDir("F:/L4/MobileNetV3_small_x0_35/");
    }
}

void Run(std::shared_ptr<Predictor> predictor, Config* config)
{
    int batchsize = 4096;

    // Prepare the input data
    std::vector<int>   input_shape = { batchsize, 1, 124, 84 };
    std::vector<float> input_data(batchsize * 1 * 124 * 84, 1.0);
    std::vector<float> out_data;
    vector<double>     counts;
    for (size_t i = 0; i < 200; i++)
    {
        auto curTime      = std::chrono::steady_clock::now();
        int  input_num    = std::accumulate(input_shape.begin(), input_shape.end(), 1, std::multiplies<int>());
        auto input_names  = predictor->GetInputNames();
        auto input_tensor = predictor->GetInputHandle(input_names[0]);
        input_tensor->Reshape(input_shape);
        input_tensor->CopyFromCpu(input_data.data());

        // Run inference
        predictor->Run();

        // Fetch the inference output
        auto output_names  = predictor->GetOutputNames();
        auto output_tensor = predictor->GetOutputHandle(output_names[0]);
        std::vector<int> output_shape = output_tensor->shape();
        int  out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1, std::multiplies<int>());

        out_data.resize(out_num);
        output_tensor->CopyToCpu(out_data.data());

        auto   endTime = std::chrono::steady_clock::now();
        if (i > 10)
        {
            counts.push_back(std::chrono::duration<double, std::milli>(endTime - curTime).count());
        }

        if (i % 100 == 0)
        {
            int    gpu_id = config->gpu_device_id();
            string param_files = config->params_file();
            cout << "gpu id = " << gpu_id << ", " << "param  = " << param_files << ", " << "idx    = " << i << endl;
        }
    }

    double cout_ave = accumulate(counts.begin(), counts.end(), 0.0) / counts.size();
    std::cout << "batchsize = " << batchsize << ", gpu id = " << config->gpu_device_id() << " avg classify img time : " << cout_ave / batchsize << "ms!" << endl;
}

}  // namespace paddle_infer

int main(int argc, char** argv)
{
    paddle_infer::Config config[3];
    std::shared_ptr<paddle_infer::Predictor> predictor[3];
    for (size_t i = 0; i < 3; i++)
    {
        paddle_infer::PrepareConfig(&config[i], i);
        predictor[i] = paddle_infer::CreatePredictor(config[i]);
    }

    std::cout << "start running" << endl;
    auto curTime = std::chrono::steady_clock::now();
    std::vector<std::thread> threads;
    for (int i = 0; i < 3; ++i)
    {
        threads.emplace_back(paddle_infer::Run, predictor[i], &config[i]);
    }

    for (int i = 0; i < 3; ++i)
    {
        threads[i].join();
    }

    auto endTime  = std::chrono::steady_clock::now();
    auto lastTime = std::chrono::duration<double, std::milli>(endTime - curTime).count();
    std::cout << "end running, ave time = " << lastTime / (4096 * 600) << "ms!" << endl;

    return 0;
}

However, after the program finishes running, the following error appears:

C++ Traceback (most recent call last):

Not support stack backtrace yet.

Error Message Summary:

ExternalError: Cuda error(4), driver shutting down.
[Advise: Please search for the error code(4) on website( https://docs.nvidia.com/cuda/archive/9.0/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038 ) to get Nvidia's official solution about CUDA Error.] (at C:\home\workspace\Paddle_release5\paddle\fluid\platform\gpu_info.cc:275)

Could you please take a look and advise how to handle this?
