Hello,
I followed your tutorial at https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/advanced_guide/inference_deployment/inference/build_and_install_lib_cn.html#id1 and compiled the library myself, for CPU, with Paddle 1.7 and without installing NCCL. make finished without problems, but the third_party/install directory under my PADDLE_ROOT contains:
As a result, when I use my self-compiled library to call a model for prediction, I get this error:
make[2]: *** No rule to make target '×××/lib/libmklml_intel.so', needed by 'mobilenet_test'. Stop.
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:95: recipe for target 'CMakeFiles/mobilenet_test.dir/all' failed
make[1]: *** [CMakeFiles/mobilenet_test.dir/all] Error 2
Makefile:103: recipe for target 'all' failed
make: *** [all] Error 2
run_impl.sh: 28: run_impl.sh: ./mobilenet_test: not found
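For later readers: the missing libmklml_intel.so usually means the inference library was configured without MKL, so third_party/install never received an mklml directory. A minimal sketch of reconfiguring and rebuilding with MKL enabled might look like the following; the build directory path is a placeholder, the option list is an assumption based on the 1.7 build docs, and you should keep whatever other options you used in your original build:

```shell
# Sketch only: reconfigure the Paddle build with MKL enabled so that
# third_party/install/mklml (including libmklml_intel.so) is produced.
cd /path/to/Paddle/build
cmake .. -DWITH_GPU=OFF \
         -DWITH_MKL=ON \
         -DON_INFER=ON \
         -DCMAKE_BUILD_TYPE=Release
make -j"$(nproc)"
# Regenerate the packaged inference library (fluid_inference).
make inference_lib_dist
```

After this, check that third_party/install now contains an mklml directory before rebuilding the demo.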
9 Answers
pb3s4cty1#
Hello, on Linux with CPU, using my self-compiled inference library, I am calling the crnn model from the PaddleOCR ultra-lightweight models. My calling code looks like this:
#include <gflags/gflags.h>
#include <glog/logging.h>
#include <functional>
#include <numeric>
#include <string>
#include <vector>
#include "paddle/include/paddle_inference_api.h"

namespace paddle {

void CreateConfig(AnalysisConfig* config, const std::string& model_dirname) {
  // Combined-parameters layout: <model_dir>/model and <model_dir>/params.
  config->SetModel(model_dirname + "/model",
                   model_dirname + "/params");
  config->DisableGpu();
  config->SwitchUseFeedFetchOps(false);
}

void RunAnalysis(int batch_size, std::string model_dirname) {
  // 1. Create the analysis config and the predictor.
  AnalysisConfig config;
  CreateConfig(&config, model_dirname);
  auto predictor = CreatePaddlePredictor(config);

  int channels = 3;
  int height = 224;
  int width = 224;
  int input_num = channels * height * width * batch_size;

  // 2. Prepare the input tensor (zero-initialized dummy data).
  float* input = new float[input_num]();
  auto input_names = predictor->GetInputNames();
  auto input_t = predictor->GetInputTensor(input_names[0]);
  input_t->Reshape({batch_size, channels, height, width});
  input_t->copy_from_cpu(input);

  // 3. Run inference.
  CHECK(predictor->ZeroCopyRun());

  // 4. Fetch the output.
  std::vector<float> out_data;
  auto output_names = predictor->GetOutputNames();
  auto output_t = predictor->GetOutputTensor(output_names[0]);
  std::vector<int> output_shape = output_t->shape();
  int out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1,
                                std::multiplies<int>());
  out_data.resize(out_num);
  output_t->copy_to_cpu(out_data.data());
  delete[] input;
}

}  // namespace paddle

int main() {
  paddle::RunAnalysis(1, "./ch_rec_mv3_crnn");
  return 0;
}
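For reference, a rough sketch of linking a snippet like this directly against the extracted 1.7 fluid_inference package. Every path and library name below is an assumption about the standard package layout (the official demos drive this through CMake instead), so adjust them to your own tree:

```shell
# PADDLE_LIB: root of the extracted fluid_inference package (assumed layout).
PADDLE_LIB=/path/to/fluid_inference
TP="$PADDLE_LIB/third_party/install"
g++ -std=c++11 ch_rec_mv3_crnn.cc -o ch_rec_mv3_crnn \
    -I"$PADDLE_LIB" \
    -L"$PADDLE_LIB/paddle/lib" -lpaddle_fluid \
    -L"$TP/mklml/lib" -lmklml_intel -liomp5 \
    -L"$TP/mkldnn/lib" -lmkldnn \
    -L"$TP/glog/lib" -lglog \
    -L"$TP/gflags/lib" -lgflags \
    -lpthread -ldl
```

At runtime the mklml and mkldnn lib directories also need to be on LD_LIBRARY_PATH (or copied next to the binary, as the demo scripts do).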
When I run run.sh, I get this error:
-- The CXX compiler identification is GNU 7.5.0
-- The C compiler identification is GNU 7.5.0
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ - works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc - works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
flags -std=c++11 -g
CMake Warning (dev) in CMakeLists.txt:
No cmake_minimum_required command is present. A line of code such as
should be added at the top of the file. The version specified may be lower
if you wish to support older CMake versions for this project. For more
information run "cmake --help-policy CMP0000".
This warning is for project developers. Use -Wno-dev to suppress it.
-- Configuring done
CMake Warning (dev) at CMakeLists.txt:52 (add_executable):
Policy CMP0003 should be set before this line. Add code such as
as early as possible but after the most recent call to
cmake_minimum_required or cmake_policy(VERSION). This warning appears
because target "ch_rec_mv3_crnn" links to some libraries for which the
linker must search:
and other libraries with known full path:
***/mklml/lib/libiomp5.so
***/mkldnn/lib/libmkldnn.so.0
CMake is adding directories in the second list to the linker search path in
case they are needed to find libraries from the first list (for backwards
compatibility with CMake 2.4). Set policy CMP0003 to OLD or NEW to enable
or disable this behavior explicitly. Run "cmake --help-policy CMP0003" for
more information.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
-- Build files have been written to:***/sample/inference--/build
Scanning dependencies of target ch_rec_mv3_crnn
[ 50%] Building CXX object CMakeFiles/ch_rec_mv3_crnn.dir/ch_rec_mv3_crnn.o
[100%] Linking CXX executable ch_rec_mv3_crnn
[100%] Built target ch_rec_mv3_crnn
cp: cannot overwrite non-directory './ch_rec_mv3_crnn' with directory '×××/sample/inference--/ch_rec_mv3_crnn'
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0702 15:58:55.758961 30577 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
terminate called after throwing an instance of 'paddle::platform::EnforceNotMet'
what():
C++ Call Stacks (More useful to developers):
0   std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::AnalysisPredictor::LoadProgramDesc()
3   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
4   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
5   std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
6   std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)
Error Message Summary:
Error: Cannot open file ./ch_rec_mv3_crnn/model at (×××/api/analysis_predictor.cc:664)
Aborted (core dumped)
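The "Cannot open file ./ch_rec_mv3_crnn/model" error usually just means the model path is wrong; note the earlier cp failure, which suggests ./ch_rec_mv3_crnn in the working directory is the built binary rather than the model directory. A small pre-flight check, sketched below, can make this fail with a clear message before the predictor is even created. The helper is hypothetical (not part of the Paddle API); it only verifies that the combined-parameters files, model and params, are readable:

```cpp
#include <fstream>
#include <string>

// Hypothetical helper: returns true only if both the combined "model"
// and "params" files under model_dir can be opened for reading, i.e.
// the layout that SetModel(dir + "/model", dir + "/params") expects.
bool ModelFilesReadable(const std::string& model_dir) {
  std::ifstream model(model_dir + "/model", std::ios::binary);
  std::ifstream params(model_dir + "/params", std::ios::binary);
  return model.good() && params.good();
}
```

Calling this at the top of RunAnalysis and bailing out with a readable message when it returns false avoids the core dump shown above.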
ufj5ltwl2#
Hello, the C++ demo for PaddleOCR will be officially released next week. A PR has already been opened (https://github.com/PaddlePaddle/PaddleOCR/pull/283/files); you can take a look at it first, or wait until it is merged next week and follow the documentation tutorial directly (the docs are being written).
fnatzsnv3#
Hello, the commit you pointed me to is for deploying PaddleOCR on mobile. I would like to know whether there is a C++ demo of PaddleOCR for PC-side Linux systems?
pobjuy324#
The PC-side C++ code hasn't been developed yet; for now you can use the Lite version as a starting point and adapt it.
ehxuflar5#
OK, thanks a lot.
oo7oh9g96#
Hello, which day next week will the mobile C++ demo of PaddleOCR be released?
huus2vyu7#
I ran into the same problem. How was it solved in the end?
jecbmhm38#
@mtz1992 Could you post the build options you used when compiling? From the log, this looks like MKL was not enabled. Try setting
-DWITH_MKL=ON
in your build options and recompiling.
nszi6y059#
Hello, I solved my previous problem myself, thanks a lot. But I have another question for you: for the two models used by the PaddleOCR ultra-lightweight solution, db and crnn, the inference code released so far is in Python. Is there a C++ demo?