How can irrelevant log output be suppressed during PaddleOCR C++ inference?

gcxthw6b  posted on 2022-12-31  in: Other
  • System Environment: Windows 10 64-bit, VS2017, PaddleOCR 2.6

The log messages I want to suppress are as follows:

WARNING: Logging before InitGoogleLogging() is written to STDERR
I1122 11:49:21.723809  2616 analysis_predictor.cc:964] MKLDNN is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [mkldnn_placement_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [layer_norm_fuse_pass]
---    Fused 0 subgraphs into layer_norm op.
--- Running IR pass [attention_lstm_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
I1122 11:49:21.894163  2616 fuse_pass_base.cc:57] ---  detected 33 subgraphs
[... dozens of further "Running IR pass [...]", "fused 0 ..." and "detected N subgraphs" lines omitted ...]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I1122 11:49:22.375128  2616 analysis_predictor.cc:1035] ======= optimize end =======
I1122 11:49:22.375128  2616 naive_executor.cc:102] ---  skip [feed], feed -> x
I1122 11:49:22.378122  2616 naive_executor.cc:102] ---  skip [sigmoid_0.tmp_0], fetch -> fetch
In PP-OCRv3, default rec_img_h is 48,if you use other model, you should set the param rec_img_h=32
I1122 11:49:22.398947  2616 analysis_predictor.cc:964] MKLDNN is enabled
[... the same analysis/IR-pass log is then repeated for the recognition model, including the warning below ...]
W1122 11:49:22.653496  2616 op_compat_sensible_pass.cc:207] Attribute(beta) of Op(swish) is not defined in opProto or is in extra set!The compatable check for this attribute is not use. Please remove it from the precondition of pass: conv_swish_mkldnn_fuse_pass
I1122 11:49:22.790033  2616 analysis_predictor.cc:1035] ======= optimize end =======
I1122 11:49:22.790033  2616 naive_executor.cc:102] ---  skip [feed], feed -> x
I1122 11:49:22.793025  2616 naive_executor.cc:102] ---  skip [softmax_5.tmp_0], fetch -> fetch
I1122 11:49:22.823341  2616 device_context.cc:737] oneDNN v2.5.4

So far I have tried uncommenting the config.DisableGlogInfo(); call in the corresponding .cpp files (ocr_rec.cpp, ocr_det.cpp, ocr_cls.cpp), but the log messages above are still printed. How can they be suppressed? @littletomatodonkey
