PaddleNLP [Question]: When calling FastDeploy from PPDiffusers I get the following error. How can I fix it?

8tntrjer · posted 3 months ago in Other

Please state your question:

I ran into an error while running; the error message is as follows:

The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['']
Traceback (most recent call last):
 File "main.py", line 34, in <module>
 image_text2img = fd_pipe.text2img(prompt=prompt, num_inference_steps=50).images[0]
 File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_mega.py", line 91, in text2img
 output = temp_pipeline(
 File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py", line 368, in _encode_prompt
 text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0]
 File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py", line 160, in _encode_prompt
 text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0]
 File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/fastdeploy_utils.py", line 102, in __call__
 return self.model.infer(inputs)
 File "/root/miniconda3/lib/python3.8/site-packages/fastdeploy/runtime.py", line 64, in infer
 return self._runtime.infer(data)
 OSError:

### C++ Traceback (most recent call last):

0 paddle::AnalysisPredictor::ZeroCopyRun()
 1 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, phi::Place const&)
 2 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&) const
 3 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&, paddle::framework::RuntimeContext*) const
 4 paddle::framework::StructKernelImpl<paddle::operators::MultiHeadMatMulV2Kernel<float, phi::GPUContext>, void>::Compute(phi::KernelContext*)
 5 paddle::operators::MultiHeadMatMulV2Kernel<float, phi::GPUContext>::Compute(paddle::framework::ExecutionContext const&) const
 6 void paddle::framework::OperatorWithKernel::GEMM(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, float, float const*, float const*, float, float*) const
 7 void paddle::framework::OperatorWithKernel::GEMM(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, float, float const*, float const*, float, float*) const
 8 void phi::funcs::Blas<phi::GPUContext>::MatMul(phi::DenseTensor const&, bool, phi::DenseTensor const&, bool, float, phi::DenseTensor*, float) const
 9 void phi::funcs::Blas<phi::GPUContext>::GEMM(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, float, float const*, float const*, float, float*) const
10 void phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const*, int)
11 phi::enforce::GetCurrentTraceBackString[abi:cxx11](bool)

### Error Message Summary:

ExternalError: CUBLAS error(7).
 [Hint: Please search for the error code(7) on website (https://docs.nvidia.com/cuda/cublas/index.html#cublasstatus_t) to get Nvidia's official solution and advice about CUBLAS Error.] (at /home/fastdeploy/develop/paddle_build/v0.0.0/Paddle/paddle/fluid/inference/api/resource_manager.cc:282)
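For reference, status code 7 in the hint above corresponds to `CUBLAS_STATUS_INVALID_VALUE` in the `cublasStatus_t` enum from NVIDIA's cuBLAS documentation. A small lookup table (codes copied from that enum) shows the mapping:

```python
# Decode a cuBLAS status code, as reported in Paddle's "CUBLAS error(N)" message.
# Values taken from the cublasStatus_t enum in NVIDIA's cuBLAS API reference.
CUBLAS_STATUS = {
    0: "CUBLAS_STATUS_SUCCESS",
    1: "CUBLAS_STATUS_NOT_INITIALIZED",
    3: "CUBLAS_STATUS_ALLOC_FAILED",
    7: "CUBLAS_STATUS_INVALID_VALUE",
    8: "CUBLAS_STATUS_ARCH_MISMATCH",
    11: "CUBLAS_STATUS_MAPPING_ERROR",
    13: "CUBLAS_STATUS_EXECUTION_FAILED",
    14: "CUBLAS_STATUS_INTERNAL_ERROR",
    15: "CUBLAS_STATUS_NOT_SUPPORTED",
    16: "CUBLAS_STATUS_LICENSE_ERROR",
}

print(CUBLAS_STATUS[7])  # the code from the traceback above
```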
