PaddleNLP [Question]: When calling FastDeploy from PPDiffusers I hit the following error. How can I resolve it?

8tntrjer · posted 7 months ago in: Other

Please describe your question:

During execution I encountered an error; the error message is as follows:

  The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['']
  Traceback (most recent call last):
    File "main.py", line 34, in
      image_text2img = fd_pipe.text2img(prompt=prompt, num_inference_steps=50).images[0]
    File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_mega.py", line 91, in text2img
      output = temp_pipeline(
    File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py", line 368, in _encode_prompt
      text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0]
    File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion.py", line 160, in _encode_prompt
      text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0]
    File "/root/miniconda3/lib/python3.8/site-packages/ppdiffusers-0.14.2-py3.8.egg/ppdiffusers/pipelines/fastdeploy_utils.py", line 102, in __call__
      return self.model.infer(inputs)
    File "/root/miniconda3/lib/python3.8/site-packages/fastdeploy/runtime.py", line 64, in infer
      return self._runtime.infer(data)
  OSError:

  ### C++ Traceback (most recent call last):
  0   paddle::AnalysisPredictor::ZeroCopyRun()
  1   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, phi::Place const&)
  2   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&) const
  3   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&, paddle::framework::RuntimeContext*) const
  4   paddle::framework::StructKernelImpl<paddle::operators::MultiHeadMatMulV2Kernel<float, phi::GPUContext>, void>::Compute(phi::KernelContext*)
  5   paddle::operators::MultiHeadMatMulV2Kernel<float, phi::GPUContext>::Compute(paddle::framework::ExecutionContext const&) const
  6   void paddle::framework::OperatorWithKernel::GEMM(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, float, float const*, float const*, float, float*) const
  7   void paddle::framework::OperatorWithKernel::GEMM(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, float, float const*, float const*, float, float*) const
  8   void phi::funcs::Blas<phi::GPUContext>::MatMul(phi::DenseTensor const&, bool, phi::DenseTensor const&, bool, float, phi::DenseTensor*, float) const
  9   void phi::funcs::Blas<phi::GPUContext>::GEMM(CBLAS_TRANSPOSE, CBLAS_TRANSPOSE, int, int, int, float, float const*, float const*, float, float*) const
  10  void phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const*, int)
  11  phi::enforce::GetCurrentTraceBackString[abi:cxx11](bool)

  ### Error Message Summary:
  ExternalError: CUBLAS error(7).
  [Hint: Please search for the error code(7) on website (https://docs.nvidia.com/cuda/cublas/index.html#cublasstatus_t) to get Nvidia's official solution and advice about CUBLAS Error.] (at /home/fastdeploy/develop/paddle_build/v0.0.0/Paddle/paddle/fluid/inference/api/resource_manager.cc:282)
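For context, the first line of the log is only a warning, not the crash itself: CLIP's text encoder has a fixed context of 77 token ids, so the tokenizer silently drops anything beyond that before encoding. The constant and helper below are an illustrative sketch of that truncation behavior, not PPDiffusers API:

```python
# CLIP's text encoder accepts at most 77 token ids per prompt
# (including the begin/end-of-text tokens). This hypothetical helper
# mimics what the tokenizer's truncation does to an id sequence.
MAX_CLIP_TOKENS = 77

def truncate_ids(ids, max_len=MAX_CLIP_TOKENS):
    """Split a token-id list into the part CLIP keeps and the part it drops."""
    return ids[:max_len], ids[max_len:]

# Example: a 100-token prompt loses its last 23 tokens.
kept, dropped = truncate_ids(list(range(100)))
print(len(kept), len(dropped))  # -> 77 23
```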

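The actual failure is the `ExternalError: CUBLAS error(7)` raised from the attention GEMM inside the text encoder. The hint in the log asks you to look the code up in the `cublasStatus_t` table; per NVIDIA's cuBLAS headers, code 7 is `CUBLAS_STATUS_INVALID_VALUE`. A small lookup sketch (enum values taken from the public `cublas_api.h`):

```python
# Numeric values of the cublasStatus_t enum, as defined in NVIDIA's
# cublas_api.h. Gaps in the numbering exist in the header itself.
CUBLAS_STATUS = {
    0:  "CUBLAS_STATUS_SUCCESS",
    1:  "CUBLAS_STATUS_NOT_INITIALIZED",
    3:  "CUBLAS_STATUS_ALLOC_FAILED",
    7:  "CUBLAS_STATUS_INVALID_VALUE",
    8:  "CUBLAS_STATUS_ARCH_MISMATCH",
    11: "CUBLAS_STATUS_MAPPING_ERROR",
    13: "CUBLAS_STATUS_EXECUTION_FAILED",
    14: "CUBLAS_STATUS_INTERNAL_ERROR",
    15: "CUBLAS_STATUS_NOT_SUPPORTED",
    16: "CUBLAS_STATUS_LICENSE_ERROR",
}

def cublas_status_name(code):
    """Map a numeric cuBLAS status code to its enum name."""
    return CUBLAS_STATUS.get(code, f"unknown cuBLAS status ({code})")

print(cublas_status_name(7))  # -> CUBLAS_STATUS_INVALID_VALUE
```

An invalid-value status from a GEMM call commonly points to a shape or dtype mismatch between the exported inference model and what the runtime feeds it, or to a FastDeploy/Paddle GPU build that does not match the installed CUDA/cuBLAS version; both are worth checking here.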