vllm: undefined symbol _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv when running pytest tests

6qfn3psc · asked 2 months ago
::~# pytest tests/
 ======================= test session starts ==============================
 platform linux -- Python 3.9.12, pytest-7.1.1, pluggy-1.0.0
 rootdir: /root/vllm
 plugins: anyio-3.5.0, forked-1.6.0, asyncio-0.23.5
 asyncio: mode=strict
 collected 0 items / 1 error
============================= ERRORS ==============================
 ___________________ ERROR collecting test session _______________________
 ../anaconda3/lib/python3.9/importlib/__init__.py:127: in import_module
 return _bootstrap._gcd_import(name[level:], package, level)
 <frozen importlib._bootstrap>:1030: in _gcd_import
 ???
 <frozen importlib._bootstrap>:1007: in _find_and_load
 ???
 <frozen importlib._bootstrap>:986: in _find_and_load_unlocked
 ???
 <frozen importlib._bootstrap>:680: in _load_unlocked
 ???
 ../anaconda3/lib/python3.9/site-packages/_pytest/assertion/rewrite.py:168: in exec_module
 exec(co, module.__dict__)
 tests/lora/conftest.py:15: in <module>
 from vllm.model_executor.layers.sampler import Sampler
 vllm/model_executor/__init__.py:2: in <module>
 from vllm.model_executor.model_loader import get_model
 vllm/model_executor/model_loader.py:10: in <module>
 from vllm.model_executor.weight_utils import (get_quant_config,
 vllm/model_executor/weight_utils.py:18: in <module>
 from vllm.model_executor.layers.quantization import (get_quantization_config,
 vllm/model_executor/layers/quantization/__init__.py:4: in <module>
 from vllm.model_executor.layers.quantization.awq import AWQConfig
 vllm/model_executor/layers/quantization/awq.py:6: in <module>
 from vllm._C import ops
 E ImportError: /root/vllm/vllm/_C.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv
 ===================== short test summary info =====================
 ERROR - ImportError: /root/vllm/vllm/_C.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZNSt15__exception_pt...
 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
 ======================= 1 error in 0.51s =============================
I get an identical error when running offline_inference.py. The fix I found suggested downgrading gcc from 11 to 10.4; I tried 10.5 and it did not work. Is there a more universal way to solve this issue?
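That mangled name demangles to std::__exception_ptr::exception_ptr::_M_addref(), so the vllm._C extension is asking the runtime libstdc++ for a symbol it cannot resolve, which usually points at a mismatch between the libstdc++ used at build time and the one the Python process actually loads (Anaconda ships its own copy under anaconda3/lib). Before cycling through more gcc versions, a minimal diagnostic sketch like the one below (an assumption-laden example, not vllm API: it assumes a Linux host where torch imports cleanly) can show which libstdc++.so.6 gets mapped into the process and whether it exports the symbol:

```python
# Diagnostic sketch only, not a fix. Assumes Linux and an importable torch.
import ctypes

# The symbol from the error message (std::__exception_ptr::exception_ptr::_M_addref()).
SYMBOL = "_ZNSt15__exception_ptr13exception_ptr9_M_addrefEv"

import torch  # noqa: F401  -- importing torch pulls libstdc++ into the process

# /proc/self/maps lists every file mapped into this process; keep the libstdc++ paths.
with open("/proc/self/maps") as maps:
    loaded = sorted({line.split()[-1] for line in maps if "libstdc++" in line})
print("libstdc++ mapped into this Python process:", loaded)

# Ask each mapped copy whether it actually exports the missing symbol.
for path in loaded:
    lib = ctypes.CDLL(path)
    print(path, "->", "has symbol" if hasattr(lib, SYMBOL) else "MISSING symbol")
```

If the copy that gets mapped (often .../anaconda3/lib/libstdc++.so.6) reports the symbol as missing while the system copy has it, the problem is the runtime library rather than the compiler version, and updating or replacing that libstdc++ is a more targeted fix than trying different gcc releases.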
tcbh2hod 1#

I ran into the same error when installing from the main branch with pip install -e ., using torch==2.1.2 and CUDA 12.1.
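For comparison it may help to record the toolchain the failing build is using. A short sketch under the same assumptions (Linux, gcc on PATH, torch importable; not part of vllm) that prints the versions worth checking against the compiler that builds vllm._C:

```python
# Print the toolchain facts that typically matter for an undefined libstdc++
# symbol after a from-source build. Assumes gcc is on PATH and torch imports.
import subprocess
import torch

print("torch:", torch.__version__)
print("torch built against CUDA:", torch.version.cuda)
print("torch built with the CXX11 ABI:", torch.compiled_with_cxx11_abi())

# The gcc currently on PATH is what setup.py will normally pick up,
# unless CC/CXX environment variables override it.
gcc = subprocess.run(["gcc", "--version"], capture_output=True, text=True)
print("gcc on PATH:", gcc.stdout.splitlines()[0])
```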
