[Bug]: vllm.engine.async_llm_engine.AsyncEngineDeadError: Background loop has errored already, RuntimeError: Triton Error [CUDA]: device kernel image is invalid

zfciruhq · posted 2 months ago in Other

Current environment

/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.16) or chardet (5.2.0)/charset_normalizer (2.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
WARNING 07-24 11:07:17 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/mnt/xie/libs/vllm/vllm/connections.py:8: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id' 
  from vllm.version import __version__ as VLLM_VERSION
PyTorch version: 2.3.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8 
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect 
CMake version: Could not collect
Libc version: glibc-2.31 

Python version: 3.9.12 (main, Apr  5 2022, 06:56:58)  [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB 
Nvidia driver version: 470.161.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn.so.8.2.4
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.4
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.4
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.4 
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.4
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.4
/usr/local/cuda-11.4/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.4
HIP runtime version: N/A 
MIOpen runtime version: N/A 
Is XNNPACK available: True 
 
CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   46 bits physical, 48 bits virtual 
CPU(s):                          16
On-line CPU(s) list:             0-15
Thread(s) per core:              2
Core(s) per socket:              8
Socket(s):                       1
NUMA node(s):                    1 
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           106
Model name:                      Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping:                        6
CPU MHz:                         2900.000
BogoMIPS:                        5800.00
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       384 KiB
L1i cache:                       256 KiB
L2 cache:                        10 MiB
L3 cache:                        48 MiB
NUMA node0 CPU(s):               0-15
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid arch_capabilities

Versions of relevant libraries:
[pip3] flake8==3.8.2
[pip3] flake8-bugbear==22.9.23
[pip3] flake8-comprehensions==3.10.0
[pip3] flake8-executable==2.1.2
[pip3] flake8-pyi==20.5.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.18.1
[pip3] pytorch-crf==0.7.2
[pip3] sentence-transformers==2.2.2
[pip3] torch==2.3.1+cu118
[pip3] torchaudio==0.12.1+cu116
[pip3] torchnet==0.0.4
[pip3] torchstat==0.0.7
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.18.1+cu118
[pip3] transformers==4.42.4
[pip3] transformers-stream-generator==0.0.4
[pip3] triton==2.3.1
[conda] blas                      1.0                         mkl  
[conda] mkl                       2021.4.0           h06a4308_640  
[conda] mkl-service               2.4.0            py39h7f8727e_0  
[conda] mkl_fft                   1.3.1            py39hd3c417c_0  
[conda] mkl_random                1.2.2            py39h51133e4_0  
[conda] numpy                     1.21.5           py39he7a7128_1  
[conda] numpy-base                1.21.5           py39hf524024_1  
[conda] numpydoc                  1.2                pyhd3eb1b0_0  
[conda] nvidia-nccl-cu11          2.20.5                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.18.1                   pypi_0    pypi
[conda] pytorch-crf               0.7.2                    pypi_0    pypi
[conda] sentence-transformers     2.2.2                    pypi_0    pypi
[conda] torch                     2.3.1+cu118              pypi_0    pypi
[conda] torchaudio                0.12.1+cu116             pypi_0    pypi
[conda] torchnet                  0.0.4                    pypi_0    pypi
[conda] torchstat                 0.0.7                    pypi_0    pypi
[conda] torchsummary              1.5.1                    pypi_0    pypi
[conda] torchvision               0.18.1+cu118             pypi_0    pypi
[conda] transformers              4.42.4                   pypi_0    pypi
[conda] transformers-stream-generator 0.0.4                    pypi_0    pypi
[conda] triton                    2.3.1                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0	CPU Affinity	NUMA Affinity
GPU0	 X 	0-15		N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

First failing request:

INFO 07-24 10:38:51 async_llm_engine.py:173] Added request chat-1e5e4b69dc2b48c0b0c44aea70aea32e.
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
	- Avoid using `tokenizers` before the fork if possible
	- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
	- Avoid using `tokenizers` before the fork if possible
	- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
ERROR 07-24 10:38:53 async_llm_engine.py:56] Engine background task failed
ERROR 07-24 10:38:53 async_llm_engine.py:56] Traceback (most recent call last):
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 46, in _log_task_completion
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return_value = task.result()
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 637, in run_engine_loop
ERROR 07-24 10:38:53 async_llm_engine.py:56]     result = task.result()
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 580, in engine_step
ERROR 07-24 10:38:53 async_llm_engine.py:56]     request_outputs = await self.engine.step_async(virtual_engine)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 253, in step_async
ERROR 07-24 10:38:53 async_llm_engine.py:56]     output = await self.model_executor.execute_model_async(
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/executor/gpu_executor.py", line 159, in execute_model_async
ERROR 07-24 10:38:53 async_llm_engine.py:56]     output = await make_async(self.driver_worker.execute_model
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/concurrent/futures/thread.py", line 58, in run
ERROR 07-24 10:38:53 async_llm_engine.py:56]     result = self.fn(*self.args, **self.kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/worker/worker_base.py", line 272, in execute_model
ERROR 07-24 10:38:53 async_llm_engine.py:56]     output = self.model_runner.execute_model(
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return func(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/worker/model_runner.py", line 1314, in execute_model
ERROR 07-24 10:38:53 async_llm_engine.py:56]     hidden_or_intermediate_states = model_executable(
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return self._call_impl(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return forward_call(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 336, in forward
ERROR 07-24 10:38:53 async_llm_engine.py:56]     hidden_states = self.model(input_ids, positions, kv_caches,
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return self._call_impl(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return forward_call(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 257, in forward
ERROR 07-24 10:38:53 async_llm_engine.py:56]     hidden_states, residual = layer(
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return self._call_impl(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return forward_call(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 209, in forward
ERROR 07-24 10:38:53 async_llm_engine.py:56]     hidden_states = self.self_attn(
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return self._call_impl(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return forward_call(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 156, in forward
ERROR 07-24 10:38:53 async_llm_engine.py:56]     attn_output = self.attn(q, k, v, kv_cache, attn_metadata)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return self._call_impl(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return forward_call(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/layer.py", line 97, in forward
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return self.impl.forward(query,
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/backends/xformers.py", line 598, in forward
ERROR 07-24 10:38:53 async_llm_engine.py:56]     out = PagedAttention.forward_prefix(
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/ops/paged_attn.py", line 205, in forward_prefix
ERROR 07-24 10:38:53 async_llm_engine.py:56]     context_attention_fwd(
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return func(*args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/ops/prefix_prefill.py", line 765, in context_attention_fwd
ERROR 07-24 10:38:53 async_llm_engine.py:56]     _fwd_kernel[grid](
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in <lambda>
ERROR 07-24 10:38:53 async_llm_engine.py:56]     return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/runtime/jit.py", line 425, in run
ERROR 07-24 10:38:53 async_llm_engine.py:56]     kernel.run(grid_0, grid_1, grid_2, kernel.num_warps, kernel.num_ctas,  # number of warps/ctas per instance
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/compiler/compiler.py", line 255, in __getattribute__
ERROR 07-24 10:38:53 async_llm_engine.py:56]     self._init_handles()
ERROR 07-24 10:38:53 async_llm_engine.py:56]   File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/compiler/compiler.py", line 250, in _init_handles
ERROR 07-24 10:38:53 async_llm_engine.py:56]     self.module, self.function, self.n_regs, self.n_spills = driver.utils.load_binary(
ERROR 07-24 10:38:53 async_llm_engine.py:56] RuntimeError: Triton Error [CUDA]: device kernel image is invalid
Exception in callback functools.partial(<function _log_task_completion at 0x7ff67a845310>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7ff66058eac0>>)
handle: <Handle functools.partial(<function _log_task_completion at 0x7ff67a845310>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7ff66058eac0>>)>
Traceback (most recent call last):
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 46, in _log_task_completion
    return_value = task.result()
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 637, in run_engine_loop
    result = task.result()
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 580, in engine_step
    request_outputs = await self.engine.step_async(virtual_engine)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 253, in step_async
    output = await self.model_executor.execute_model_async(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/executor/gpu_executor.py", line 159, in execute_model_async
    output = await make_async(self.driver_worker.execute_model
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/worker/worker_base.py", line 272, in execute_model
    output = self.model_runner.execute_model(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/worker/model_runner.py", line 1314, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 336, in forward
    hidden_states = self.model(input_ids, positions, kv_caches,
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 257, in forward
    hidden_states, residual = layer(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 209, in forward
    hidden_states = self.self_attn(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 156, in forward
    attn_output = self.attn(q, k, v, kv_cache, attn_metadata)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/layer.py", line 97, in forward
    return self.impl.forward(query,
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/backends/xformers.py", line 598, in forward
    out = PagedAttention.forward_prefix(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/ops/paged_attn.py", line 205, in forward_prefix
    context_attention_fwd(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/ops/prefix_prefill.py", line 765, in context_attention_fwd
    _fwd_kernel[grid](
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/runtime/jit.py", line 425, in run
    kernel.run(grid_0, grid_1, grid_2, kernel.num_warps, kernel.num_ctas,  # number of warps/ctas per instance
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/compiler/compiler.py", line 255, in __getattribute__
    self._init_handles()
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/compiler/compiler.py", line 250, in _init_handles
    self.module, self.function, self.n_regs, self.n_spills = driver.utils.load_binary(
RuntimeError: Triton Error [CUDA]: device kernel image is invalid

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 58, in _log_task_completion
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
INFO 07-24 10:38:53 async_llm_engine.py:180] Aborted request chat-1e5e4b69dc2b48c0b0c44aea70aea32e.
INFO:     10.193.16.2:50112 - "POST /v1/chat/completions HTTP/1.0" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 436, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 74, in app
    response = await func(request)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/entrypoints/openai/api_server.py", line 129, in create_chat_completion
    generator = await openai_serving_chat.create_chat_completion(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/entrypoints/openai/serving_chat.py", line 195, in create_chat_completion
    return await self.chat_completion_full_generator(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/entrypoints/openai/serving_chat.py", line 404, in chat_completion_full_generator
    async for res in result_generator:
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 772, in generate
    async for output in self._process_request(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 888, in _process_request
    raise e
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 884, in _process_request
    async for request_output in stream:
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 93, in __anext__
    raise result
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 46, in _log_task_completion
    return_value = task.result()
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 637, in run_engine_loop
    result = task.result()
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 580, in engine_step
    request_outputs = await self.engine.step_async(virtual_engine)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 253, in step_async
    output = await self.model_executor.execute_model_async(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/executor/gpu_executor.py", line 159, in execute_model_async
    output = await make_async(self.driver_worker.execute_model
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/worker/worker_base.py", line 272, in execute_model
    output = self.model_runner.execute_model(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/worker/model_runner.py", line 1314, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 336, in forward
    hidden_states = self.model(input_ids, positions, kv_caches,
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 257, in forward
    hidden_states, residual = layer(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 209, in forward
    hidden_states = self.self_attn(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 156, in forward
    attn_output = self.attn(q, k, v, kv_cache, attn_metadata)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/layer.py", line 97, in forward
    return self.impl.forward(query,
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/backends/xformers.py", line 598, in forward
    out = PagedAttention.forward_prefix(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/ops/paged_attn.py", line 205, in forward_prefix
    context_attention_fwd(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/ops/prefix_prefill.py", line 765, in context_attention_fwd
    _fwd_kernel[grid](
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/runtime/jit.py", line 425, in run
    kernel.run(grid_0, grid_1, grid_2, kernel.num_warps, kernel.num_ctas,  # number of warps/ctas per instance
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/compiler/compiler.py", line 255, in __getattribute__
    self._init_handles()
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/compiler/compiler.py", line 250, in _init_handles
    self.module, self.function, self.n_regs, self.n_spills = driver.utils.load_binary(
RuntimeError: Triton Error [CUDA]: device kernel image is invalid

Second failing request:

INFO:     10.193.16.2:41296 - "POST /v1/chat/completions HTTP/1.0" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 436, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 74, in app
    response = await func(request)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/entrypoints/openai/api_server.py", line 129, in create_chat_completion
    generator = await openai_serving_chat.create_chat_completion(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/entrypoints/openai/serving_chat.py", line 195, in create_chat_completion
    return await self.chat_completion_full_generator(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/entrypoints/openai/serving_chat.py", line 404, in chat_completion_full_generator
    async for res in result_generator:
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 772, in generate
    async for output in self._process_request(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 888, in _process_request
    raise e
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 884, in _process_request
    async for request_output in stream:
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 93, in __anext__
    raise result
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 46, in _log_task_completion
    return_value = task.result()
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 637, in run_engine_loop
    result = task.result()
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 580, in engine_step
    request_outputs = await self.engine.step_async(virtual_engine)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 253, in step_async
    output = await self.model_executor.execute_model_async(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/executor/gpu_executor.py", line 159, in execute_model_async
    output = await make_async(self.driver_worker.execute_model
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/worker/worker_base.py", line 272, in execute_model
    output = self.model_runner.execute_model(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/worker/model_runner.py", line 1314, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 336, in forward
    hidden_states = self.model(input_ids, positions, kv_caches,
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 257, in forward
    hidden_states, residual = layer(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 209, in forward
    hidden_states = self.self_attn(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/model_executor/models/qwen2.py", line 156, in forward
    attn_output = self.attn(q, k, v, kv_cache, attn_metadata)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/layer.py", line 97, in forward
    return self.impl.forward(query,
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/backends/xformers.py", line 598, in forward
    out = PagedAttention.forward_prefix(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/ops/paged_attn.py", line 205, in forward_prefix
    context_attention_fwd(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/attention/ops/prefix_prefill.py", line 765, in context_attention_fwd
    _fwd_kernel[grid](
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/runtime/jit.py", line 167, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/runtime/jit.py", line 425, in run
    kernel.run(grid_0, grid_1, grid_2, kernel.num_warps, kernel.num_ctas,  # number of warps/ctas per instance
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/compiler/compiler.py", line 255, in __getattribute__
    self._init_handles()
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/triton/compiler/compiler.py", line 250, in _init_handles
    self.module, self.function, self.n_regs, self.n_spills = driver.utils.load_binary(
RuntimeError: Triton Error [CUDA]: device kernel image is invalid

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 436, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/starlette/routing.py", line 74, in app
    response = await func(request)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/entrypoints/openai/api_server.py", line 129, in create_chat_completion
    generator = await openai_serving_chat.create_chat_completion(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/entrypoints/openai/serving_chat.py", line 195, in create_chat_completion
    return await self.chat_completion_full_generator(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/entrypoints/openai/serving_chat.py", line 404, in chat_completion_full_generator
    async for res in result_generator:
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 772, in generate
    async for output in self._process_request(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 873, in _process_request
    stream = await self.add_request(
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 676, in add_request
    self.start_background_loop()
  File "/app/apps/anaconda3/envs/vllm_053p1/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 516, in start_background_loop
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Background loop has errored already.
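
One way to narrow this down: run the standard Triton vector-add example outside of vLLM (a minimal sketch below, assuming the same conda env and triton==2.3.1). If it fails with the same RuntimeError, the problem is in the Triton/CUDA driver stack rather than in vLLM itself.

# Standalone Triton smoke test: JIT-compile and launch a trivial kernel.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

n = 4096
x = torch.rand(n, device="cuda")
y = torch.rand(n, device="cuda")
out = torch.empty_like(x)
add_kernel[(triton.cdiv(n, 1024),)](x, y, out, n, BLOCK_SIZE=1024)
torch.cuda.synchronize()  # forces the launch to complete, surfacing CUDA errors
assert torch.allclose(out, x + y)
print("Triton kernel compiled and ran correctly")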

nwo49xxi #1

I believe this provides a clue: WARNING 07-24 11:07:17 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
Consider building from source before retrying.
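
After rebuilding, a quick sanity check can confirm the compiled extension is actually being picked up (a minimal sketch, run inside the same conda env; it just repeats the import that _custom_ops.py performs):

# Reproduce the _custom_ops.py check and print the CUDA toolchain torch
# was built with, for comparison against the installed driver.
import torch
print("torch:", torch.__version__, "| built with CUDA:", torch.version.cuda)
print("GPU:", torch.cuda.get_device_name(0),
      "| compute capability:", torch.cuda.get_device_capability(0))
try:
    import vllm._C  # the compiled CUDA kernels the warning above refers to
    print("vllm._C imported OK")
except ImportError as exc:
    print("vllm._C import failed:", exc)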

j9per5c4 #2

> I believe this provides a clue: WARNING 07-24 11:07:17 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'"). Consider building from source before retrying.

Hi, I tried building vllm from source, but I still hit the same error.

flvlnr44 #3

Perhaps upgrading the GPU driver would resolve this issue.
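
That would be consistent with the environment above: driver 470.161.03 belongs to the CUDA 11.4 driver branch, while torch and triton here target CUDA 11.8. A sketch to check what the installed driver supports (assumes libcuda.so.1 is present, which it is on any machine with the NVIDIA driver installed):

# Query the driver's maximum supported CUDA version via the CUDA driver API.
# If this prints e.g. 11.4 while torch reports CUDA 11.8, cubins produced by
# a newer toolchain may fail to load with "device kernel image is invalid".
import ctypes
import torch

libcuda = ctypes.CDLL("libcuda.so.1")  # the NVIDIA driver's CUDA library
version = ctypes.c_int()
libcuda.cuDriverGetVersion(ctypes.byref(version))
major, minor = version.value // 1000, (version.value % 1000) // 10
print(f"driver supports CUDA <= {major}.{minor}")
print("torch built with CUDA:", torch.version.cuda)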
