vllm [Bug]: RuntimeError: unknown layout

j0pj023g · asked 6 months ago in Other

Current environment:

PyTorch version: 2.2.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.27

Python version: 3.10.12 (main, Jul  5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
GPU 2: NVIDIA GeForce RTX 4090
GPU 3: NVIDIA GeForce RTX 4090

Nvidia driver version: 535.146.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              32
On-line CPU(s) list: 0-31
Thread(s) per core:  2
Core(s) per socket:  16
Socket(s):           1
NUMA node(s):        4
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC 7302 16-Core Processor
Stepping:            0
CPU MHz:             1486.662
CPU max MHz:         3000.0000
CPU min MHz:         1500.0000
BogoMIPS:            5988.92
Virtualization:      AMD-V
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-3,16-19
NUMA node1 CPU(s):   4-7,20-23
NUMA node2 CPU(s):   8-11,24-27
NUMA node3 CPU(s):   12-15,28-31
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.2
[pip3] triton==2.2.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] torch                     2.2.2                    pypi_0    pypi
[conda] triton                    2.2.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.0.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     SYS     SYS     12-15,28-31     3               N/A
GPU1    SYS      X      SYS     SYS     8-11,24-27      2               N/A
GPU2    SYS     SYS      X      SYS     4-7,20-23       1               N/A
GPU3    SYS     SYS     SYS      X      0-3,16-19       0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Problem description:
(vllm) root@4090:/DATA4T/text-generation-webui/vllm# python -m vllm.entrypoints.openai.api_server --model /DATA4T/text-generation-webui/models/c4ai-command-r-plus-GPTQ --tensor-parallel-size 4 --enforce-eager

Running the above command produced the following warning:

WARNING 04-15 07:27:05 config.py:225] gptq quantization is not fully optimized yet. The speed can be slower than non-quantized models.

and then the following errors:

INFO 04-15 07:27:16 selector.py:33] Using XFormers backend.
(RayWorkerVllm pid=1969) [rank1]:[W ProcessGroupGloo.cpp:721] Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (function operator())
[rank0]:[W ProcessGroupGloo.cpp:721] Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (function operator())
INFO 04-15 07:27:17 pynccl_utils.py:45] vLLM is using nccl==2.18.1

INFO 04-15 07:27:19 custom_all_reduce.py:152] NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped. [repeated 2x across cluster]
(RayWorkerVllm pid=1969) WARNING 04-15 07:27:19 custom_all_reduce.py:58] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly. [repeated 2x across cluster]
(RayWorkerVllm pid=1969) INFO 04-15 07:27:23 model_runner.py:169] Loading model weights took 14.3474 GB
(RayWorkerVllm pid=1969) INFO 04-15 07:27:25 model_runner.py:169] Loading model weights took 14.3474 GB
(RayWorkerVllm pid=2303) INFO 04-15 07:27:16 selector.py:77] Cannot use FlashAttention backend because the flash_attn package is not found. Please install it for better performance. [repeated 2x across cluster]
(RayWorkerVllm pid=2303) INFO 04-15 07:27:16 selector.py:33] Using XFormers backend. [repeated 2x across cluster]
(RayWorkerVllm pid=2303) INFO 04-15 07:27:17 pynccl_utils.py:45] vLLM is using nccl==2.18.1 [repeated 2x across cluster]
(RayWorkerVllm pid=2303) INFO 04-15 07:27:19 custom_all_reduce.py:152] NVLink detection failed with message "Not Supported". This is normal if your machine has no NVLink equipped. [repeated 2x across cluster]
(RayWorkerVllm pid=2303) WARNING 04-15 07:27:19 custom_all_reduce.py:58] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly. [repeated 2x across cluster]
(RayWorkerVllm pid=1969) ERROR 04-15 07:27:27 ray_utils.py:50] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution. [repeated 2x across cluster]
(RayWorkerVllm pid=1969) ERROR 04-15 07:27:27 ray_utils.py:50] File "/DATA4T/text-generation-webui/vllm/vllm/engine/ray_utils.py", line 43, in execute_method
(RayWorkerVllm pid=1969) ERROR 04-15 07:27:27 ray_utils.py:50] return executor(*args, **kwargs)
(RayWorkerVllm pid=1969) ERROR 04-15 07:27:27 ray_utils.py:50] File "/root/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(RayWorkerVllm pid=1969) ERROR 04-15 07:27:27 ray_utils.py:50] return func(*args, **kwargs)
(RayWorkerVllm pid=1969) ERROR 04-15 07:27:27 ray_utils.py:50] File "/DATA4T/text-generation-webui/vllm/vllm/worker/worker.py", line 134, in determine_num_available_blocks
(RayWorkerVllm pid=1969) ERROR 04-15 07:27:27 ray_utils.py:50] self.model_runner.profile_run()
(RayWorkerVllm pid=1969) ERROR 04-15 07:27:27 ray_utils.py:50] File "/DATA4T/text-generation-webui/vllm/vllm/worker/model_runner.py", line 918, in profile_run
(RayWorkerVllm pid=1969) ERROR 04-15 07:27:27 ray_utils.py:50] self.execute_model(seqs, kv_caches)
(RayWorkerVllm pid=1969) ERROR 04-15
The remainder of the worker traceback (truncated above) shows where the failure occurs: in the forward pass of the CommandR model in vLLM's model_executor module, the call to self.self_attn(hidden_states) reaches qkv_proj, which passes the hidden states to self.linear_method.apply_weights(); inside that call the custom gptq_gemm op raises RuntimeError: unknown layout, bringing the whole engine down. The driver-side traceback:
self.engine = self._init_engine(*args, **kwargs)
File "/DATA4T/text-generation-webui/vllm/vllm/engine/async_llm_engine.py", line 421, in _init_engine
return engine_class(*args, **kwargs)
File "/DATA4T/text-generation-webui/vllm/vllm/engine/llm_engine.py", line 133, in **init
self._initialize_kv_caches()
File "/DATA4T/text-generation-webui/vllm/vllm/engine/llm_engine.py", line 193, in _initialize_kv_caches
self.model_executor.determine_num_available_blocks())
File "/DATA4T/text-generation-webui/vllm/vllm/executor/ray_gpu_executor.py", line 215, in determine_num_available_blocks
num_blocks = self._run_workers("determine_num_available_blocks", )
File "/DATA4T/text-generation-webui/vllm/vllm/executor/ray_gpu_executor.py", line 313, in _run_workers
driver_worker_output = getattr(self.driver_worker,
File "/root/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/DATA4T/text-generation-webui/vllm/vllm/worker/worker.py", line 134, in determine_num_available_blocks
self.model_runner.profile_run()
File "/root/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/DATA4T/text-generation-webui/vllm/vllm/worker/model_runner.py", line 918, in profile_run
self.execute_model(seqs, kv_caches)
File "/root/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/DATA4T/text-generation-webui/vllm/vllm/worker/model_runner.py", line 839, in execute_model
hidden_states = model_executable(**execute_model_kwargs)
File "/root/anaconda3/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/anaconda3/envs/vllm/lib/python3.10/site-packages/torch

tyky79it1#

I'm getting the same error.

falq053o2#

I only get the first line of the error above: "Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution." If I set the environment variable NCCL_SOCKET_IFNAME=eth0 before launching vLLM, there is no error.
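
A minimal sketch of that workaround. eth0 is a placeholder for the host's actual interface name (check with ip link); setting GLOO_SOCKET_IFNAME as well addresses the Gloo hostname warning in the log above:

export NCCL_SOCKET_IFNAME=eth0
export GLOO_SOCKET_IFNAME=eth0
python -m vllm.entrypoints.openai.api_server --model /DATA4T/text-generation-webui/models/c4ai-command-r-plus-GPTQ --tensor-parallel-size 4 --enforce-eager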

3mpgtkmj3#

Same error for me on amd64. Maybe some CUDA toolkit component needs to be installed? And amd64 may not be supported.
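
One quick check along those lines: verify that vLLM's compiled CUDA extension (the one that provides the gptq_gemm op from the traceback) imports cleanly. A sketch, assuming the ops live in vllm._C as in vLLM 0.4.x:

python -c "from vllm._C import ops; print(hasattr(ops, 'gptq_gemm'))"

If the import itself fails, a build/toolkit mismatch is worth pursuing; if it prints True, the kernel loads and the "unknown layout" error is raised inside gptq_gemm at runtime.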
