vllm [Bug]: Ray does not detect all the nodes in a multi-machine cluster

wxclj1h5 · posted 3 months ago in Other

Current environment

wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py && python collect_env.py
--2024-05-07 16:14:33--  https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 24877 (24K) [text/plain]
Saving to: ‘collect_env.py’

collect_env.py                                                                     100%[================================================================================================================================================================================================================>]  24.29K  --.-KB/s    in 0.003s  

2024-05-07 16:14:33 (9.38 MB/s) - ‘collect_env.py’ saved [24877/24877]

Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.0
Libc version: glibc-2.35

Python version: 3.10.0 (default, Mar  3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-102-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe

Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      48 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             32
On-line CPU(s) list:                0-31
Vendor ID:                          AuthenticAMD
Model name:                         AMD EPYC 73F3 16-Core Processor
CPU family:                         25
Model:                              1
Thread(s) per core:                 2
Core(s) per socket:                 16
Socket(s):                          1
Stepping:                           1
Frequency boost:                    enabled
CPU max MHz:                        4036.6211
CPU min MHz:                        1500.0000
BogoMIPS:                           6986.18
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization:                     AMD-V
L1d cache:                          512 KiB (16 instances)
L1i cache:                          512 KiB (16 instances)
L2 cache:                           8 MiB (16 instances)
L3 cache:                           256 MiB (8 instances)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] triton==2.3.0
[pip3] vllm-nccl-cu12==2.18.1.0.4.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] torch                     2.3.0                    pypi_0    pypi
[conda] triton                    2.3.0                    pypi_0    pypi
[conda] vllm-nccl-cu12            2.18.1.0.4.0             pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0	GPU1	GPU2	GPU3	CPU Affinity	NUMA Affinity	GPU NUMA ID
GPU0	 X 	SYS	SYS	SYS	0-31	0		N/A
GPU1	SYS	 X 	SYS	SYS	0-31	0		N/A
GPU2	SYS	SYS	 X 	SYS	0-31	0		N/A
GPU3	SYS	SYS	SYS	 X 	0-31	0		N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

I am running vllm and ray on two machines, each with 4 A100 80GB GPUs. I ran the ray start head and ray start address commands on the head node and the worker node, respectively. When I run ray status, I see that I have 8 GPUs. In the next step, when I start vllm with tp 8, I get the following error:

2024-05-07 16:12:51,393	INFO worker.py:1564 -- Connecting to existing Ray cluster at address: 141.195.90.35:6379...
2024-05-07 16:12:51,398	INFO worker.py:1749 -- Connected to Ray cluster.
Traceback (most recent call last):
  File "miniconda3/envs/vllm_cohere/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/miniconda3/envs/vllm_cohere/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "miniconda3/envs/vllm_cohere/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 168, in <module>
    engine = AsyncLLMEngine.from_engine_args(
  File "envs/vllm_cohere/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 357, in from_engine_args
    initialize_ray_cluster(engine_config.parallel_config)
  File "/miniconda3/envs/vllm_cohere/lib/python3.10/site-packages/vllm/executor/ray_utils.py", line 106, in initialize_ray_cluster
    raise ValueError(
ValueError: The number of required GPUs exceeds the total number of available GPUs in the cluster.

When I check ray status again, I only see 4 GPUs. I am not sure why Ray can no longer see all 8 of my GPUs after I try to launch vllm, since they were clearly visible before. I launch with the following command:
/python -m vllm.entrypoints.openai.api_server --model ibm-granite/granite-34b-code-instruct --worker-use-ray --tensor-parallel-size 8 --trust-remote-code --port 40023 --host 0.0.0.0 --gpu-memory-utilization .65 --tokenizer ibm-granite/granite-34b-code-instruct --worker-use-ray
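
For reference, a typical two-node Ray setup along the lines described above is sketched below; the head-node IP, port, and comments are illustrative placeholders rather than the poster's actual values.

# On the head node (machine 1, 4 x A100): start the Ray head process.
ray start --head --port=6379

# On the worker node (machine 2, 4 x A100): join the cluster by pointing at the head node.
ray start --address='<HEAD_NODE_IP>:6379'

# On either node: confirm that both nodes are registered and that the cluster
# reports 8 GPUs in total before launching vLLM with --tensor-parallel-size 8.
ray status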

zf2sa74q 1#

This problem may be caused by a raylet node being overloaded. You can try the following solutions (a quick way to re-check what the cluster actually reports is sketched after the list):

  1. Increase the resources available to the machine or the raylet, such as memory and CPUs.
  2. Optimize your program to reduce resource consumption.
  3. Adjust the Ray configuration, for example by increasing concurrency or reducing the number of tasks assigned.
  4. If the problem persists, consider migrating some tasks to other nodes to relieve the load on the current node.
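
As a first diagnostic step, it can help to confirm what Ray itself reports once the failure occurs. A minimal check, assuming the cluster was started on the default port and run from the head node, might look like:

# List the registered nodes and the GPU total Ray currently sees
# (it should report 8 GPUs for the 2 x 4 A100 setup described above).
ray status

# The same information via the Python API, attached to the running cluster:
python -c "import ray; ray.init(address='auto'); print(ray.cluster_resources())"

If the worker node has dropped out of ray status after vLLM starts, the raylet logs on that node (by default under /tmp/ray/session_latest/logs) usually indicate why it disconnected.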
