vllm [Bug]: KeyError: 'model.layers.45.block_sparse_moe.gate.g_idx'

mfuanj7w posted 2 months ago in Other

Current environment

Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.35

Python version: 3.10.14 (main, Mar 21 2024, 16:24:04) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-25-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 2080 Ti
GPU 4: NVIDIA GeForce RTX 2080 Ti
GPU 5: NVIDIA GeForce RTX 2080 Ti
GPU 6: NVIDIA GeForce RTX 2080 Ti
GPU 7: NVIDIA GeForce RTX 2080 Ti

Nvidia driver version: 535.161.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      46 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             48
On-line CPU(s) list:                0-47
Vendor ID:                          GenuineIntel
Model name:                         Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz
CPU family:                         6
Model:                              85
Thread(s) per core:                 2
Core(s) per socket:                 12
Socket(s):                          2
Stepping:                           4
CPU max MHz:                        3700.0000
CPU min MHz:                        1200.0000
BogoMIPS:                           6000.00
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d arch_capabilities
L1d cache:                          768 KiB (24 instances)
L1i cache:                          768 KiB (24 instances)
L2 cache:                           24 MiB (24 instances)
L3 cache:                           49.5 MiB (2 instances)
NUMA node(s):                       2
NUMA node0 CPU(s):                  0-11,24-35
NUMA node1 CPU(s):                  12-23,36-47
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit:        KVM: Mitigation: VMX unsupported
Vulnerability L1tf:                 Mitigation; PTE Inversion
Vulnerability Mds:                  Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown:             Mitigation; PTI
Vulnerability Mmio stale data:      Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed:             Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Mitigation; Clear CPU buffers; SMT vulnerable

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] torch==2.2.1
[pip3] torchaudio==2.2.2
[pip3] torchvision==0.17.2
[pip3] triton==2.2.0
[pip3] vllm-nccl-cu12==2.18.1.0.3.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.19.3                   pypi_0    pypi
[conda] torch                     2.2.1                    pypi_0    pypi
[conda] torchaudio                2.2.2                    pypi_0    pypi
[conda] torchvision               0.17.2                   pypi_0    pypi
[conda] triton                    2.2.0                    pypi_0    pypi
[conda] vllm-nccl-cu12            2.18.1.0.3.0             pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV2     NODE    NODE    SYS     SYS     SYS     SYS     0-11,24-35      0               N/A
GPU1    NV2      X      NODE    NODE    SYS     SYS     SYS     SYS     0-11,24-35      0               N/A
GPU2    NODE    NODE     X      NV2     SYS     SYS     SYS     SYS     0-11,24-35      0               N/A
GPU3    NODE    NODE    NV2      X      SYS     SYS     SYS     SYS     0-11,24-35      0               N/A
GPU4    SYS     SYS     SYS     SYS      X      NV2     NODE    NODE    12-23,36-47     1               N/A
GPU5    SYS     SYS     SYS     SYS     NV2      X      NODE    NODE    12-23,36-47     1               N/A
GPU6    SYS     SYS     SYS     SYS     NODE    NODE     X      NV2     12-23,36-47     1               N/A
GPU7    SYS     SYS     SYS     SYS     NODE    NODE    NV2      X      12-23,36-47     1               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python -m vllm.entrypoints.openai.api_server --served-model-name=8x22b --model=/home/jarrelscy/Mixtral-8x22B-Instruct-v0.1-GPTQ-4bit --gpu-memory-utilization=0.95 --max-model-len=60000 --max-num-seqs=2 --tensor-parallel-size=8 --trust-remote-code --host=0.0.0.0 --port=8001 --max-log-len=1000
KeyError: 'model.layers.45.block_sparse_moe.gate.g_idx'
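One way to narrow this down is to check whether the GPTQ checkpoint actually contains a g_idx tensor for the MoE gate layers, since the error suggests the loader expects one for every quantized linear. A minimal sketch, assuming the checkpoint at the path from the command above is stored as sharded safetensors:

import glob
from safetensors import safe_open

# Path taken from the launch command above.
ckpt_dir = "/home/jarrelscy/Mixtral-8x22B-Instruct-v0.1-GPTQ-4bit"

gate_keys = []
for shard in sorted(glob.glob(f"{ckpt_dir}/*.safetensors")):
    with safe_open(shard, framework="pt") as f:
        # Collect every tensor name belonging to an MoE gate module.
        gate_keys += [k for k in f.keys() if "block_sparse_moe.gate" in k]

print("\n".join(sorted(gate_keys)))

If only gate.weight (or gate.qweight/qzeros/scales) shows up and gate.g_idx is absent, the KeyError would come from the loader asking for a tensor the quantizer never emitted; GPTQ exports often leave the MoE router/gate unquantized.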

jw5wzhpr 1#

Hi, I ran into the same kind of "KeyError", but when I tried to run Mixtral-8x22B the message was "KeyError: 'layers.0.attention.wk.weight'".

bcs8qyzn 2#

Hi, I hit the same kind of "KeyError" as well, with the message "KeyError: 'layers.0.attention.wk.weight'", when trying to run Mixtral-8x22B. As a workaround I can run inference on mixtral-8x22b-instruct-awq with vLLM 0.3.3 or 0.4, but it is slow.
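For reference, a minimal offline sketch of that AWQ workaround; the checkpoint path is a placeholder, and the settings mirror the original launch command:

from vllm import LLM, SamplingParams

# Placeholder path to a local AWQ build of Mixtral-8x22B-Instruct.
llm = LLM(
    model="/path/to/mixtral-8x22b-instruct-awq",
    quantization="awq",
    tensor_parallel_size=8,
    max_model_len=60000,
    gpu_memory_utilization=0.95,
    trust_remote_code=True,
)

outputs = llm.generate(["Hello, world"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)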
