Current environment
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 6000 Ada Generation
Nvidia driver version: 545.23.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9354 32-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3799.0720
CPU min MHz: 1500.0000
BogoMIPS: 6500.47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 64 MiB (64 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] transformers==4.42.4
[pip3] triton==2.3.1
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0  NIC0  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0     X    SYS   32-63,96-127  1              N/A
NIC0    SYS    X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_bond_0
🐛 Describe the bug
I ran into the following error:
[rank0]:     self.layers = nn.ModuleList([
[rank0]:   File "/workspace/heliumos-env/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_moe.py", line 328, in <listcomp>
[rank0]:     Qwen2MoeDecoderLayer(config,
[rank0]:   File "/workspace/heliumos-env/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_moe.py", line 268, in __init__
[rank0]:     self.mlp = Qwen2MoeSparseMoeBlock(config=config,
[rank0]:   File "/workspace/heliumos-env/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_moe.py", line 103, in __init__
[rank0]:     self.experts = FusedMoE(num_experts=config.num_experts,
[rank0]:   File "/workspace/heliumos-env/lib/python3.10/site-packages/vllm/model_executor/layers/fused_moe/layer.py", line 145, in __init__
[rank0]:     assert self.quant_method is not None
[rank0]: AssertionError
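
For context, a minimal sketch of the kind of invocation that hits this path (the original report does not include the launch command; the checkpoint name below is an assumption):

```python
# Hypothetical repro: loading a GPTQ-quantized Qwen2 MoE checkpoint with
# vLLM 0.5.2. The model name is an assumption, not taken from the report.
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2-57B-A14B-Instruct-GPTQ-Int4",  # assumed GPTQ MoE checkpoint
    quantization="gptq",
    trust_remote_code=True,
)
# Engine construction fails while building the decoder layers, as in the trace above.
```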
8 answers
fae0ux8s1#
GPTQ is not yet supported for Qwen MoE. We are working on it.
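
For what it's worth, the assert fires because vLLM asks the active quantization config for a per-layer quant method while constructing each layer, and the GPTQ config at this version only knows how to handle plain linear layers. A paraphrased sketch of the pattern (not the actual vLLM source; all names below are illustrative stand-ins):

```python
# Paraphrased sketch of the failure mode; classes and return values are
# illustrative stand-ins, not vLLM's real API.
class LinearLayer: ...
class FusedMoELayer: ...

class GPTQConfigSketch:
    def get_quant_method(self, layer):
        if isinstance(layer, LinearLayer):
            return "gptq-linear-kernel"  # stand-in for a real method object
        return None  # no GPTQ kernel for fused MoE yet

quant_method = GPTQConfigSketch().get_quant_method(FusedMoELayer())
assert quant_method is not None  # trips, mirroring the AssertionError above
```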
qvk1mo1f2#
> GPTQ is not yet supported for Qwen MoE. We are working on it.

Then which quantized Qwen MoE models does vLLM support in 0.5.2? Could you recommend a quantized MoE variant of Qwen2?
2eafrhcq3#
We currently support fp16 and fp8 for Qwen MoE.
fp8 requires Hopper GPUs.
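
For the fp8 path, a minimal sketch (the model name is an assumption, and per the comment above this needs a Hopper-class GPU; the same option is also available as the --quantization flag on the vLLM server CLI):

```python
# Sketch: serving a Qwen2 MoE model with vLLM's on-the-fly fp8 quantization.
# Assumes a Hopper GPU (e.g. H100); the model name is an assumption.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2-57B-A14B-Instruct",
    quantization="fp8",  # dynamic fp8 quantization of the fp16 checkpoint
)
outputs = llm.generate(["Hello"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```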
xsuvu9jc4#
This PR may fix the issue. I built a WHL from that branch for self-testing:
fjnneemd5#
@akai-shuuichi Does this support DeepSeek V2?
bqujaahr6#
> @akai-shuuichi Does this support DeepSeek V2?

I only tested Qwen; I don't have enough GPUs to run DeepSeek-V2.
wwodge7n7#
> @akai-shuuichi Does this support DeepSeek V2?
> I only tested Qwen; I don't have enough GPUs to run DeepSeek-V2.
fivyi3re8#
> @akai-shuuichi Does this support DeepSeek V2?
> I only tested Qwen; I don't have enough GPUs to run DeepSeek-V2.

Thanks. You can also try this model
Sorry, the model has an error:

Traceback (most recent call last):
  File "/vllm-workspace/v1Server.py", line 50, in <module>
    generation_config, tokenizer, stop_word, engine = load_vllm()
  File "/vllm-workspace/v1Server.py", line 23, in load_vllm
    generation_config = GenerationConfig.from_pretrained(model_dir, trust_remote_code=True)
  File "/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py", line 915, in from_pretrained
    resolved_config_file = cached_file(
  File "/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py", line 373, in cached_file
    raise EnvironmentError(
OSError: /vllm-workspace/DeepSeek-V2-Lite-gptq-4bit does not appear to have a file named generation_config.json.
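
That OSError just means the checkpoint directory ships no generation_config.json. A minimal workaround sketch, mirroring the call from the traceback (the fallback values are assumptions, not from the thread); alternatively, drop a hand-written generation_config.json into the model directory:

```python
# Fall back to a default GenerationConfig when the checkpoint has none.
from transformers import GenerationConfig

model_dir = "/vllm-workspace/DeepSeek-V2-Lite-gptq-4bit"
try:
    generation_config = GenerationConfig.from_pretrained(model_dir, trust_remote_code=True)
except OSError:
    # generation_config.json is missing from the repo: use illustrative defaults.
    generation_config = GenerationConfig(do_sample=True, max_new_tokens=512)
```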