vllm [Usage]: ValueError: Cannot find the config file for awq

ffx8fchx · posted 2 months ago in Other

Current environment

PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.31

Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1058-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Byte Order:                         Little Endian
Address sizes:                      48 bits physical, 48 bits virtual
CPU(s):                             16
On-line CPU(s) list:                0-15
Thread(s) per core:                 2
Core(s) per socket:                 8
Socket(s):                          1
NUMA node(s):                       1
Vendor ID:                          AuthenticAMD
CPU family:                         25
Model:                              1
Model name:                         AMD EPYC 7R13 Processor
Stepping:                           1
CPU MHz:                            2650.000
BogoMIPS:                           5300.00
Hypervisor vendor:                  KVM
Virtualization type:                full
L1d cache:                          256 KiB
L1i cache:                          256 KiB
L2 cache:                           4 MiB
L3 cache:                           32 MiB
NUMA node0 CPU(s):                  0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid

Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] nvidia-nccl-cu12==2.18.1
[pip3] pytorch-lightning==2.2.1
[pip3] torch==2.1.2
[pip3] torchmetrics==1.2.0
[pip3] triton==2.1.0
[conda] numpy                     1.26.2                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.18.1                   pypi_0    pypi
[conda] pytorch-lightning         2.2.1                    pypi_0    pypi
[conda] torch                     2.1.2                    pypi_0    pypi
[conda] torchmetrics              1.2.0                    pypi_0    pypi
[conda] triton                    2.1.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.0.post1
vLLM Build Flags:
CUDA Archs: 5.0;6.0;7.0;7.5;8.0;8.6;9.0; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      0-15    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

How would you like to use vllm

I would like to run inference on a fine-tuned Mistral-7B-v0.1.

from vllm import LLM

self.model = LLM(model="Mistral-7B-v0.1",
                 trust_remote_code=True,  # mandatory for hf models
                 dtype="bfloat16",
                 gpu_memory_utilization=0.95,
                 quantization="awq",
                 max_model_len=8192,
                 # max_new_tokens=128,
                 # top_k=10,
                 # top_p=0.95,
                 # temperature=0.8,
                 # tensor_parallel_size=4
                 )

Even though I have seen other people use this with the same model, it still fails with ValueError: Cannot find the config file for awq. Any suggestions?
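
For comparison, a minimal sketch of loading a checkpoint that already ships AWQ quantization metadata (the model name, dtype, and prompt here are illustrative assumptions, not taken from this thread):

from vllm import LLM, SamplingParams

# Sketch: load a checkpoint that already contains AWQ quantization metadata.
# "TheBloke/Llama-2-7b-Chat-AWQ" is used purely as an example of a pre-quantized model.
llm = LLM(
    model="TheBloke/Llama-2-7b-Chat-AWQ",
    quantization="awq",
    dtype="half",                  # AWQ kernels run in float16, not bfloat16
    gpu_memory_utilization=0.95,
    max_model_len=4096,
)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)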

jv2fixgn1#

Hi @grumpyp, did you manage to solve this? I'm running into the same issue.

vom3gejh2#

Hi Xinyumi. I don't believe I ever solved it. If you find anything out, please let me know. My "workaround" was to use a different package!

kyxcudwk3#

Hi @grumpyp, thanks for the feedback. I hit this problem when running benchmark_throughput.py: whenever I pick any quantization method other than "fp8", it fails with the same cannot-find-config-file error. Could you tell me which other package works for you? 😊

rta7y2nd5#

Hi @grumpyp, I suspect the reason is that the HF model "Mistral-7B-v0.1" has no quant_config; it needs to be quantized with autoawq before vLLM can load it with "awq". You can try the HF model "TheBloke/Llama-2-7b-Chat-AWQ", which works fine.
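
A rough sketch of that quantize-then-load workflow with AutoAWQ (the paths and quant_config values below are illustrative assumptions; AutoAWQ's defaults may differ across versions):

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "path/to/finetuned-Mistral-7B-v0.1"      # hypothetical local checkpoint
quant_path = "path/to/finetuned-Mistral-7B-v0.1-awq"  # output directory

# Typical 4-bit AWQ settings; adjust as needed.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model.quantize(tokenizer, quant_config=quant_config)  # runs AWQ calibration
model.save_quantized(quant_path)                      # writes the quantization metadata vLLM looks for
tokenizer.save_pretrained(quant_path)

# The quantized folder can then be loaded in vLLM:
#   LLM(model=quant_path, quantization="awq", dtype="half")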

gg0vcinb6#

Thanks, I may give that a try if I ever need to change my currently running setup again. Thank you.
