[Bug]: RuntimeError: CHECK_EQ(paged_kv_indptr.size(0), batch_size + 1) failed, 1 vs 257, when loading gemma-2-9b-it with vLLM

tnkciper · posted 8 months ago in Other

Current environment

  Collecting environment information...
  PyTorch version: 2.3.1+cu121
  Is debug build: False
  CUDA used to build PyTorch: 12.1
  ROCM used to build PyTorch: N/A
  OS: Ubuntu 20.04.6 LTS (x86_64)
  GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
  Clang version: Could not collect
  CMake version: version 3.29.3
  Libc version: glibc-2.31
  Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
  Python platform: Linux-5.15.0-117-generic-x86_64-with-glibc2.31
  Is CUDA available: True
  CUDA runtime version: 11.8.89
  CUDA_MODULE_LOADING set to: LAZY
  GPU models and configuration:
  GPU 0: NVIDIA A100 80GB PCIe
  GPU 1: NVIDIA A100 80GB PCIe
  GPU 2: NVIDIA A100 80GB PCIe
  GPU 3: NVIDIA A100 80GB PCIe
  Nvidia driver version: 535.183.01
  cuDNN version: Could not collect
  HIP runtime version: N/A
  MIOpen runtime version: N/A
  Is XNNPACK available: True
  CPU:
  Architecture: x86_64
  CPU op-mode(s): 32-bit, 64-bit
  Byte Order: Little Endian
  Address sizes: 46 bits physical, 48 bits virtual
  CPU(s): 72
  On-line CPU(s) list: 0-71
  Thread(s) per core: 2
  Core(s) per socket: 18
  Socket(s): 2
  NUMA node(s): 2
  Vendor ID: GenuineIntel
  CPU family: 6
  Model: 85
  Model name: Intel(R) Xeon(R) Gold 5220 CPU @ 2.20GHz
  Stepping: 7
  CPU MHz: 2200.000
  CPU max MHz: 3900.0000
  CPU min MHz: 1000.0000
  BogoMIPS: 4400.00
  Virtualization: VT-x
  L1d cache: 1.1 MiB
  L1i cache: 1.1 MiB
  L2 cache: 36 MiB
  L3 cache: 49.5 MiB
  NUMA node0 CPU(s): 0-17,36-53
  NUMA node1 CPU(s): 18-35,54-71
  Vulnerability Gather data sampling: Mitigation; Microcode
  Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
  Vulnerability L1tf: Not affected
  Vulnerability Mds: Not affected
  Vulnerability Meltdown: Not affected
  Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
  Vulnerability Reg file data sampling: Not affected
  Vulnerability Retbleed: Mitigation; Enhanced IBRS
  Vulnerability Spec rstack overflow: Not affected
  Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
  Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
  Vulnerability Srbds: Not affected
  Vulnerability Tsx async abort: Mitigation; TSX disabled
  Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
  Versions of relevant libraries:
  [pip3] flashinfer==0.1.3+cu121torch2.3
  [pip3] mypy-extensions==1.0.0
  [pip3] numpy==1.26.4
  [pip3] nvidia-nccl-cu11==2.20.5
  [pip3] nvidia-nccl-cu12==2.20.5
  [pip3] onnxruntime==1.18.0
  [pip3] optree==0.11.0
  [pip3] pytorch_revgrad==0.2.0
  [pip3] sentence-transformers==3.0.1
  [pip3] torch==2.3.1
  [pip3] torchaudio==2.3.1
  [pip3] torchvision==0.18.1
  [pip3] transformers==4.43.1
  [pip3] transformers-stream-generator==0.0.5
  [pip3] triton==2.3.1
  [conda] flashinfer 0.1.3+cu121torch2.3 pypi_0 pypi
  [conda] numpy 1.26.4 pypi_0 pypi
  [conda] nvidia-nccl-cu11 2.20.5 pypi_0 pypi
  [conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
  [conda] optree 0.11.0 pypi_0 pypi
  [conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
  [conda] pytorch-revgrad 0.2.0 pypi_0 pypi
  [conda] sentence-transformers 3.0.1 pypi_0 pypi
  [conda] torch 2.3.1 pypi_0 pypi
  [conda] torchaudio 2.3.1 pypi_0 pypi
  [conda] torchvision 0.18.1 pypi_0 pypi
  [conda] transformers 4.43.1 pypi_0 pypi
  [conda] transformers-stream-generator 0.0.5 pypi_0 pypi
  [conda] triton 2.3.1 pypi_0 pypi
  ROCM Version: Could not collect
  Neuron SDK Version: N/A
  vLLM Version: 0.5.3.post1
  vLLM Build Flags:
  CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
  GPU Topology:
  GPU0 GPU1 GPU2 GPU3 CPU Affinity NUMA Affinity GPU NUMA ID
  GPU0 X NODE SYS SYS 0-17,36-53 0 N/A
  GPU1 NODE X SYS SYS 0-17,36-53 0 N/A
  GPU2 SYS SYS X NODE 18-35,54-71 1 N/A
  GPU3 SYS SYS NODE X 18-35,54-71 1 N/A
  Legend:
  X = Self
  SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX = Connection traversing at most a single PCIe bridge
  NV# = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

  from langchain_community.llms import VLLM

  llm = VLLM(
      model="/media/user/datadisk/LLM_models/ko-gemma-2-9b-it",
      # model received from huggingface with the git clone command
      trust_remote_code=True,
      max_new_tokens=4096,
      top_k=3,
      top_p=0.9,
      temperature=0.7,
  )

Hello, I ran into several bugs while loading a fine-tuned gemma-2-9b model with the vLLM library.

  1. Please use Flashinfer backend for models with logits_soft_cap (i.e., Gemma-2). Otherwise, the output might be wrong. Set Flashinfer backend by export VLLM_ATTENTION_BACKEND=FLASHINFER. (type=value_error)

The error above was resolved by setting the environment variable as follows:

  import os
  os.environ['VLLM_ATTENTION_BACKEND'] = 'FLASHINFER'
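One caveat worth noting (this is my assumption about when vLLM reads the variable to pick its attention backend, not something stated in this thread): the variable has to be in the environment before the engine is constructed, so the safest place to set it is at the very top of the entry script:

  import os

  # Assumption: backend selection happens while the vLLM engine is being
  # constructed, so the variable must already be set by then; setting it
  # after VLLM(...) has run would have no effect.
  os.environ['VLLM_ATTENTION_BACKEND'] = 'FLASHINFER'

  from langchain_community.llms import VLLM  # imported only after the variable is set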
After that, a second error appeared:

  2. 'NoneType' object is not callable (type=type_error)

That error, in turn, was resolved by installing the flashinfer library, as described in #6445.
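For reference, flashinfer wheels are installed from the project's own wheel index (the index URL used in the answer at the bottom of this thread is https://flashinfer.ai/whl/cu121/torch2.3, matching CUDA 12.1 and torch 2.3). After installing, it is worth confirming which build actually ended up in the environment:

  # Print the installed flashinfer build; it should match the one in the
  # environment dump above, e.g. "0.1.3+cu121torch2.3".
  from importlib.metadata import version

  print(version("flashinfer"))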

Since then, I have been running into the following problem:

  [rank0]: Traceback (most recent call last):
  [rank0]: File "/home/user/anaconda3/envs/llm-api/bin/uvicorn", line 8, in <module>
  [rank0]: sys.exit(main())
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
  [rank0]: return self.main(*args, **kwargs)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/click/core.py", line 1078, in main
  [rank0]: rv = self.invoke(ctx)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
  [rank0]: return ctx.invoke(self.callback, **ctx.params)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/click/core.py", line 783, in invoke
  [rank0]: return __callback(*args, **kwargs)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/uvicorn/main.py", line 409, in main
  [rank0]: run(
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/uvicorn/main.py", line 575, in run
  [rank0]: server.run()
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/uvicorn/server.py", line 65, in run
  [rank0]: return asyncio.run(self.serve(sockets=sockets))
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/asyncio/runners.py", line 44, in run
  [rank0]: return loop.run_until_complete(main)
  [rank0]: File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/uvicorn/server.py", line 69, in serve
  [rank0]: await self._serve(sockets)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/uvicorn/server.py", line 76, in _serve
  [rank0]: config.load()
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/uvicorn/config.py", line 433, in load
  [rank0]: self.loaded_app = import_from_string(self.app)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/uvicorn/importer.py", line 19, in import_from_string
  [rank0]: module = importlib.import_module(module_str)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/importlib/__init__.py", line 126, in import_module
  [rank0]: return _bootstrap._gcd_import(name[level:], package, level)
  [rank0]: File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  [rank0]: File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  [rank0]: File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  [rank0]: File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  [rank0]: File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  [rank0]: File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  [rank0]: File "/home/user/Patent-LLM/main.py", line 6, in <module>
  [rank0]: from routers import chat_manager, clear_memory
  [rank0]: File "/home/user/Patent-LLM/routers/clear_memory.py", line 5, in <module>
  [rank0]: from core.get_session import get_user_id
  [rank0]: File "/home/user/Patent-LLM/core/get_session.py", line 8, in <module>
  [rank0]: from core.llm import llm
  [rank0]: File "/home/user/Patent-LLM/core/llm.py", line 16, in <module>
  [rank0]: llm = VLLM(
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/pydantic/v1/main.py", line 339, in __init__
  [rank0]: values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/pydantic/v1/main.py", line 1048, in validate_model
  [rank0]: input_data = validator(cls_, input_data)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/langchain_core/utils/pydantic.py", line 149, in wrapper
  [rank0]: return func(cls, values)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/langchain_community/llms/vllm.py", line 89, in validate_environment
  [rank0]: values["client"] = VLLModel(
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/vllm/vllm/entrypoints/llm.py", line 155, in __init__
  [rank0]: self.llm_engine = LLMEngine.from_engine_args(
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/vllm/vllm/engine/llm_engine.py", line 441, in from_engine_args
  [rank0]: engine = cls(
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/vllm/vllm/engine/llm_engine.py", line 265, in __init__
  [rank0]: self._initialize_kv_caches()
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/vllm/vllm/engine/llm_engine.py", line 364, in _initialize_kv_caches
  [rank0]: self.model_executor.determine_num_available_blocks())
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/vllm/vllm/executor/gpu_executor.py", line 94, in determine_num_available_blocks
  [rank0]: return self.driver_worker.determine_num_available_blocks()
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  [rank0]: return func(*args, **kwargs)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/vllm/vllm/worker/worker.py", line 179, in determine_num_available_blocks
  [rank0]: self.model_runner.profile_run()
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  [rank0]: return func(*args, **kwargs)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/vllm/vllm/worker/model_runner.py", line 896, in profile_run
  [rank0]: self.execute_model(model_input, kv_caches, intermediate_tensors)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  [rank0]: return func(*args, **kwargs)
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/vllm/vllm/worker/model_runner.py", line 1292, in execute_model
  [rank0]: model_input.attn_metadata.begin_forward()
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/vllm/vllm/attention/backends/flashinfer.py", line 146, in begin_forward
  [rank0]: self.prefill_wrapper.begin_forward(
  [rank0]: File "/home/user/anaconda3/envs/llm-api/lib/python3.10/site-packages/flashinfer/prefill.py", line 791, in begin_forward
  [rank0]: self._wrapper.begin_forward(
  [rank0]: RuntimeError: CHECK_EQ(paged_kv_indptr.size(0), batch_size + 1) failed. 1 vs 257

I have loaded and used several other models through the vLLM library, but this is the first time I have run into this problem, and I could not find any documentation on it.
I (naively) tried to force the sizes to match by hard-coding the batch_size variable to 256 or to 0, but that only changed the numbers on either side of the "vs". Is there a way to fix this?
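For context on what the failing check asserts (my own reading of flashinfer's paged-KV layout, not something stated in this thread): paged_kv_indptr is a CSR-style offset array, where entry i marks where sequence i's block list begins inside paged_kv_indices, so a batch of N sequences needs exactly N + 1 entries. "1 vs 257" therefore means flashinfer received a one-element (effectively empty) indptr for a batch of 256 sequences, which is why hard-coding batch_size only shifts the numbers instead of fixing anything. A minimal sketch of the invariant, assuming a hypothetical one-block-per-sequence layout:

  import torch

  batch_size = 256
  # Hypothetical layout: every sequence owns exactly one KV-cache block.
  blocks_per_seq = torch.ones(batch_size, dtype=torch.int32)

  # CSR-style offsets: a leading 0 followed by the running total of blocks,
  # giving batch_size + 1 entries in total.
  paged_kv_indptr = torch.cat([
      torch.zeros(1, dtype=torch.int32),
      torch.cumsum(blocks_per_seq, dim=0, dtype=torch.int32),
  ])

  assert paged_kv_indptr.size(0) == batch_size + 1  # the invariant CHECK_EQ enforces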

polhcujo 1#

+1, seeing the same thing.

von4xj4u 2#

This appears to be a bug in flashinfer 0.1.3.
Downgrading with pip install flashinfer==0.1.2 -i https://flashinfer.ai/whl/cu121/torch2.3 successfully fixed the same problem for me.
