text-generation-inference [BUG] Running an FP8-quantized model on NVIDIA L4 fails (repack_fp8_for_marlin)

but5z9lq · posted 7 months ago · in: Other

System Info

  • Hardware: AWS g6.12xlarge (us-east-2) / 4x NVIDIA L4 GPUs
  • OS: Ubuntu 24.04 LTS (Noble Numbat)
  • NVIDIA driver: nvidia-open 560.28.03
  • CUDA: 12.6
  • Docker: Docker version 27.1.1, build 6312585
  • NVIDIA Container Toolkit: 1.16.1
  • TGI: ghcr.io/huggingface/text-generation-inference:latest (4b44be4c038f)
  +-----------------------------------------------------------------------------------------+
  | NVIDIA-SMI 560.28.03 Driver Version: 560.28.03 CUDA Version: 12.6 |
  |-----------------------------------------+------------------------+----------------------+
  | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
  | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
  | | | MIG M. |
  |=========================================+========================+======================|
  | 0 NVIDIA L4 Off | 00000000:38:00.0 Off | 0 |
  | N/A 41C P8 16W / 72W | 1MiB / 23034MiB | 0% Default |
  | | | N/A |
  +-----------------------------------------+------------------------+----------------------+
  | 1 NVIDIA L4 Off | 00000000:3A:00.0 Off | 0 |
  | N/A 42C P8 17W / 72W | 1MiB / 23034MiB | 0% Default |
  | | | N/A |
  +-----------------------------------------+------------------------+----------------------+
  | 2 NVIDIA L4 Off | 00000000:3C:00.0 Off | 0 |
  | N/A 41C P8 17W / 72W | 1MiB / 23034MiB | 0% Default |
  | | | N/A |
  +-----------------------------------------+------------------------+----------------------+
  | 3 NVIDIA L4 Off | 00000000:3E:00.0 Off | 0 |
  | N/A 38C P8 16W / 72W | 1MiB / 23034MiB | 0% Default |
  | | | N/A |
  +-----------------------------------------+------------------------+----------------------+

Information

  • Docker
  • The CLI directly

Tasks

  • An officially supported command
  • My own modifications

Reproduction

To reproduce, run the following shell script:

  model=neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8
  volume=$PWD/weights
  token=<REDACTED>
  docker run --rm --runtime nvidia --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
      ghcr.io/huggingface/text-generation-inference:latest --model-id $model

The following exception is raised during startup:

  2024-08-09T12:44:30.578630Z INFO text_generation_launcher: Args {
      model_id: "neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8",
      revision: None,
      validation_workers: 2,
      sharded: None,
      num_shard: None,
      quantize: None,
      speculate: None,
      dtype: None,
      trust_remote_code: false,
      max_concurrent_requests: 128,
      max_best_of: 2,
      max_stop_sequences: 4,
      max_top_n_tokens: 5,
      max_input_tokens: None,
      max_input_length: None,
      max_total_tokens: None,
      waiting_served_ratio: 0.3,
      max_batch_prefill_tokens: None,
      max_batch_total_tokens: None,
      max_waiting_tokens: 20,
      max_batch_size: None,
      cuda_graphs: None,
      port: 80,
      shard_uds_path: "/tmp/text-generation-server",
      master_addr: "localhost",
      master_port: 29500,
      huggingface_hub_cache: Some(
          "/data",
      ),
      weights_cache_override: None,
      disable_custom_kernels: false,
      cuda_memory_fraction: 1.0,
      rope_scaling: None,
      rope_factor: None,
      json_output: false,
      otlp_endpoint: None,
      otlp_service_name: "text-generation-inference.router",
      cors_allow_origin: [],
      api_key: None,
      watermark_gamma: None,
      watermark_delta: None,
      ngrok: false,
      ngrok_authtoken: None,
      ngrok_edge: None,
      tokenizer_config_path: None,
      disable_grammar_support: false,
      env: false,
      max_client_batch_size: 4,
      lora_adapters: None,
      usage_stats: On,
  }
  2024-08-09T12:44:30.578721Z INFO hf_hub: Token file not found "/root/.cache/huggingface/token"
  2024-08-09T12:44:30.632826Z INFO text_generation_launcher: Model supports up to 131072 but tgi will now set its default to 4096 instead. This is to save VRAM by refusing large prompts in order to allow more users on the same hardware. You can increase that size using `--max-batch-prefill-tokens=131122 --max-total-tokens=131072 --max-input-tokens=131071`.
  2024-08-09T12:44:30.632839Z INFO text_generation_launcher: Default `max_input_tokens` to 4095
  2024-08-09T12:44:30.632841Z INFO text_generation_launcher: Default `max_total_tokens` to 4096
  2024-08-09T12:44:30.632843Z INFO text_generation_launcher: Default `max_batch_prefill_tokens` to 4145
  2024-08-09T12:44:30.632844Z INFO text_generation_launcher: Using default cuda graphs [1, 2, 4, 8, 16, 32]
  2024-08-09T12:44:30.632939Z INFO download: text_generation_launcher: Starting check and download process for neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8
  2024-08-09T12:44:33.840860Z INFO text_generation_launcher: Files are already present on the host. Skipping download.
  2024-08-09T12:44:34.638228Z INFO download: text_generation_launcher: Successfully downloaded weights for neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8
  2024-08-09T12:44:34.638608Z INFO shard-manager: text_generation_launcher: Starting shard rank=0
  2024-08-09T12:44:39.179449Z INFO text_generation_launcher: GPU does not support FP8, using Marlin FP8 kernel
  2024-08-09T12:44:39.213012Z ERROR text_generation_launcher: Error when initializing model
  Traceback (most recent call last):
    File "/opt/conda/bin/text-generation-server", line 8, in <module>
      sys.exit(app())
    File "/opt/conda/lib/python3.10/site-packages/typer/main.py", line 311, in __call__
      return get_command(self)(*args, **kwargs)
    File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
      return self.main(*args, **kwargs)
    File "/opt/conda/lib/python3.10/site-packages/typer/core.py", line 778, in main
      return _main(
    File "/opt/conda/lib/python3.10/site-packages/typer/core.py", line 216, in _main
      rv = self.invoke(ctx)
    File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
      return _process_result(sub_ctx.command.invoke(sub_ctx))
    File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
      return ctx.invoke(self.callback, **ctx.params)
    File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 783, in invoke
      return __callback(*args, **kwargs)
    File "/opt/conda/lib/python3.10/site-packages/typer/main.py", line 683, in wrapper
      return callback(**use_params) # type: ignore
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/cli.py", line 109, in serve
      server.serve(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/server.py", line 274, in serve
      asyncio.run(
    File "/opt/conda/lib/python3.10/asyncio/runners.py", line 44, in run
      return loop.run_until_complete(main)
    File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 636, in run_until_complete
      self.run_forever()
    File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
      self._run_once()
    File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
      handle._run()
    File "/opt/conda/lib/python3.10/asyncio/events.py", line 80, in _run
      self._context.run(self._callback, *self._args)
  > File "/opt/conda/lib/python3.10/site-packages/text_generation_server/server.py", line 229, in serve_inner
      model = get_model_with_lora_adapters(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/__init__.py", line 1195, in get_model_with_lora_adapters
      model = get_model(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/__init__.py", line 766, in get_model
      return FlashCausalLM(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/flash_causal_lm.py", line 896, in __init__
      model = model_class(prefix, config, weights)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 528, in __init__
      self.model = FlashLlamaModel(prefix, config, weights)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 418, in __init__
      FlashLlamaLayer(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 346, in __init__
      self.self_attn = FlashLlamaAttention(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 166, in __init__
      self.query_key_value = load_attention(config, prefix, weights, index)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 94, in load_attention
      base_layer = TensorParallelColumnLinear.load_multi(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/tensor_parallel.py", line 179, in load_multi
      linear = get_linear(weight, bias)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/linear.py", line 102, in get_linear
      return weight.get_linear(bias)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/fp8.py", line 185, in get_linear
      return get_fp8_linear().from_fp8(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/marlin/fp8.py", line 66, in from_fp8
      return cls(qweight=weight, scales=scale.to(dtype), bias=bias)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/marlin/fp8.py", line 45, in __init__
      qweight, scales = repack_fp8_for_marlin(qweight, scales)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/marlin/fp8.py", line 138, in repack_fp8_for_marlin
      scales = permute_scales(scales)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/marlin/util.py", line 48, in permute_scales
      scales = scales.reshape((-1, len(scale_perm_single)))[:, scale_perm_single]
  RuntimeError: shape '[-1, 32]' is invalid for input of size 3
  2024-08-09T12:44:40.147176Z ERROR shard-manager: text_generation_launcher: Shard complete standard error output:
  2024-08-09 12:44:36.570 | INFO | text_generation_server.utils.import_utils:<module>:73 - Detected system cuda
  /opt/conda/lib/python3.10/site-packages/text_generation_server/utils/sgmv.py:18: UserWarning: Could not import SGMV kernel from Punica, falling back to loop.
    warnings.warn("Could not import SGMV kernel from Punica, falling back to loop.")
  /opt/conda/lib/python3.10/site-packages/mamba_ssm/ops/selective_scan_interface.py:159: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
    def forward(ctx, xz, conv1d_weight, conv1d_bias, x_proj_weight, delta_proj_weight,
  /opt/conda/lib/python3.10/site-packages/mamba_ssm/ops/selective_scan_interface.py:232: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
    def backward(ctx, dout):
  /opt/conda/lib/python3.10/site-packages/mamba_ssm/ops/triton/layernorm.py:508: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
    def forward(
  /opt/conda/lib/python3.10/site-packages/mamba_ssm/ops/triton/layernorm.py:567: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
    def backward(ctx, dout, *args):
  /opt/conda/lib/python3.10/site-packages/torch/distributed/c10d_logger.py:79: FutureWarning: You are using a Backend <class 'text_generation_server.utils.dist.FakeGroup'> as a ProcessGroup. This usage is deprecated since PyTorch 2.0. Please use a public API of PyTorch Distributed instead.
    return func(*args, **kwargs)
  Traceback (most recent call last):
    File "/opt/conda/bin/text-generation-server", line 8, in <module>
      sys.exit(app())
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/cli.py", line 109, in serve
      server.serve(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/server.py", line 274, in serve
      asyncio.run(
    File "/opt/conda/lib/python3.10/asyncio/runners.py", line 44, in run
      return loop.run_until_complete(main)
    File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
      return future.result()
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/server.py", line 229, in serve_inner
      model = get_model_with_lora_adapters(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/__init__.py", line 1195, in get_model_with_lora_adapters
      model = get_model(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/__init__.py", line 766, in get_model
      return FlashCausalLM(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/flash_causal_lm.py", line 896, in __init__
      model = model_class(prefix, config, weights)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 528, in __init__
      self.model = FlashLlamaModel(prefix, config, weights)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 418, in __init__
      FlashLlamaLayer(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 346, in __init__
      self.self_attn = FlashLlamaAttention(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 166, in __init__
      self.query_key_value = load_attention(config, prefix, weights, index)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 94, in load_attention
      base_layer = TensorParallelColumnLinear.load_multi(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/tensor_parallel.py", line 179, in load_multi
      linear = get_linear(weight, bias)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/linear.py", line 102, in get_linear
      return weight.get_linear(bias)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/fp8.py", line 185, in get_linear
      return get_fp8_linear().from_fp8(
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/marlin/fp8.py", line 66, in from_fp8
      return cls(qweight=weight, scales=scale.to(dtype), bias=bias)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/marlin/fp8.py", line 45, in __init__
      qweight, scales = repack_fp8_for_marlin(qweight, scales)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/marlin/fp8.py", line 138, in repack_fp8_for_marlin
      scales = permute_scales(scales)
    File "/opt/conda/lib/python3.10/site-packages/text_generation_server/layers/marlin/util.py", line 48, in permute_scales
      scales = scales.reshape((-1, len(scale_perm_single)))[:, scale_perm_single]
  RuntimeError: shape '[-1, 32]' is invalid for input of size 3
  rank=0
  2024-08-09T12:44:40.244304Z ERROR text_generation_launcher: Shard 0 failed to start
  Error: ShardCannotStart
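
For what it's worth, the failing reshape can be reproduced in isolation. This is only a minimal sketch, not TGI code: the real scale_perm_single in marlin/util.py is a 32-element permutation (only its length matters for the error), and the size-3 scales tensor is my assumption based on the traceback (apparently one per-tensor FP8 scale per fused q/k/v projection):

  import torch

  SCALE_PERM_SINGLE_LEN = 32   # length of Marlin's scale_perm_single permutation
  scales = torch.ones(3)       # assumed: one per-tensor scale per fused q/k/v weight

  # Fails with the same message as the log above:
  #   RuntimeError: shape '[-1, 32]' is invalid for input of size 3
  scales.reshape((-1, SCALE_PERM_SINGLE_LEN))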

Expected behavior

I would expect shard initialization to complete without any problems; TGI should start up and serve the model normally.

yhxst69z1#

I couldn't find an existing report here, but I did find a similar issue over at vLLM. They actually just fixed it, so maybe their change could be ported to TGI?

pexxcrt22#

Hi @DrNochi 👋
Thanks for reporting this issue, ccing @danieldk, he's a Marlin maven!

kmbjn2e33#

Side note: why does TGI fall back to the Marlin kernel at all? As far as I know, the NVIDIA L4 uses the Ada Lovelace architecture with CUDA compute capability 8.9, which should provide hardware support for FP8 (see the NVIDIA CUDA documentation). Am I missing something?
After a quick look at the code I found PR #2277, which is part of the latest release and essentially "blocks" TGI from using native FP8 support by forcing the Marlin kernel for CC 8.9. I couldn't find any issue or further explanation related to these changes. Maybe @OlivierDehaene can explain the reasoning behind this change?

3vpjnl9f4#

> Side note: why does TGI fall back to the Marlin kernel at all? As far as I know, the NVIDIA L4 uses the Ada Lovelace architecture with CUDA compute capability 8.9, which should provide hardware support for FP8 (see the NVIDIA CUDA documentation). Am I missing something?

We switched to fbgemm-gpu for FP8 matrix multiplication. However, it uses TMA (Tensor Memory Accelerator), which is not supported on CC 8.9.
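
For context, here is a minimal sketch (assuming PyTorch with CUDA; not TGI code) of the capability distinction: Ada Lovelace (CC 8.9, e.g. the L4) exposes FP8 tensor cores, but TMA is a Hopper (CC 9.0+) feature, so the fbgemm-gpu path is unavailable there and TGI falls back to Marlin:

  import torch

  major, minor = torch.cuda.get_device_capability()   # (8, 9) on an NVIDIA L4
  print("compute capability:", (major, minor))

  has_fp8_tensor_cores = (major, minor) >= (8, 9)     # Ada Lovelace and newer
  has_tma = (major, minor) >= (9, 0)                  # TMA requires Hopper or newer

  print("native FP8 tensor cores:", has_fp8_tensor_cores)
  print("TMA (needed by the fbgemm-gpu FP8 path):", has_tma)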
