vllm [Bug]: Server errors when hosting TheBloke/Llama-2-7B-Chat-GPTQ with chunked prefill

y0u0uwnf · asked 6 months ago

# Describe the bug

I tried to run Llama-2-7b-gptq with the Marlin kernel and chunked prefill enabled, but the server raised errors. Running Llama-2-7b-gptq with the Marlin kernel and without chunked prefill works without any errors.

# Example code to reproduce the issue

python -m vllm.entrypoints.openai.api_server --disable-log-requests --model TheBloke/Llama-2-7B-Chat-GPTQ --seed 0 --enable-chunked-prefill
python benchmarks/benchmark_serving.py --model TheBloke/Llama-2-7B-Chat-GPTQ --dataset-path /u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json --request-rate 10 --num-prompts 1000 --seed 0
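
To narrow things down, a single streaming request against the OpenAI-compatible endpoint may already be enough to trigger the error, independent of the benchmark script. The sketch below is not part of the original report; it assumes the server started above is reachable at localhost:8000 and uses the plain `requests` library.

```python
# Minimal sketch (assumption: the server from the command above is on localhost:8000).
# Sends one streaming completion request and prints the SSE lines as they arrive.
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "TheBloke/Llama-2-7B-Chat-GPTQ",
        "prompt": "Hello, my name is",
        "max_tokens": 32,
        "stream": True,
    },
    stream=True,
    timeout=60,
)
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))
```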


# Server output

INFO: ::1:36304 - "POST /v1/completions HTTP/1.1" 200 OK
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 265, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 261, in wrap
    await func()
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 238, in listen_for_disconnect
    message = await receive()
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 553, in receive
    await self.message_event.wait()
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/locks.py", line 226, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7fed68137880

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
    return await self.app(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 75, in app
    await response(scope, receive, send)
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 265, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 680, in __aexit__
    raise BaseExceptionGroup(
exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
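
The outer traceback above only shows the cancelled `listen_for_disconnect` task; the actual failure is wrapped inside the final `exceptiongroup.ExceptionGroup` raised by the anyio task group ("1 sub-exception"). Not part of the original report, but as a local debugging aid, a helper like the one below (assuming the `exceptiongroup` backport that anyio pulls in on Python 3.9) can unwrap the group so the real sub-exception traceback becomes visible.

```python
# Hypothetical debugging helper: recursively print the leaf exceptions hidden
# inside a (Base)ExceptionGroup, since the outer traceback only shows the
# cancelled listen_for_disconnect task.
import traceback

from exceptiongroup import BaseExceptionGroup  # backport; built in on Python 3.11+


def print_leaf_exceptions(exc: BaseException) -> None:
    if isinstance(exc, BaseExceptionGroup):
        for sub in exc.exceptions:
            print_leaf_exceptions(sub)
    else:
        traceback.print_exception(type(exc), exc, exc.__traceback__)
```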


# Client output

$ python benchmarks/benchmark_serving.py --model TheBloke/Llama-2-7B-Chat-GPTQ --dataset-path /u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json --request-rate 10 --num-prompts 100
Namespace(backend='vllm', base_url=None, host='localhost', port=8000, endpoint='/v1/completions', dataset=None, dataset_name='sharegpt', dataset_path='/u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json', model='TheBloke/Llama-2-7B-Chat-GPTQ', tokenizer=None, best_of=1, use_beam_search=False, num_prompts=100, sharegpt_output_len=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, request_rate=10.0, seed=0, trust_remote_code=False, disable_tqdm=False, save_result=False, metadata=None, result_dir=None, result_filename=None)
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Starting initial single prompt test run...
Initial test run completed.
Starting main benchmark run...
Traffic request rate: 10.0
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:09<00:00, 10.23it/s]
============ Serving Benchmark Result ============
Successful requests: 1
Benchmark duration (s): 9.77
Total input tokens: 28
Total generated tokens: 18
Request throughput (req/s): 0.10
Input token throughput (tok/s): 2.86
Output token throughput (tok/s): 1.84
---------------Time to First Token----------------
Mean TTFT (ms): 17.51
Median TTFT (ms): 17.51
P99 TTFT (ms): 17.51
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 8.03
Median TPOT (ms): 8.03
P99 TPOT (ms): 8.03
---------------Inter-token Latency----------------
Mean ITL (ms): 8.04
Median ITL (ms): 7.45
P99 ITL (ms): 14.29

h22fl7wq · 1#

Hi @rkooo567,
I have opened a new issue. Could you please take a look when you have time? Thanks.
