vLLM [Bug]: With --enable-prefix-caching, /completions with echo=True crashes the server above a certain prompt length

pgccezyw · asked 6 months ago

Current environment

vLLM 0.4.3

RTX 4090 24GB (also reproduces on an A100)

🐛 Describe the bug

Hi,
When the server is started as follows:

python -m vllm.entrypoints.openai.api_server --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --enable-prefix-caching

and the following client code is run:

import openai

client = openai.OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="foo"
)

prompt = [1] * 256  # prompt passed as a list of 256 token IDs
out = client.completions.create(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    prompt=prompt,
    max_tokens=1,
    logprobs=5,
    echo=True
)
print(out)

the following assertion is triggered:

INFO:     127.0.0.1:39724 - "POST /v1/completions HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/starlette/routing.py", line 72, in app
    response = await func(request)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 118, in create_completion
    generator = await openai_serving_completion.create_completion(
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/entrypoints/openai/serving_completion.py", line 166, in create_completion
    async for i, res in result_generator:
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/utils.py", line 244, in consumer
    raise e
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/utils.py", line 235, in consumer
    raise item
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/utils.py", line 219, in producer
    async for item in iterator:
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 662, in generate
    async for output in self._process_request(
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 769, in _process_request
    raise e
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 765, in _process_request
    async for request_output in stream:
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 80, in __anext__
    raise result
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 40, in _raise_exception_on_finish
    task.result()
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 521, in run_engine_loop
    has_requests_in_progress = await asyncio.wait_for(
  File "/usr/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
    return fut.result()
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 495, in engine_step
    request_outputs = await self.engine.step_async()
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 226, in step_async
    output = await self.model_executor.execute_model_async(
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/executor/gpu_executor.py", line 117, in execute_model_async
    output = await make_async(self.driver_worker.execute_model
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/worker/worker.py", line 272, in execute_model
    output = self.model_runner.execute_model(seq_group_metadata_list,
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 738, in execute_model
    output = self.model.sample(
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 378, in sample
    next_tokens = self.sampler(logits, sampling_metadata)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/model_executor/layers/sampler.py", line 112, in forward
    prompt_logprobs, sample_logprobs = _get_logprobs(
  File "/home/user/code/play-vllm/.venv/lib/python3.10/site-packages/vllm/model_executor/layers/sampler.py", line 760, in _get_logprobs
    assert len(next_token_ids) == len(query_indices)
AssertionError

The server then enters a dead state: vllm.engine.async_llm_engine.AsyncEngineDeadError: Background loop has errored already.
Since the error only triggers above a certain prompt-length threshold, I suspect this is an OOM masked by the assertion.
If I give the server more memory headroom by adding --gpu-memory-utilization 0.5, leaving 12GB of my RTX 4090's 24GB free, the error instead appears when the prompt size is increased to 512 tokens (full command below).
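For reference, that is the launch command from above with the extra flag appended:

python -m vllm.entrypoints.openai.api_server --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --enable-prefix-caching --gpu-memory-utilization 0.5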
Without echo=True this does not happen.
In the example above, without --enable-prefix-caching it handles prompt sizes up to 2047.
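To pin down the threshold, here is a minimal sweep sketch against the same server and client setup as above; the crashes_at helper, the probed lengths, and the openai.InternalServerError handling are illustrative additions, not part of the original report:

import openai

client = openai.OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="foo"
)

def crashes_at(n: int) -> bool:
    # True if a prompt of n token IDs makes the request fail with a 500.
    try:
        client.completions.create(
            model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
            prompt=[1] * n,
            max_tokens=1,
            logprobs=5,
            echo=True,
        )
        return False
    except openai.InternalServerError:
        return True

# Once the background loop has died, every later request fails too,
# so restart the server between probes.
for n in (128, 256, 512, 1024, 2047):
    print(n, crashes_at(n))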
Thanks!

hts6caw3 #1

Also seeing this on the LLM entrypoint with large batches.
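For reference, a minimal offline sketch of that setup (assuming echo=True with logprobs on the server corresponds to prompt_logprobs in SamplingParams; the model, batch size, and prompt text here are illustrative):

from vllm import LLM, SamplingParams

# Prefix caching enabled, prompt logprobs requested, and a large batch of
# identical prompts so many sequences share a cached prefix.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
          enable_prefix_caching=True)
params = SamplingParams(max_tokens=1, logprobs=5, prompt_logprobs=5)

prompts = ["Hello world. " * 200] * 64
outputs = llm.generate(prompts, params)
print(outputs[0].prompt_logprobs[:3])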

ewm0tg9j #2

@KuntaiDu, would you have time to take a look at this?

ff29svar #3

Same here. It may be an issue caused by 40-series cards and flash-attn.
#5678
#5537
#5376

#5376 (comment)
