vllm Yi-34B-Chat-4bits-GPTQ keeps emitting empty "" tokens until the maximum length is reached

p1iqtdky · asked 5 months ago in Other

Running on V100 32GB, launched with:

CUDA_VISIBLE_DEVICES=2,3 python \
    -m vllm.entrypoints.openai.api_server \
    --model="../models/Yi-34B-Chat-4bits-GPTQ" \
    --dtype half --port 8080 --served-model-name Yi-34B-Chat-4bits-GPTQ

The problem shows up after several rounds of conversation: the server keeps emitting empty "" tokens and never stops properly.

Here is an example request that triggers the problem.

curl -X 'POST' \
  'http://10.223.48.160:30002/v1/chat/completions' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "Yi-34B-Chat-4bits-GPTQ",
  "messages": [
    {"role": "system", "content": "你是XXX,由XXX集团的研发团队独立开发的大语言模型,你的使命是协助公司员工高效完成工作。现在,请开始你的工作。"},
    {"role": "user", "content": "你好吗"},
    {"role": "assistant", "content": "我很好,谢谢你的关心。我准备随时协助你解答问题或完成任务。请问你有任何具体的问题或者需要帮助的地方吗?"},
    {"role": "user", "content": "好尼玛"},
    {"role": "assistant", "content": "很好,很高兴听到你状态良好。如果你在工作中遇到问题或者需要帮助,请随时提问。我会尽力提供帮助。"},
    {"role": "user", "content": "哈哈哈"},
    {"role": "assistant", "content": "看起来你似乎很开心。如果你想要分享更多关于你的工作、生活中的积极经历,或者需要建议和指导,请随时告诉我。我会在力所能及的范围内提供帮助。"},
    {"role": "user", "content": "呵呵呵呵呵"}
  ],
  "temperature": 0.7,
  "top_p": 1,
  "n": 1,
  "max_tokens": 1024,
  "stream": true,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "user": "string",
  "best_of": 1,
  "top_k": -1,
  "ignore_eos": false,
  "use_beam_search": false,
  "stop_token_ids": [
    7
  ],
  "skip_special_tokens": true,
  "spaces_between_special_tokens": true,
  "add_generation_prompt": true,
  "echo": false,
  "repetition_penalty": 1,
  "min_p": 0
}'
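
The ids passed in stop_token_ids have to line up with the special tokens of the model's own tokenizer. A quick way to check which ids Yi's chat-template tokens map to (a sketch, assuming the transformers library and the local model path from the launch command above):

# Sketch: print the token ids of Yi's chat-template special tokens, so the
# stop_token_ids sent in the request can be checked against the tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("../models/Yi-34B-Chat-4bits-GPTQ", trust_remote_code=True)
for t in ["<|endoftext|>", "<|im_start|>", "<|im_end|>", "<|im_sep|>"]:
    print(t, tok.convert_tokens_to_ids(t))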

Once it gets stuck, the server just keeps streaming blank content until it stops at some point. After that, any subsequent request sent to it gets stuck in the same way.

data: {"id": "cmpl-49bf9c52893f4bd1ab1f5f107a7011ce", "object": "chat.completion.chunk", "created": 1186536, "model": "Yi-34B-Chat-4bits-GPTQ", "choices": [{"index": 0, "delta": {"role": "assistant"}, "finish_reason": null}]}

data: {"id": "cmpl-49bf9c52893f4bd1ab1f5f107a7011ce", "object": "chat.completion.chunk", "created": 1186536, "model": "Yi-34B-Chat-4bits-GPTQ", "choices": [{"index": 0, "delta": {"content": ""}, "finish_reason": null}]}

data: {"id": "cmpl-49bf9c52893f4bd1ab1f5f107a7011ce", "object": "chat.completion.chunk", "created": 1186536, "model": "Yi-34B-Chat-4bits-GPTQ", "choices": [{"index": 0, "delta": {"content": ""}, "finish_reason": null}]}

data: {"id": "cmpl-49bf9c52893f4bd1ab1f5f107a7011ce", "object": "chat.completion.chunk", "created": 1186536, "model": "Yi-34B-Chat-4bits-GPTQ", "choices": [{"index": 0, "delta": {"content": ""}, "finish_reason": null}]}

data: {"id": "cmpl-49bf9c52893f4bd1ab1f5f107a7011ce", "object": "chat.completion.chunk", "created": 1186536, "model": "Yi-34B-Chat-4bits-GPTQ", "choices": [{"index": 0, "delta": {"content": ""}, "finish_reason": null}]}
...
data: {"id": "cmpl-49bf9c52893f4bd1ab1f5f107a7011ce", "object": "chat.completion.chunk", "created": 1186536, "model": "Yi-34B-Chat-4bits-GPTQ", "choices": [{"index": 0, "delta": {}, "finish_reason": "length"}], "usage": {"prompt_tokens": 183, "total_tokens": 1206, "completion_tokens": 1023}}

data: [DONE]
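
A client-side guard can at least keep a stuck request from burning through max_tokens. This is only a sketch, assuming the openai Python client pointed at the server started above (the base_url is a placeholder):

# Sketch: abort the stream after many consecutive empty-content deltas instead of
# waiting for the server to reach max_tokens.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")  # placeholder address

stream = client.chat.completions.create(
    model="Yi-34B-Chat-4bits-GPTQ",
    messages=[{"role": "user", "content": "你好吗"}],
    max_tokens=1024,
    stream=True,
)

empty_in_a_row = 0
for chunk in stream:
    if not chunk.choices:
        continue
    piece = chunk.choices[0].delta.content or ""
    if piece == "":
        empty_in_a_row += 1
        if empty_in_a_row > 20:  # arbitrary threshold
            stream.close()  # stop reading; depending on the server version the request may also be aborted
            break
    else:
        empty_in_a_row = 0
        print(piece, end="", flush=True)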

ego6inou 1#

In my case, vllm keeps emitting tokens, but they are all empty strings, and it only stops once the maximum length is reached. Has anyone run into something similar when deploying Yi or other LLMs? I have already updated vllm to the latest version.

sqyvllje 2#

Hi, you can try setting stop_token_ids=[2,6,7,8].
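
Concretely, that means replacing the stop_token_ids field of the request above. A minimal sketch, assuming the server started with the launch command above is reachable on port 8080:

# Sketch: same chat request as above, but with the suggested stop_token_ids.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # placeholder host
    json={
        "model": "Yi-34B-Chat-4bits-GPTQ",
        "messages": [{"role": "user", "content": "你好吗"}],  # shortened; use the full conversation from the original request
        "max_tokens": 1024,
        "stop_token_ids": [2, 6, 7, 8],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])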

vpfxa7rd 3#

Replying to "you can try setting stop_token_ids=[2,6,7,8]": thank you very much for the reply. I tried it, but it did not work. Is this problem caused by GPTQ, or by the V100?

INFO 01-02 08:23:26 async_llm_engine.py:379] Received request 33e9a84e-a948-11ee-acaf-0242ac110013: prompt: '你的使命是协助公司员工高效完成工作。<|im_end|>\nassistant\n"啦啦啦" 是汉语中表示开心、愉快或者轻松愉快心情的象声词,类似于英文中的 "hehe" 或 "teehee"。通常用于轻松、友好的对话中,表达一种轻松愉快的情绪。如果你有什么问题或者需要帮助, feel free to ask!<|im_end|>\nuser\nhehehe<|im_end|>\nassistant\n"hehehe" 是英文中表示开心、愉快或者调皮的笑声文字表达,类似于汉语中的 "hehe"。这种笑声文字表达通常用于轻松、友好的对话中,表达一种轻松愉快的情绪。如果你有什么问题或者需要帮助, feel free to ask!我会尽力帮助你。<|im_end|>\n<|im_start|>user\n你好吗<|im_end|>\n<|im_start|>assistant\n', sampling params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.6, top_p=1.0, top_k=-1, min_p=0.0, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=['<|endoftext|>', '<|im_start|>', '<|im_end|>', '<|im_sep|>'], stop_token_ids=[2, 6, 7, 8], include_stop_str_in_output=False, ignore_eos=False, max_tokens=2000, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True), prompt token ids: None.

INFO 01-02 08:23:27 llm_engine.py:653] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 13.2 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.4%, CPU KV cache usage: 0.0%
INFO 01-02 08:23:32 llm_engine.py:653] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.9 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.8%, CPU KV cache usage: 0.0%
INFO 01-02 08:23:37 llm_engine.py:653] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 27.6 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.1%, CPU KV cache usage: 0.0%
...
ndh0cuux 4#

I believe this issue is related to caching, because it is more likely to get stuck when I repeat a question.

zd287kbt 5#

I ran into a similar situation with Yi-34B-Chat and Yi-34B-Chat-AWQ as well.

zqry0prt 6#

I also ran into a similar situation with Yi-34B-Chat and Yi-34B-Chat-AWQ.
Did you manage to solve it?

62o28rlo 7#

Replying to "you can try setting stop_token_ids=[2,6,7,8]": for me this does not work. The answer starts out normal, but then it repeats itself over and over.
Input

from openai import OpenAI

# OpenAI-compatible client pointed at the vLLM server
# (base_url and api_key are placeholders for the actual deployment)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

res = client.chat.completions.create(
    model='Yi-34B-Chat-AWQ',
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': '你好'},
    ],
    temperature=0,
    # stop on Yi's chat-template special tokens
    stop=["</s>", "<|im_start|>", "<|im_end|>", "<|im_sep|>"],
)

print(res.choices[0].message.content)

vllm log

INFO 03-15 10:25:32 async_llm_engine.py:436] Received request cmpl-f05ccf970b224fce84ee552738f7f423: prompt: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n你好<|im_end|>\n<|im_start|>assistant\n', prefix_pos: None, sampling_params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=['</s>', '<|im_start|>', '<|im_end|>', '<|im_sep|>'], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=4074, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True), prompt_token_ids: [6, 1328, 144, 3961, 678, 562, 6901, 14135, 98, 7, 59568, 144, 6, 2942, 144, 25902, 7, 59568, 144, 6, 14135, 144], lora_request: None.
INFO 03-15 10:25:34 metrics.py:213] Avg prompt throughput: 4.4 tokens/s, Avg generation throughput: 10.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.6%, CPU KV cache usage: 0.0%
INFO 03-15 10:25:39 metrics.py:213] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 31.1 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 4.8%, CPU KV cache usage: 0.0%
INFO 03-15 10:25:44 metrics.py:213] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.8 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 7.7%, CPU KV cache usage: 0.0%
INFO 03-15 10:25:49 metrics.py:213] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.4 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 10.9%, CPU KV cache usage: 0.0%
INFO 03-15 10:25:54 metrics.py:213] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 14.1%, CPU KV cache usage: 0.0%
INFO 03-15 10:25:59 metrics.py:213] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.4 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 17.0%, CPU KV cache usage: 0.0%
INFO 03-15 10:26:04 metrics.py:213] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 20.2%, CPU KV cache usage: 0.0%
INFO 03-15 10:26:09 metrics.py:213] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.2 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 23.1%, CPU KV cache usage: 0.0%
INFO 03-15 10:26:14 metrics.py:213] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.2 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 26.3%, CPU KV cache usage: 0.0%
INFO 03-15 10:26:19 metrics.py:213] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.1 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 29.2%, CPU KV cache usage: 0.0%
INFO 03-15 10:26:24 metrics.py:213] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 32.1%, CPU KV cache usage: 0.0%
INFO 03-15 10:26:29 metrics.py:213] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 29.9 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 35.3%, CPU KV cache usage: 0.0%
...
INFO 03-15 10:27:50 async_llm_engine.py:110] Finished request cmpl-f05ccf970b224fce84ee552738f7f423.
INFO:     10.204.237.30:33894 - "POST /v1/chat/completions HTTP/1.1" 200 OK

Output

你好!看起来你可能不小心发送了一个空的回复。如果你有任何问题或需要帮助,请随时告诉我,我会尽力帮助你。 

如果你只是想测试系统,或者想要一个空的回复,那也完全没问题。请随时告诉我你的需求,我会尽力满足。 

如果你有任何其他问题或需要帮助的地方,请随时提问。 

谢谢! 

祝你有个愉快的一天! 

如果你有任何问题或需要帮助,请随时告诉我,我会尽力帮助你。 

如果你只是想测试系统,或者想要一个空的回复,那也完全没问题。请随时告诉我你的需求,我会尽力满足。 

谢谢!祝你有个愉快的一天! 

如果你有任何问题或需要帮助,请随时告诉我,我会尽力帮助你。 

如果你只是想测试系统,或者想要一个空的回复,那也完全没问题。请随时告诉我你的需求,我会尽力满足。 

谢谢!祝你有个愉快的一天! 

如果你有任何问题或需要帮助,请随时告诉我,我会尽力帮助你。 

如果你只是想测试系统,或者想要一个空的回复,那也完全没问题。请随时告诉我你的需求,我会尽力满足。 

谢谢!祝你有个愉快的一天! 

如果你有任何问题或需要帮助,请随时告诉我,我会尽力帮助你。 

如果你只是想测试系统,或者想要一个空的回复,那也完全没问题。请随时告诉我你的需求,我会尽力满足。 

...

如果你只是想测试系统,或者想要一个空的回复,那也完全没问题。请随时告诉我你的需求,我会尽力满足。 

谢谢!祝你有个愉快的一天! 

如果你有任何问题或需要帮助,请随时告诉我,
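
A sampling-side workaround that is sometimes tried against this kind of looping (not confirmed in this thread, just a sketch reusing the setup from the input above) is to add a mild repetition penalty and a non-zero temperature instead of greedy decoding:

# Sketch: same request, but with a mild repetition penalty and non-zero temperature
# to make verbatim loops less likely; parameter values are illustrative only.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder address

res = client.chat.completions.create(
    model='Yi-34B-Chat-AWQ',
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': '你好'},
    ],
    temperature=0.7,
    presence_penalty=0.5,
    stop=["</s>", "<|im_start|>", "<|im_end|>", "<|im_sep|>"],
    extra_body={"repetition_penalty": 1.1},  # vLLM-specific sampling parameter
)
print(res.choices[0].message.content)
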
k2arahey 8#

env:
vllm==0.3.3
torch==2.1.2
cuda=12.1
