Running the following command:

```shell
python -m vllm.entrypoints.api_server \
    --model meta-llama/Llama-2-7b-hf \
    --enable-lora \
    --lora-modules sql-lora=~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/
```

raises an error:

```
api_server.py: error: unrecognized arguments: --lora-modules sql-lora=~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/
```
6 answers

iyfamqjs1#
I get the same error when launching Docker from vllm/vllm-openai:latest. I will try re-pulling a Docker image built from a commit from a few days ago (TBD).
5us2dqdw2#
OK, I still hit the error with the new Docker image. It is not an accepted argument.
I suggest looking at this example for how to load a LoRA adapter after the engine has started:
https://github.com/vllm-project/vllm/blob/main/examples/multilora_inference.py
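The linked example uses vLLM's offline `LLM` API rather than the HTTP server. A minimal sketch of that pattern (assuming a vLLM version with LoRA support installed, a GPU, and a placeholder adapter path that you would replace with your downloaded sql-lora checkpoint):

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Load the base model with LoRA support enabled.
llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)

sampling = SamplingParams(temperature=0.0, max_tokens=64)

# Attach the adapter per request. "/path/to/sql-lora" is a placeholder
# for the local path of the adapter checkpoint.
outputs = llm.generate(
    ["Write a SQL query that counts all rows in the users table."],
    sampling,
    lora_request=LoRARequest("sql-lora", 1, "/path/to/sql-lora"),
)
print(outputs[0].outputs[0].text)
```

This sidesteps the `--lora-modules` server flag entirely, which is why it works even on versions where the flag is not recognized.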
g2ieeal73#
Yes, same error here. Is there a workaround? How can I serve a LoRA-finetuned model with vllm? Thanks!
blpfk2vs4#
Building from source fixed the problem for me.
https://docs.vllm.ai/en/latest/getting_started/installation.html#build-from-source
rseugnpd5#
Upgrading to `vllm==0.3.3` solved my problem.

yzuktlbb6#
Instead of filing a new issue, tagging on to this open issue. I am having a similar experience with `vllm-openai:0.5.0`, where it is giving me the same message. It seems that the `--lora-modules` argument is unrecognized. I'll try building from source, but is this a bug?
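Before building from source, it may be worth checking whether the installed version actually exposes the flag. A quick sketch, assuming a pip-managed environment and the OpenAI-compatible entrypoint (where newer releases document `--lora-modules`):

```shell
# Upgrade to a release the thread reports as working.
pip install --upgrade "vllm>=0.3.3"

# Confirm which version is installed.
python -c "import vllm; print(vllm.__version__)"

# Check whether the server entrypoint recognizes --lora-modules.
python -m vllm.entrypoints.openai.api_server --help | grep -- --lora-modules
```

If the `grep` prints nothing, the installed build does not support the flag and an upgrade or source build is needed.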