vllm api_server.py: error: unrecognized arguments: --lora-modules sql-lora=~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/

rhfm7lfc · posted 2 months ago in Other

Running the following command:

```shell
python -m vllm.entrypoints.api_server \
    --model meta-llama/Llama-2-7b-hf \
    --enable-lora \
    --lora-modules sql-lora=~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/
```
raises the following error:

```
api_server.py: error: unrecognized arguments: --lora-modules sql-lora=~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/
```
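For reference, the flag expects one or more `name=path` pairs. A minimal, purely illustrative sketch of that parsing convention (this is not vLLM's actual parser, just a demonstration of the expected argument format):

```python
import argparse

# Illustrative sketch of a "--lora-modules name=path" style flag;
# vLLM's real parser lives in its server code and may differ.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--lora-modules",
    nargs="+",
    default=[],
    help="LoRA adapters given as name=path pairs",
)

args = parser.parse_args(["--lora-modules", "sql-lora=/path/to/adapter"])

# Split each pair on the first '=' so paths containing '=' still parse.
modules = dict(pair.split("=", 1) for pair in args.lora_modules)
print(modules)  # {'sql-lora': '/path/to/adapter'}
```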

iyfamqjs1#

I hit the same error when launching Docker from vllm/vllm-openai:latest. I'll try re-pulling an image built from a commit a few days older (TBD).


5us2dqdw2#

OK, I still hit the error with the new Docker image; it is not an available argument there either.

I suggest looking at this example for how to load LoRA adapters after the engine has started:

https://github.com/vllm-project/vllm/blob/main/examples/multilora_inference.py
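Roughly, the offline path in that example looks like this. This is a sketch assuming vLLM's `LoRARequest` API; the model name and adapter path are placeholders, and a GPU with the model weights downloaded is needed to actually run it:

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Requires a GPU; model name and adapter path are placeholders.
llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)

sampling_params = SamplingParams(temperature=0.0, max_tokens=64)

# Route this request through a specific adapter; LoRARequest takes
# an adapter name, a unique integer id, and the local adapter path.
outputs = llm.generate(
    ["Write a SQL query that counts users by country."],
    sampling_params,
    lora_request=LoRARequest("sql-lora", 1, "/path/to/sql-lora-adapter"),
)
print(outputs[0].outputs[0].text)
```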


g2ieeal73#

Yes, same error here. Is there a workaround? How can vLLM serve a LoRA-finetuned model? Thanks!


blpfk2vs4#

Building the environment from source resolves this issue.
https://docs.vllm.ai/en/latest/getting_started/installation.html#build-from-source
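For anyone following along, the standard steps from that page are roughly the following (network access and a CUDA toolchain are assumed):

```shell
git clone https://github.com/vllm-project/vllm.git
cd vllm
# Editable install; builds the CUDA kernels from source.
pip install -e .
```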


rseugnpd5#

Upgrading to vllm==0.3.3 solved the problem for me.
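A sketch of that fix: pin the version and launch through the OpenAI-compatible server, which, if I read the vLLM source correctly, is the entrypoint that actually defines `--lora-modules` (the bare `vllm.entrypoints.api_server` does not). The adapter path below is a placeholder:

```shell
pip install "vllm==0.3.3"

# Use the OpenAI-compatible server; the plain api_server entrypoint
# does not register the --lora-modules flag.
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-hf \
    --enable-lora \
    --lora-modules sql-lora=/path/to/sql-lora-adapter
```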


yzuktlbb6#

Instead of filing a new issue, I'm tagging onto this open one. I'm having a similar experience with vllm-openai:0.5.0, which gives me the same message: the --lora-modules argument is unrecognized. I'll try building from source, but is this a bug?
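For completeness, once the server does start with the flag accepted, the adapter should be addressable by its registered module name in the `model` field of a request. A sketch assuming the default port and the `sql-lora` name from the original command:

```shell
# Query the served LoRA adapter by the name it was registered under.
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "sql-lora", "prompt": "SELECT", "max_tokens": 32}'
```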
