vllm Amazon/FalconLite2

ahy6op9u · asked 2 months ago in Other

I want to benchmark throughput and latency with vLLM using the model amazon/FalconLite2. However, this model is not supported by vLLM. What should I do to get this running? Thanks.

sd2nnvve1#

@sadrafh I got this model to load by modifying the config file:

diff --git a/vllm/transformers_utils/configs/falcon.py b/vllm/transformers_utils/configs/falcon.py
index c82cc606..43eb0438 100644
--- a/vllm/transformers_utils/configs/falcon.py
+++ b/vllm/transformers_utils/configs/falcon.py
@@ -69,6 +69,7 @@ class RWConfig(PretrainedConfig):
         self.bias = bias
         self.parallel_attn = parallel_attn
         self.new_decoder_architecture = new_decoder_architecture
+        self.num_ln_in_parallel_attn = None
 
         if self.hidden_size == 8192:
             # Hack for falcon-40b

However, it seems to produce gibberish:

>>> from vllm import LLM
>>> model = LLM("amazon/FalconLite2", trust_remote_code=True, quantization="gptq")
>>> model.generate("<|prompter|>What are the main challenges to support a long context for LLM?<|endoftext|><|assistant|>")
Processed prompts: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  3.64it/s, est. speed input: 62.00 toks/s, output: 58.35 toks/s]
[RequestOutput(request_id=0, prompt='<|prompter|>What are the main challenges to support a long context for LLM?<|endoftext|><|assistant|>', prompt_token_ids=[65028, 1562, 362, 248, 1316, 4922, 271, 1164, 241, 916, 4436, 312, 31370, 56, 42, 11, 65027], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text=' Ern Solar approachedThomas kost lol igual Alarm creatorvenue交Verified Debor soybean circulatecaster', token_ids=(53224, 15197, 15377, 24174, 33011, 12929, 26976, 41954, 17087, 4255, 20098, 48622, 37688, 52315, 54369, 30414), cumulative_logprob=None, logprobs=None, finish_reason=length, stop_reason=None)], finished=True, metrics=RequestMetrics(arrival_time=1722542626.8144124, last_token_time=1722542626.8144124, first_scheduled_time=1722542626.8371897, first_token_time=1722542626.9011965, time_in_queue=0.022777318954467773, finished_time=1722542627.1092346), lora_request=None)]
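
One additional sanity check (not from the original thread, just a sketch) is to decode a few of the returned token ids with the Hugging Face tokenizer; if the decoded text matches the garbled output above, the tokenizer side is consistent and the problem more likely lies in the weights or the GPTQ path:

from transformers import AutoTokenizer

# Decode some of the output token_ids from the RequestOutput above to rule out
# a tokenizer/decoding mismatch on the vLLM side.
tok = AutoTokenizer.from_pretrained("amazon/FalconLite2", trust_remote_code=True)
print(tok.decode([53224, 15197, 15377, 24174, 33011]))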

nwnhqdif2#

Judging from the error log, the problem is that some required keys are missing from the model's configuration file (config.json). Specifically, none of max_position_embeddings, n_positions, max_seq_len, or seq_length is present in the model's config. As a result, the model's maximum length cannot be determined, which leads to the warnings and errors.

To resolve this, check and edit the model's configuration file (config.json) and make sure it contains the required keys. You can refer to the following example:

{
  "model": {
    "type": "FalconLite2",
    "params": {
      "quantization": "gptq"
    }
  },
  "tokenizer": null,
  "tensor_parallel_size": 1,
  "input_len": 128,
  "output_len": 128,
  "batch_size": 1,
  "n": 1,
  "use_beam_search": false,
  "num_iters_warmup": 2,
  "num_iters": 4,
  "trust_remote_code": true,
  "dtype": "auto",
  "enforce_eager": false,
  "kv_cache_dtype": "auto",
  "quantization_param_path": None,
  "profile": false,
  "profile_result_dir": None,
  "device": "cuda",
  "block_size": 16,
  "enable_chunked_prefill": false,
  "ray_workers_use_nsight": false,
  "download_dir": None
}

In this example, I added the "quantization": "gptq" key-value pair. You can adjust the other keys as needed for your setup. After making the changes, re-run the program and the problem should be resolved.
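
If editing config.json directly is inconvenient, a possible workaround (a sketch, not something confirmed in this thread) is to set the maximum sequence length on the vLLM side instead: the LLM constructor accepts a max_model_len argument, and the 4096 below is only an assumed placeholder value:

from vllm import LLM

# Sketch: pass max_model_len explicitly so vLLM does not need to infer it from
# max_position_embeddings / seq_length in config.json (4096 is an assumed value).
model = LLM("amazon/FalconLite2",
            trust_remote_code=True,
            quantization="gptq",
            max_model_len=4096)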
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 134, in load_model
[rank0]:     self.model = get_model(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/__init__.py", line 21, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 240, in load_model
[rank0]:     model = _initialize_model(model_config, self.load_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 91, in _initialize_model
[rank0]:     return model_class(config=model_config.hf_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/falcon.py", line 389, in __init__
[rank0]:     self.transformer = FalconModel(config, cache_config, quant_config)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/falcon.py", line 350, in __init__
[rank0]:     self.h = nn.ModuleList([
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/falcon.py", line 351, in <listcomp>
[rank0]:     FalconDecoderLayer(config, cache_config, quant_config)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/falcon.py", line 249, in __init__
[rank0]:     if (config.num_ln_in_parallel_attn is None
[rank0]:   File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/configuration_utils.py", line 264, in __getattribute__
[rank0]:     return super().__getattribute__(key)
[rank0]: AttributeError: 'RWConfig' object has no attribute 'num_ln_in_parallel_attn'

################################

I made the modifications here: vllm/transformers_utils/configs/falcon.py
self.bias = bias
self.parallel_attn = parallel_attn
self.new_decoder_architecture = new_decoder_architecture
self.num_ln_in_parallel_attn = None

if self.hidden_size == 8192:
    # Hack for falcon-40b
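
As a quick check that the patched RWConfig now exposes the new field (a sketch, assuming the patched vLLM source is the one on the Python path), the attribute should come back as None instead of raising AttributeError:

from vllm.transformers_utils.configs.falcon import RWConfig

# After the patch, a default-constructed config should expose the attribute.
cfg = RWConfig()
print(cfg.num_ln_in_parallel_attn)  # expected: None, no AttributeError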
