inference: bug in GPU memory usage calculation for the GLM-4V model

axzmvihb · posted 5 months ago

System information

Ubuntu 18.04
python==3.10

Are you running Xinference with Docker?

  • Docker
  • pip install
  • Installation from source

Version information

xinference==0.13.3

Command used to launch xinference

XINFERENCE_MODEL_SRC=modelscope xinference cal-model-mem -s 9 -f pytorch -c 8192 -n glm-4v

Steps to reproduce

  1. Command entered:
    XINFERENCE_MODEL_SRC=modelscope xinference cal-model-mem -s 9 -f pytorch -c 8192 -n glm-4v
  2. Command output:
    Traceback (most recent call last):
      File "/root/anaconda3/envs/glm-4v-x/bin/xinference", line 8, in <module>
        sys.exit(cli())
      File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
        return self.main(*args, **kwargs)
      File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/click/core.py", line 1078, in main
        rv = self.invoke(ctx)
      File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/click/core.py", line 783, in invoke
        return __callback(*args, **kwargs)
      File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/xinference/deploy/cmdline.py", line 1561, in cal_model_mem
        mem_info = estimate_llm_gpu_memory(
      File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/xinference/model/llm/memory.py", line 102, in estimate_llm_gpu_memory
        info = get_model_layers_info(
      File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/xinference/model/llm/memory.py", line 227, in get_model_layers_info
        return load_model_config_json(config_path)
      File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/xinference/model/llm/memory.py", line 186, in load_model_config_json
        vocab_size=int(_load_item_from_json(config_data, "vocab_size")),
      File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/xinference/model/llm/memory.py", line 179, in _load_item_from_json
        raise ValueError("load ModelLayersInfo: missing %s" % (keys[0]))
    ValueError: load ModelLayersInfo: missing vocab_size
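For reference, the traceback shows that load_model_config_json in xinference/model/llm/memory.py expects a top-level vocab_size key in the model's config.json, and GLM-4V's config apparently does not expose the value under that name. Below is a minimal sketch of a more tolerant lookup; the fallback key padded_vocab_size and the nested text_config scope are assumptions about how a VL config might store the value, not the actual Xinference implementation.

    import json

    def load_vocab_size(config_path: str) -> int:
        """Read vocab_size from a model's config.json, trying fallback locations.

        Sketch only: the fallback key and nested scope below are assumptions,
        not Xinference's real logic.
        """
        with open(config_path, "r", encoding="utf-8") as f:
            config = json.load(f)

        # Some multimodal configs nest the language-model fields in a sub-dict.
        scopes = (config, config.get("text_config", {}))
        for scope in scopes:
            for key in ("vocab_size", "padded_vocab_size"):
                if key in scope:
                    return int(scope[key])
        raise ValueError("load ModelLayersInfo: missing vocab_size")

A fix along these lines would at least let cal-model-mem read the layer info; whether the resulting estimate is then accurate for a VL model is a separate question (see the second answer below).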

Expected behavior

Fix the glm-4v GPU memory calculation. In addition, the --quantization {precision} parameter also has a problem; it would be good to investigate and fix it at the same time.

twh00eeo #1

Thanks. @frostyplanet, do you have time to take a look at this issue?

eanckbw9 #2

@Jalen-Zhong The config.json of this VL model has a different format from other models; that part is easy to fix. However, memory usage for a VL model is computed on a different principle, so the original algorithm will not give an accurate result for it. Is there an article explaining the principle that I could read?
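To illustrate why a text-only estimator undercounts for a vision-language model: the vision tower adds its own weights, and each image injects extra tokens that enlarge the KV cache. The sketch below is only a back-of-the-envelope calculation; every default value (parameter counts, layer count, hidden size, image token count, the 10% overhead factor) is a placeholder assumption, not GLM-4V's real configuration.

    def estimate_vl_gpu_memory_gib(
        llm_params_b: float = 9.0,     # language-model parameters in billions (placeholder)
        vision_params_b: float = 4.0,  # vision-tower parameters in billions (placeholder)
        bytes_per_param: int = 2,      # fp16/bf16 weights
        num_layers: int = 40,          # transformer layers (placeholder)
        hidden_size: int = 4096,       # hidden dimension (placeholder)
        context_len: int = 8192,       # matches the -c 8192 in the report
        image_tokens: int = 1600,      # extra tokens contributed per image (placeholder)
        kv_bytes: int = 2,             # fp16 KV cache
    ) -> float:
        """Rough GPU memory estimate in GiB: weights + KV cache + overhead."""
        weights = (llm_params_b + vision_params_b) * 1e9 * bytes_per_param
        seq_len = context_len + image_tokens
        kv_cache = 2 * num_layers * hidden_size * seq_len * kv_bytes  # factor 2 for K and V
        overhead = 0.1 * weights  # activation/workspace fudge factor (assumption)
        return (weights + kv_cache + overhead) / (1024 ** 3)

    print(f"~{estimate_vl_gpu_memory_gib():.1f} GiB")

The point of the sketch is only that both the added weights and the image-token term are missing from a text-only formula, which is why the original algorithm underestimates VL models.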
