Problem with bash run.sh when setting up QAnything/assets/custom_models/

pexxcrt2 · posted 2 months ago · in: Other
Follow (0) | Answers (6) | Views (61)

Is there an existing issue or discussion for this bug?

  • I have searched the existing issues and discussions

Is this question answered in the FAQ?

  • I have searched the FAQ

Current Behavior

(base) z@z:~/文档/Code/QAnything$ sudo bash run.sh
From https://gitclone.com/github.com/netease-youdao/QAnything
 * branch            master     -> FETCH_HEAD
The master branch is already up to date; no update needed.
model_size=7B
GPUID1=0, GPUID2=0, device_id=0
GPU1 Model: Tesla M40 24GB
Compute Capability: null
OCR_USE_GPU=False because null >= 7.5
====================================================
******************** IMPORTANT NOTICE ********************
====================================================

Your GPU model Tesla M40 24GB cannot run the default FasterTransformer backend, which requires an NVIDIA RTX 30- or 40-series GPU; the backend will be switched automatically:
Based on the matching algorithm, the backend has been switched to huggingface for you
Your current GPU memory is 23040 MiB; deploying the 7B model is recommended
llm_api is set to [local]
device_id is set to [0]
runtime_backend is set to [hf]
model_name is set to []
conv_template is set to []
tensor_parallel is set to [1]
gpu_memory_utilization is set to [0.81]
The models folder already exists; no download needed.
Model version check succeeded; the current version is v2.1.0.
Model directories check passed. (0/8)
Model path and model version check passed. (0/8)
Do you want to use the previous host: localhost? (yes/no) Press Enter to default to yes, please input:
Running under native Linux
Stopping qanything-container-local ... 
Stopping milvus-standalone-local   ... 
Stopping milvus-minio-local        ... 
Stopping mysql-container-local     ... 
Stopping milvus-etcd-local         ... 
Stopping qanything-container-local ... done
Stopping mysql-container-local     ... done
Stopping milvus-standalone-local   ... done
Stopping milvus-etcd-local         ... done
Stopping milvus-minio-local        ... done
Removing qanything-container-local ... 
Removing milvus-standalone-local   ... 
Removing milvus-minio-local        ... 
Removing mysql-container-local     ... 
Removing milvus-etcd-local         ... 
Removing mysql-container-local     ... done
Removing milvus-standalone-local   ... done
Removing milvus-etcd-local         ... done
Removing qanything-container-local ... done
Removing milvus-minio-local        ... done
Removing network qanything_milvus_mysql_local
Creating network "qanything_milvus_mysql_local" with the default driver
Creating milvus-minio-local    ... done
Creating mysql-container-local ... done
Creating milvus-etcd-local     ... done
Creating milvus-standalone-local ... done
Creating qanything-container-local ... done
Attaching to qanything-container-local
qanything-container-local | 
qanything-container-local | =============================
qanything-container-local | == Triton Inference Server ==
qanything-container-local | =============================
qanything-container-local | 
qanything-container-local | NVIDIA Release 23.05 (build 61161506)
qanything-container-local | Triton Server Version 2.34.0
qanything-container-local | 
qanything-container-local | Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.
qanything-container-local | 
qanything-container-local | Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.
qanything-container-local | 
qanything-container-local | This container image and its contents are governed by the NVIDIA Deep Learning Container License.
qanything-container-local | By pulling and using the container, you accept the terms and conditions of this license:
qanything-container-local | https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
qanything-container-local | 
qanything-container-local | llm_api is set to [local]
qanything-container-local | device_id is set to [0]
qanything-container-local | runtime_backend is set to [hf]
qanything-container-local | model_name is set to [-t]
qanything-container-local | conv_template is set to []
qanything-container-local | tensor_parallel is set to [1]
qanything-container-local | gpu_memory_utilization is set to [0.81]
qanything-container-local | checksum 77275c133c7dfcf1553a7b5ef043168d
qanything-container-local | default_checksum 77275c133c7dfcf1553a7b5ef043168d
qanything-container-local | 
qanything-container-local | [notice] A new release of pip is available: 23.3.2 -> 24.0
qanything-container-local | [notice] To update, run: python3 -m pip install --upgrade pip
qanything-container-local | GPU ID: 0, 0
qanything-container-local | The triton server for embedding and reranker will start on 0 GPUs
qanything-container-local | The -t folder does not exist under QAnything/assets/custom_models/. Please check your setup.

Expected Behavior

What is causing this, and how should it be resolved?

Environment

OS: Ubuntu 23.04
Driver Version: 535.146.02 
CUDA Version: 12.2 
Docker version 24.0.5, build 24.0.5-0ubuntu1
NVIDIA GPU Memory: 10+24GB

QAnything Logs

There is no api.log.

Steps to Reproduce

Run sudo bash run.sh

Additional Notes

None

icnyk63a

icnyk63a2#

I ran into the same problem as the OP on the WSL Ubuntu subsystem under Windows 11. Since I had pre-downloaded the Qwen-7B-QAnything model, I replaced "bash run.sh" with "bash ./run.sh -c local -i 0 -b hf -m Qwen-7B-QAnything -t qwen-7b-qanything", after which the program ran normally.
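The workaround above can be sketched as a small pre-flight check before launching: verify the model directory exists under assets/custom_models, then run with explicit flags. The model name Qwen-7B-QAnything and the relative path are assumptions taken from this thread, not the project's documented layout.

```shell
#!/bin/sh
# Sketch: check that the pre-downloaded model directory exists before
# invoking run.sh. The directory name must match the value passed to -m.
MODEL_NAME="Qwen-7B-QAnything"
MODEL_DIR="assets/custom_models/$MODEL_NAME"

if [ -d "$MODEL_DIR" ]; then
    echo "found $MODEL_DIR"
    # bash ./run.sh -c local -i 0 -b hf -m "$MODEL_NAME" -t qwen-7b-qanything
else
    echo "missing $MODEL_DIR - download the model there first"
fi
```

Passing -m and -t explicitly avoids the odd state visible in the log above, where model_name is set to [-t].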

ou6hu8tu

ou6hu8tu3#

This error message indicates a problem loading the model, possibly caused by an unstable network connection or corrupted model files. Check that your network connection is working, and try re-downloading or updating the model files. If the problem persists, contact the relevant technical support staff for further troubleshooting.

fiei3ece

fiei3ece4#

I ran into the same problem as the OP on the WSL Ubuntu subsystem under Windows 11. Since I had pre-downloaded the Qwen-7B-QAnything model, I replaced "bash run.sh" with "bash ./run.sh -c local -i 0 -b hf -m Qwen-7B-QAnything -t qwen-7b-qanything", after which it ran normally.

Was your model downloaded into ./assets/custom_models?

jckbn6z7

jckbn6z75#

I found that you must first create a directory named after the downloaded model under custom_models before it will run.
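A minimal sketch of the step described above, assuming the model name used elsewhere in this thread and paths relative to the QAnything repo root:

```shell
# Pre-create the model directory under custom_models so run.sh's
# existence check passes, then place the downloaded model files inside it.
mkdir -p assets/custom_models/Qwen-7B-QAnything
ls -d assets/custom_models/Qwen-7B-QAnything
```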

yduiuuwa

yduiuuwa6#

Download the model

50 (comment)
