ragflow [Question]: Which base URL should be used when deploying a local model with Xinference?

fykwrbwg · posted 3 months ago · in Other

Describe your problem

I use Xinference (or Ollama) to deploy local LLM models.
I can download glm4-chat-1m through Xinference, or register a local custom LLM (custom-glm4-chat),
and I can open the chat UI and have a successful conversation, but I can't add the model to RAGFlow.
Which base URL is correct, or am I doing something wrong?
Here are the URLs I've tried, and the error messages:
" http://host.docker.internal:9997/v1 " --- "提示 : 102--Fail to access model(glm4-chat-1m).ERROR: Connection error."
" http://127.0.0.1:9997/v1 " --- same sith above
" http://host.docker.internal:9997 " --- same with above
" http://127.0.0.1:9997 " --- same with above
" http://localhost:9997 " --- same with above
" http://localhost:9997/v1 " --- same with above

Error 102
Fail to access model(llama3). ERROR: [Errno -2] Name or service not known
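To narrow this down independently of RAGFlow, it can help to check which of these base URLs is actually reachable. The sketch below assumes the Xinference default port 9997 from the question and relies on Xinference exposing an OpenAI-compatible API, so `GET <base>/models` should list the launched models. Run it once on the host and once inside the RAGFlow container (e.g. via `docker exec` into the RAGFlow server container; the container name depends on your compose file) to see what each environment can reach.

```python
import sys
import requests  # pip install requests

# Candidate base URLs from the question; adjust host/port to your setup.
BASE_URLS = [
    "http://host.docker.internal:9997/v1",
    "http://127.0.0.1:9997/v1",
]

for base in BASE_URLS:
    try:
        # Xinference's OpenAI-compatible API: /models lists launched models
        # (e.g. glm4-chat-1m) if the endpoint is reachable from here.
        resp = requests.get(f"{base}/models", timeout=5)
        resp.raise_for_status()
        names = [m.get("id") for m in resp.json().get("data", [])]
        print(f"{base} -> reachable, models: {names}")
    except Exception as exc:
        # "Connection error" / "Name or service not known" here means the
        # URL is not reachable from *this* machine or container.
        print(f"{base} -> NOT reachable: {exc}", file=sys.stderr)
```

If a URL works from the host but not from inside the container, the problem is Docker networking (127.0.0.1 / localhost inside the container point at the container itself, not at the host), not RAGFlow or Xinference.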

1yjd4xko · answer #1

I'm not sure your host IP address is correct, and don't forget the /v1.
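As a sketch of what this answer suggests: since RAGFlow runs in Docker, 127.0.0.1 and localhost refer to the RAGFlow container itself, so the base URL should use the host's LAN IP (or host.docker.internal on Docker Desktop) plus the /v1 suffix, and Xinference must be listening on an interface the container can reach (not only 127.0.0.1). Assuming the default Xinference port 9997, one common way to discover the host IP and build the URL:

```python
import socket

# Common trick to find the host's LAN IP: "connect" a UDP socket toward a
# public address (no packets are sent) and read back the local endpoint.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("8.8.8.8", 80))
    host_ip = s.getsockname()[0]

# Base URL to enter in RAGFlow, assuming Xinference on the default port 9997.
print(f"http://{host_ip}:9997/v1")
```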
