Describe your problem
I use Xinference (or Ollama) to deploy local LLM models.
I can download glm4-chat-1m through Xinference, or register a custom local model (custom-glm4-chat),
and I can open the conversation UI and chat with the model successfully. But I can't add it to RAGFlow:
either I don't know which base URL is correct, or I'm doing something else wrong.
Here are the URLs I've tried, and the error info:
" http://host.docker.internal:9997/v1 " --- "提示 : 102--Fail to access model(glm4-chat-1m).ERROR: Connection error."
" http://127.0.0.1:9997/v1 " --- same sith above
" http://host.docker.internal:9997 " --- same with above
" http://127.0.0.1:9997 " --- same with above
" http://localhost:9997 " --- same with above
" http://localhost:9997/v1 " --- same with above
"
Hint: 102
Fail to access model(llama3).ERROR: [Errno -2] Name or service not known
"
1 answer:
I'm not sure whether your host's IP address is correct, and don't forget the /v1.
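On Linux the `[Errno -2] Name or service not known` error is often because `host.docker.internal` does not resolve inside a container unless it is explicitly mapped to the host gateway. A sketch of the relevant docker-compose fragment (the service name `ragflow` is an assumption; use whatever the RAGFlow compose file actually names its server service):

```yaml
services:
  ragflow:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

With this mapping in place (Docker 20.10+), `http://host.docker.internal:9997/v1` from inside the RAGFlow container should reach Xinference listening on the host.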