quivr [Bug]: I'm trying to run ollama/dolphin-phi on ollama, but /chat/{chat_id}/question throws "model 'llama2' not found, try pulling it first"

uxh89sit · posted 2 months ago · in Other
Follow (0) | Answers (8) | Views (30)

What happened?
I'm trying to run the ollama/dolphin-phi model on ollama, but /chat/{chat_id}/question throws {"error":"model 'llama2' not found, try pulling it first"}. Because resources on my system are limited, I don't want to load llama2 into my ollama instance.
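For context, a minimal sketch of what the embedding call ends up doing, assuming the default behaviour of langchain's OllamaEmbeddings (the base URL below is a placeholder, not Quivr's actual configuration):

from langchain.embeddings import OllamaEmbeddings  # same module as in the traceback

# No `model` is passed, so OllamaEmbeddings falls back to its default, "llama2".
# The base URL is a placeholder for whatever ollama_api_base_url points at.
embeddings = OllamaEmbeddings(base_url="http://localhost:11434")

# This POSTs to Ollama's /api/embeddings with model "llama2"; if only dolphin-phi
# has been pulled, Ollama answers 404 and langchain raises the ValueError in the log below.
embeddings.embed_documents(["test"])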
Relevant log output:

INFO:     192.168.29.176:50344 - "POST /chat/9ba4c7d0-80a0-4798-a2b0-5749748775c7/question?brain_id=f92b9784-847c-48df-a016-fb0933fc2323 HTTP/1.
1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 276, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 91, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 146, in simple_response
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/usr/local/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 237, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/code/modules/chat/controller/chat_routes.py", line 171, in create_question_handler
    chat_answer = gpt_answer_generator.generate_answer(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/code/llm/knowledge_brain_qa.py", line 122, in generate_answer
    model_response = qa(
                     ^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 312, in __call__
    raise e
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 306, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/conversational_retrieval/base.py", line 151, in _call
    docs = self._get_docs(new_question, inputs, run_manager=_run_manager)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/conversational_retrieval/base.py", line 308, in _get_docs
    docs = self.retriever.get_relevant_documents(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 211, in get_relevant_documents
    raise e
  File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 204, in get_relevant_documents
    result = self._get_relevant_documents(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 656, in _get_relevant_documents
    docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/code/vectorstore/supabase.py", line 32, in similarity_search
    vectors = self._embedding.embed_documents([query])
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/embeddings/ollama.py", line 190, in embed_documents
    embeddings = self._embed(instruction_pairs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/embeddings/ollama.py", line 175, in _embed
    embeddings = self._process_emb_response(prompt)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/embeddings/ollama.py", line 160, in _process_emb_response
    raise ValueError(
ValueError: Error raised by inference API HTTP code: 404, {"error":"model 'llama2' not found, try pulling it first"}

Twitter / LinkedIn details: No response

to94eoyn 1#

Hi, @shubham-dayma! 👋 I'm Dosu, a friendly bot here to help while the human maintainers get to your issue. I'm good at squashing bugs, answering questions, and helping people like you become contributors.
Think of me as a friendly helper while you wait for human intervention. I'm digging into the issue you posted and will get back to you with a full response shortly. Hang tight! 🚀

az31mfrm 2#

ollama pull dolphin-phi is your friend ;)
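If the model has already been pulled, a quick sanity check is to ask the Ollama server which models it actually has; a small sketch against Ollama's /api/tags endpoint (the base URL is a placeholder):

import json
import urllib.request

# List the models this Ollama instance has pulled; the URL is a placeholder for
# your ollama_api_base_url. The 404 in the logs means the requested name is missing here.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

print([m["name"] for m in tags.get("models", [])])  # e.g. ['dolphin-phi:latest']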

gzjq41n4 3#

@StanGirard Thanks for looking into this. I already had dolphin-phi in ollama. I checked the ollama logs and it turned out that, while initializing langchain.embeddings.ollama.OllamaEmbeddings, we are not passing the model param, so by default it treats the model as llama2. I changed this code as below and it worked.

@property
def embeddings(self):
    if self.brain_settings.ollama_api_base_url:
        return OllamaEmbeddings(
            base_url=self.brain_settings.ollama_api_base_url,
            model=self.model.replace("ollama/", ""),
        )  # pyright: ignore reportPrivateUsage=none
    else:
        return OpenAIEmbeddings()

Now I am wondering how I should get the brain's model in backend/models/settings.py and update the initialization of OllamaEmbeddings there.

f45qwnt8 4#

🤖

Based on the context provided, it seems you have already resolved the issue in quivr_rag.py by modifying the OllamaEmbeddings initialization. To apply a similar fix in backend/models/settings.py, you need access to the model name inside the get_embeddings function.
One way to do this is to modify the BrainSettings class to include the model name and then use it when initializing OllamaEmbeddings. Here is an example of how you might do that:

class BrainSettings(BaseSettings):
    ...
    model: str = "llama2"  # Add this line

    class Config:
        env_file = ".env"

def get_embeddings():
    settings = BrainSettings()
    if settings.ollama_api_base_url:
        embeddings = OllamaEmbeddings(
            base_url=settings.ollama_api_base_url,
            model=settings.model  # Use the model from settings here
        )
    else:
        embeddings = OpenAIEmbeddings()
    return embeddings

This solution assumes that the OllamaEmbeddings class accepts a model parameter during initialization. If it does not, you may need to modify the OllamaEmbeddings class to accept and use this parameter.
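For what it's worth, the OllamaEmbeddings class in the langchain version shown in the traceback does accept both keywords, with model defaulting to "llama2"; a minimal sketch with illustrative values:

from langchain.embeddings import OllamaEmbeddings

# `model` defaults to "llama2" when omitted, which is exactly what produced the 404 above;
# passing it explicitly avoids that. Both values here are illustrative.
embeddings = OllamaEmbeddings(
    base_url="http://localhost:11434",
    model="dolphin-phi",
)
print(len(embeddings.embed_query("hello")))  # prints the embedding dimension if the model is pulled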
Please note that this is a potential solution based on the information provided and the context of the Quivr repository. It may not be the best or only solution. If you have further questions or need more help, feel free to ask.
For more information, see the Quivr repository and the FastAPI documentation on settings management.

Sources

This response is meant to save you time and be helpful. It is not an exact solution, but rather a starting point for your own research.
Help me be more useful! Leave a 👍 if this was helpful and a 👎 if it was irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.

lyfkaqu1 5#

> Now I am wondering how I should get the brain's model in backend/models/settings.py and update the initialization of OllamaEmbeddings there.
embeddings = OllamaEmbeddings(model="XXX")

Set it to your own model, e.g. "qwen:7b" or "llama2", and that will solve it.
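Putting the two suggestions together, a hedged sketch of deriving the Ollama model name from the brain's "ollama/<name>" identifier instead of hard-coding it (the helper name and values are illustrative, not Quivr code):

from langchain.embeddings import OllamaEmbeddings

def ollama_embeddings_for(brain_model: str, base_url: str) -> OllamaEmbeddings:
    # Strip the "ollama/" prefix used in the brain's model identifier
    # (e.g. "ollama/dolphin-phi" -> "dolphin-phi") so the default "llama2"
    # is never used for embeddings.
    return OllamaEmbeddings(base_url=base_url, model=brain_model.replace("ollama/", ""))

embeddings = ollama_embeddings_for("ollama/dolphin-phi", "http://localhost:11434")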

eqzww0vc 6#

Exactly!


bq9c1y66 7#

https://github.com/StanGirard/quivr/blob/31ce6c05660db8088da96f7bd6f5a0f5f60ffaf7/backend/llm/rags/quivr_rag.py#L67
If the latest version no longer has this file, how do I set up a local embedding model?

vtwuwzda 8#

Please propose a fix.
