Ollama embeddings API endpoint does not work properly

w51jfk4q · posted 4 months ago in Other

What is the problem?

I am using the bge-m3 model in GraphRAG with the following settings:

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: asyncio
  llm:
    api_key: 
    type: openai_embedding # or azure_openai_embedding
    model: chatfire/bge-m3:q8_0
    api_base: http://localhost:11434/api

It returns the following error:

17:11:30,126 httpx INFO HTTP Request: POST http://localhost:11434/api/embeddings "HTTP/1.1 200 OK"
17:11:30,129 datashaper.workflow.workflow ERROR Error executing verb "text_embed" in create_final_entities: 'NoneType' object is not iterable
Traceback (most recent call last):
  File "E:\Langchain-Chatchat\glut\lib\site-packages\datashaper\workflow\workflow.py", line 415, in _execute_verb
    result = await result
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\index\verbs\text\embed\text_embed.py", line 105, in text_embed
    return await _text_embed_in_memory(
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\index\verbs\text\embed\text_embed.py", line 130, in _text_embed_in_memory
    result = await strategy_exec(texts, callbacks, cache, strategy_args)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\index\verbs\text\embed\strategies\openai.py", line 61, in run
    embeddings = await _execute(llm, text_batches, ticker, semaphore)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\index\verbs\text\embed\strategies\openai.py", line 105, in _execute
    results = await asyncio.gather(*futures)
  File "E:\Langchain-Chatchat\glut\lib\asyncio\tasks.py", line 304, in __wakeup
    future.result()
  File "E:\Langchain-Chatchat\glut\lib\asyncio\tasks.py", line 232, in __step
    result = coro.send(None)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\index\verbs\text\embed\strategies\openai.py", line 99, in embed
    chunk_embeddings = await llm(chunk)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\caching_llm.py", line 104, in __call__
    result = await self._delegate(input, **kwargs)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 177, in __call__
    result, start = await execute_with_retry()
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 159, in execute_with_retry
    async for attempt in retryer:
  File "E:\Langchain-Chatchat\glut\lib\site-packages\tenacity\_asyncio.py", line 71, in __anext__
    do = self.iter(retry_state=self._retry_state)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\tenacity\__init__.py", line 314, in iter
    return fut.result()
  File "E:\Langchain-Chatchat\glut\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "E:\Langchain-Chatchat\glut\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 165, in execute_with_retry
    return await do_attempt(), start
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 147, in do_attempt
    return await self._delegate(input, **kwargs)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\base_llm.py", line 49, in __call__
    return await self._invoke(input, **kwargs)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\base\base_llm.py", line 53, in _invoke
    output = await self._execute_llm(input, **kwargs)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\graphrag\llm\openai\openai_embeddings_llm.py", line 36, in _execute_llm
    embedding = await self.client.embeddings.create(
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\resources\embeddings.py", line 215, in create
    return await self._post(
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\_base_client.py", line 1826, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\_base_client.py", line 1519, in request
    return await self._request(
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\_base_client.py", line 1622, in _request
    return await self._process_response(
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\_base_client.py", line 1714, in _process_response
    return await api_response.parse()
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\_response.py", line 419, in parse
    parsed = self._options.post_parser(parsed)
  File "E:\Langchain-Chatchat\glut\lib\site-packages\openai\resources\embeddings.py", line 203, in parser
    for embedding in obj.data:
TypeError: 'NoneType' object is not iterable
17:11:30,131 graphrag.index.reporting.file_workflow_callbacks INFO Error executing verb "text_embed" in create_final_entities: 'NoneType' object is not iterable details=None
17:11:30,142 graphrag.index.run ERROR error running workflow create_final_entities
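
(For context: with api_base set to http://localhost:11434/api, the OpenAI SDK that GraphRAG uses appends /embeddings to the base URL, so the request hits Ollama's native endpoint. That endpoint answers 200 OK, as the httpx log shows, but not in the OpenAI response shape, so the SDK's parser iterates over a missing data field. A minimal sketch that reproduces the failure, assuming the openai Python package v1.x and a local Ollama serving chatfire/bge-m3:q8_0:)

import asyncio
from openai import AsyncOpenAI

async def main() -> None:
    client = AsyncOpenAI(
        api_key="ollama",                       # Ollama ignores the key, but the SDK requires one
        base_url="http://localhost:11434/api",  # same as api_base above -> POST /api/embeddings
    )
    # Ollama's native endpoint replies 200 OK with {"embedding": [...]} --
    # there is no "data" field -- so the SDK's post-parser iterates over
    # None and raises TypeError: 'NoneType' object is not iterable,
    # matching the traceback above.
    await client.embeddings.create(
        model="chatfire/bge-m3:q8_0",
        input="Your text string goes here",
    )

asyncio.run(main())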

Operating System

Windows

GPU

Nvidia

CPU

AMD

Ollama version

0.2.8

6ioyuze2 1#

I switched to Xinference's embeddings API and it works fine!

7kjnsjlb 2#

It looks like the request to http://localhost:11434/api/embeddings goes through, but I'm not sure why the embedding model still doesn't work.

jhkqcmku 3#

Is calling embedding models through the OpenAI-compatible interface not supported yet?

wfsdck30 4#

The Ollama embeddings endpoint is localhost:11434/api/embed.
The OpenAI-compatible embeddings endpoint is localhost:11434/v1/embeddings.
They also return data in different formats (I removed the embeddings from the examples to save space):

$ curl -s localhost:11434/api/embed -d '{"model":"chatfire/bge-m3:q8_0","input":"Your text string goes here"}' | jq '.embeddings=[]' 
{
  "model": "chatfire/bge-m3:q8_0",
  "embeddings": []
}
$ curl -s localhost:11434/v1/embeddings -d '{"model":"chatfire/bge-m3:q8_0","input":"Your text string goes here"}' | jq '.data[].embedding=[]' 
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [],
      "index": 0
    }
  ],
  "model": "chatfire/bge-m3:q8_0"
}
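
In other words, an OpenAI-style client should work once it is pointed at the /v1 path. A minimal sketch of that (my illustration under those assumptions, not code from this thread), using the openai Python package:

from openai import OpenAI

client = OpenAI(
    api_key="ollama",                      # any non-empty string; Ollama does not check it
    base_url="http://localhost:11434/v1",  # OpenAI-compatible path -> POST /v1/embeddings
)

resp = client.embeddings.create(
    model="chatfire/bge-m3:q8_0",
    input="Your text string goes here",
)
print(len(resp.data[0].embedding))  # vector dimension (1024 for bge-m3)
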
vyswwuz2 5#

localhost:11434/api/embeddings takes a different request body (prompt instead of input) and produces a response in yet another format; the precision of the embedding values also differs from the other two endpoints:

$ curl -s localhost:11434/api/embeddings -d '{"model":"chatfire/bge-m3:q8_0","prompt":"Your text string goes here"}'  | jq '.embedding=[]'
{
  "embedding": []
}
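
The same native call from Python, as a sketch (assuming the requests library; note the prompt field):

import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={
        "model": "chatfire/bge-m3:q8_0",
        "prompt": "Your text string goes here",  # prompt, not input
    },
)
print(list(resp.json().keys()))  # ['embedding'] -- a single flat vector, no "data" wrapper
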
q1qsirdb 6#

GraphRAG has the same problem. Will the next release be compatible with the OpenAI embeddings API?

kgsdhlau 7#

Quoting the reply above: the Ollama embeddings endpoint is `localhost:11434/api/embed`, and the OpenAI-compatible embeddings endpoint is `localhost:11434/v1/embeddings`, each returning a different response format.

With that in mind, I tried:

text_embedder = OpenAIEmbedding(
    api_key="ollama",
    api_base="http://localhost:11434/v1",
    model="chatfire/bge-m3:q8_0",
    deployment_name="chatfire/bge-m3:q8_0",
    api_type=OpenaiApiType.OpenAI,
    max_retries=20,
)


Running the code produces an error:

2024-07-25 18:05:22,568 - httpx - INFO - HTTP Request: POST http://localhost:11434/v1/embeddings "HTTP/1.1 400 Bad Request"
qyyhg6bp 8#

Your application is sending a request that Ollama cannot understand. You need to find out what that request actually contains, either by having the client log it or by capturing the network traffic with a tool such as Wireshark, tcpdump, or tcpflow.
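
One client-side way to do that in Python is the openai SDK's built-in debug logging, which prints the full request options (method, URL, headers, JSON body). A sketch, assuming the SDK reads OPENAI_LOG at import time:

import os
os.environ["OPENAI_LOG"] = "debug"  # must be set before the openai import

from openai import OpenAI

client = OpenAI(api_key="ollama", base_url="http://localhost:11434/v1")
client.embeddings.create(
    model="chatfire/bge-m3:q8_0",
    input="Your text string goes here",
)
# The debug log shows the exact JSON body being sent, which can be
# compared against the working curl call above to spot the mismatch.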
