What happened?
When I chat with a newly created brain through Ollama, I get a "different vector dimensions" error. The query embedding is expected to be 4096-dimensional (everything works fine with gpt-3.5-turbo-0125).
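As a quick sanity check on where the two sizes come from, the sketch below asks a local Ollama server for an embedding and prints its length. This is not Quivr code: the URL assumes Ollama's default port 11434, and "llama2" is just a placeholder model name.

```python
# Minimal sketch (not Quivr code): check the embedding size returned by a local
# Ollama model. Assumptions: Ollama listens on localhost:11434; "llama2" is a
# placeholder model name - substitute whatever model your brain uses.
import json
import urllib.request

payload = json.dumps({"model": "llama2", "prompt": "hello"}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:11434/api/embeddings",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    embedding = json.load(resp)["embedding"]

print("Ollama embedding dimension:", len(embedding))  # e.g. 4096 for llama2
# OpenAI's text-embedding-ada-002 produces 1536-dimensional vectors, hence the
# "different vector dimensions 1536 and 4096" error from pgvector below.
```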
Relevant log output
backend-core | 2024-03-17 08:57:39,590:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/brains?select=%2A&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:39,596:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/integrations_user?select=%2A&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f&user_id=eq.39418e3b-0258-4452-af60-7acfcc1263ff "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:39,601:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/integrations?select=%2A&id=eq.b37a2275-61b3-460b-b4ab-94dfdf3642fb "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:39,606:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/brains_users?select=id%3Abrain_id%2C%20rights%2C%20brains%20%28id%3A%20brain_id%2C%20status%2C%20name%2C%20brain_type%2C%20description%29&user_id=eq.39418e3b-0258-4452-af60-7acfcc1263ff&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:39,611:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/brains?select=%2A&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:39,617:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/integrations_user?select=%2A&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f&user_id=eq.39418e3b-0258-4452-af60-7acfcc1263ff "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:39,621:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/integrations?select=%2A&id=eq.b37a2275-61b3-460b-b4ab-94dfdf3642fb "HTTP/1.1 200 OK"
backend-core | INFO: 172.21.0.1:37134 - "GET /brains/b1c76842-fa7a-416e-b6a0-3f84fef9752f/ HTTP/1.1" 200 OK
backend-core | INFO: 172.21.0.1:37134 - "OPTIONS /chat/6604e220-9b7d-4b82-b33f-fbd7038ec291/question/stream?brain_id=b1c76842-fa7a-416e-b6a0-3f84fef9752f HTTP/1.1" 200 OK
backend-core | 2024-03-17 08:57:41,384:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/brains?select=%2A&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:41,389:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/integrations_user?select=%2A&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f&user_id=eq.39418e3b-0258-4452-af60-7acfcc1263ff "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:41,392:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/integrations?select=%2A&id=eq.b37a2275-61b3-460b-b4ab-94dfdf3642fb "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:41,395:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/brains_users?select=id%3Abrain_id%2C%20rights%2C%20brains%20%28id%3A%20brain_id%2C%20status%2C%20name%2C%20brain_type%2C%20description%29&user_id=eq.39418e3b-0258-4452-af60-7acfcc1263ff&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f "HTTP/1.1 200 OK"
backend-core | [INFO] modules.chat.controller.chat_routes [chat_routes.py:228]: Creating question for chat 6604e220-9b7d-4b82-b33f-fbd7038ec291 with brain b1c76842-fa7a-416e-b6a0-3f84fef9752f of type <class 'uuid.UUID'>
backend-core | 2024-03-17 08:57:41,399:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/brains?select=%2A&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:41,403:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/integrations_user?select=%2A&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f&user_id=eq.39418e3b-0258-4452-af60-7acfcc1263ff "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:41,406:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/integrations?select=%2A&id=eq.b37a2275-61b3-460b-b4ab-94dfdf3642fb "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:41,409:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/brains_users?select=id%3Abrain_id%2C%20rights%2C%20brains%20%28id%3A%20brain_id%2C%20status%2C%20name%2C%20brain_type%2C%20description%29&user_id=eq.39418e3b-0258-4452-af60-7acfcc1263ff&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:41,418:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/chat_history?select=%2A&chat_id=eq.6604e220-9b7d-4b82-b33f-fbd7038ec291&order=message_time "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:41,427:INFO - HTTP Request: GET http://host.docker.internal:54321/rest/v1/brains?select=id%3Abrain_id%2C%20name%2C%20%2A&brain_id=eq.b1c76842-fa7a-416e-b6a0-3f84fef9752f "HTTP/1.1 200 OK"
backend-core | 2024-03-17 08:57:41,601:INFO - HTTP Request: POST http://host.docker.internal:54321/rest/v1/rpc/match_brain "HTTP/1.1 400 Bad Request"
backend-core | INFO: 172.21.0.1:37134 - "POST /chat/6604e220-9b7d-4b82-b33f-fbd7038ec291/question/stream?brain_id=b1c76842-fa7a-416e-b6a0-3f84fef9752f HTTP/1.1" 500 Internal Server Error
backend-core | ERROR: Exception in ASGI application
backend-core | Traceback (most recent call last):
backend-core | File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
backend-core | result = await app( # type: ignore[func-returns-value]
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
backend-core | return await self.app(scope, receive, send)
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
backend-core | await super().__call__(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
backend-core | await self.middleware_stack(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
backend-core | raise exc
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
backend-core | await self.app(scope, receive, _send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 91, in __call__
backend-core | await self.simple_response(scope, receive, send, request_headers=headers)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 146, in simple_response
backend-core | await self.app(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
backend-core | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
backend-core | raise exc
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
backend-core | await app(scope, receive, sender)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__
backend-core | await self.middleware_stack(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 778, in app
backend-core | await route.handle(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle
backend-core | await self.app(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 79, in app
backend-core | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
backend-core | raise exc
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
backend-core | await app(scope, receive, sender)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 74, in app
backend-core | response = await func(request)
backend-core | ^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
backend-core | raw_response = await run_endpoint_function(
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
backend-core | return await dependant.call(**values)
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/code/modules/chat/controller/chat_routes.py", line 232, in create_stream_question_handler
backend-core | gpt_answer_generator = get_answer_generator(
backend-core | ^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/code/modules/chat/controller/chat_routes.py", line 76, in get_answer_generator
backend-core | brain, metadata_brain = brain_service.find_brain_from_question(
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/code/modules/brain/service/brain_service.py", line 109, in find_brain_from_question
backend-core | list_brains = vector_store.find_brain_closest_query(user.id, question)
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/code/vectorstore/supabase.py", line 54, in find_brain_closest_query
backend-core | ).execute()
backend-core | ^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/postgrest/_sync/request_builder.py", line 127, in execute
backend-core | raise APIError(r.json())
backend-core | postgrest.exceptions.APIError: {'code': '22000', 'details': None, 'hint': None, 'message': 'different vector dimensions 1536 and 4096'}
Twitter / LinkedIn details
No response
6 answers
xmjla07d1#
Hey @binhaoIsUsed, good to see you back! Taking on another challenge? Let's work through this one together.
The error you're running into is most likely a mismatch between the vector dimension that the find_brain_closest_query function expects and the dimension of the vectors actually produced by the Embeddings instance. find_brain_closest_query performs an RPC call with the query_embedding parameter, which must be a vector of a specific dimension. The error message indicates that the function expects 4096-dimensional vectors, while the Embeddings instance is producing 1536-dimensional ones.

To resolve this, you need to make sure the vectors produced by the Embeddings instance match the expected dimension. That could involve:
1. Checking the configuration of the Embeddings instance to make sure it is set up to produce vectors of the correct size (4096 in this case).
2. If the Embeddings instance is correct and cannot be changed, adjusting the find_brain_closest_query function to accept 1536-dimensional vectors.
3. If the find_brain_closest_query function cannot be changed, finding a way to transform or adapt the 1536-dimensional vectors into the expected 4096-dimensional format, although this may cost information or accuracy (a minimal sketch of this idea follows the list).
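To make option 3 concrete, here is a minimal, hedged sketch of what "adapting" the dimension could look like. It only illustrates the idea described above; it is not something the Quivr codebase does, and zero-padding like this will hurt retrieval quality.

```python
# Illustration only: force an embedding to a target dimension by zero-padding
# or truncating. This loses information and is not how Quivr handles embeddings.
from typing import List

def adapt_dimension(embedding: List[float], target_dim: int) -> List[float]:
    """Return `embedding` padded with zeros or truncated to exactly target_dim."""
    if len(embedding) >= target_dim:
        return embedding[:target_dim]
    return embedding + [0.0] * (target_dim - len(embedding))

query_embedding = [0.1] * 1536                       # an OpenAI-sized embedding
print(len(adapt_dimension(query_embedding, 4096)))   # 4096
```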
Unfortunately, without more detail on the specific Embeddings implementation used in CustomSupabaseVectorStore, it is hard to give a more specific solution. If you can share more about it, that will help diagnose and resolve the issue.

Sources
About Dosu
This response is meant to save you time and be useful. It is not a precise solution, but a starting point for your own research.
Help me be more useful! Please leave a 👍 if this was helpful and a 👎 if it was irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
ruarlubt2#
Sorry, the way the database is seeded means it only accepts OpenAI.
Could you delete all the rows in the brain_vectors table at localhost:54323 and then try the process again?
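If you prefer to clear the table from a script rather than the Studio UI, something like the sketch below should work. It is a guess at the local setup, not an official procedure: it assumes the local Supabase REST endpoint at localhost:54321, a local service_role key, and a brain_id column on brain_vectors as in Quivr's schema.

```python
# Hedged sketch: delete every row from brain_vectors via supabase-py instead of
# the Studio UI at localhost:54323. Replace the URL and key with your local values.
from supabase import create_client

SUPABASE_URL = "http://localhost:54321"              # assumed local REST endpoint
SERVICE_ROLE_KEY = "<your local service_role key>"   # placeholder

client = create_client(SUPABASE_URL, SERVICE_ROLE_KEY)

# A .neq() filter against a UUID no row uses matches (and deletes) every row.
client.table("brain_vectors").delete().neq(
    "brain_id", "00000000-0000-0000-0000-000000000000"
).execute()
```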
3npbholx3#
Looking to fix this bug...
jgzswidk4#
Please go ahead :)
c8ib6hqw5#
I added a brain, but the error still appears even though the brains_vector table is empty.
biswetbf6#
Here you can find the comments about adjusting the schema: Fix model attributes #2690