Bug Description
While building a Streamlit chatbot, I observed the following: chat_engine = index.as_chat_engine(chat_mode="condense_question", verbose=True)
works fine, but chat_engine = index.as_chat_engine(chat_mode="react", verbose=True)
(and other modes, such as context) raises a NotImplementedError with the Hugging Face API. In both cases I pass the same message from the Streamlit UI. So for the same message, one chat mode works and the other does not.
Version
0.10.38
Steps to Reproduce
- Create a vector index from HuggingFaceInferenceAPI and embeddings
- Create chat engines with different modes:
chat_engine = index.as_chat_engine(chat_mode="condense_question", verbose=True)
- Observe the results for the condense_question and react chat modes
Relevant Logs/Traceback
Local URL: http://localhost:8501
Network URL: http://192.168.0.7:8501
C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\pydantic\_internal\_fields.py:160: UserWarning: Field "model_id" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\langchain\_api\module_import.py:92: LangChainDeprecationWarning: Importing ChatMessageHistory from langchain.memory is deprecated. Please replace deprecated imports:
>> from langchain.memory import ChatMessageHistory
with new imports of:
>> from langchain_community.chat_message_histories import ChatMessageHistory
You can use the langchain cli to **automatically** upgrade many imports. Please see documentation here https://python.langchain.com/v0.2/docs/versions/v0_2/
warn_deprecated(
2024-05-31 23:07:27.003 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
exec(code, module.__dict__)
File "E:\ai\llmaindex\simpledocument\simpleRAG\simplerag\mychatbot.py", line 31, in <module>
response = chat_engine.chat(prompt)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 274, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\core\callbacks\utils.py", line 41, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\core\agent\runner\base.py", line 644, in chat
chat_response = self._chat(
^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 274, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\core\agent\runner\base.py", line 574, in _chat
cur_step_output = self._run_step(
^^^^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 274, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\core\agent\runner\base.py", line 411, in _run_step
cur_step_output = self.agent_worker.run_step(step, task, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\core\callbacks\utils.py", line 41, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\core\agent\react\step.py", line 738, in run_step
return self._run_step(step, task)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\core\agent\react\step.py", line 533, in _run_step
chat_response = self._llm.chat(input_chat)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 274, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
index\llms\huggingface\base.py", line 631, in chat
**{**chat_messages_to_conversational_kwargs(messages), **kwargs}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\llms\huggingface\base.py", line 436, in chat_messages_to_conversational_kwargs
raise NotImplementedError("Messages passed in must be of odd length.")
NotImplementedError: Messages passed in must be of odd length.
8 Answers
yr9zkbsy1#
Hey @1ly1mk! 👋 I'm here to help you with any bugs, questions, or suggestions you have in mind. I'm your friendly bot, ready to assist while you wait for a human maintainer.
The message "Messages passed in must be of odd length" appears because the react chat mode in the LlamaIndex framework requires the message sequence to have an odd length. This is a specific requirement of the react mode that your current implementation does not satisfy. To resolve it, make sure the message sequence passed to the react chat mode has an odd length. You can do this by adding a check before the messages are handed to the chat engine and adjusting the length as needed. Here is an example of how to modify your code so the message length is odd:
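The odd-length requirement is enforced when the chat history is converted to Hugging Face conversational kwargs (see the traceback). As a rough, self-contained sketch of that kind of check, with illustrative names and simplified types rather than the actual llama-index source:

```python
# Illustrative sketch of the odd-length validation behind this error.
# The real check lives in llama_index.llms.huggingface.base
# (chat_messages_to_conversational_kwargs); names here are simplified.

def check_odd_length(messages: list) -> None:
    # A conversational endpoint expects turns to alternate and end on a
    # user message, so a valid history has odd length: user, assistant,
    # user, ... An even-length list cannot satisfy that shape.
    if len(messages) % 2 == 0:
        raise NotImplementedError("Messages passed in must be of odd length.")

# The react agent builds [system, user] -- length 2 -- so this raises:
try:
    check_odd_length(["system prompt", "who is paul?"])
except NotImplementedError as e:
    print(e)  # Messages passed in must be of odd length.
```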
This change ensures the message length is odd before the messages reach the chat engine, which should prevent the NotImplementedError.
pobjuy322#
Some Hugging Face models are very strict about how messages are passed. It's hard to handle; the class would need to be updated to force an odd message length.
gtlvzcf83#
I added a method to ensure the messages are odd in number, but I still hit the problem. It seems the odd-length check is applied to the message sequence, not to the messages themselves.
I printed 'messages' and it produced an array of two ChatMessages, one from the system and the other from the user.
[ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content='You are designed to help with a variety of tasks, from answering questions to providing summaries to other types of analyses.
Tools
You have access to a wide variety of tools. You are responsible for using the tools in any sequence you deem appropriate to complete the task at
hand.
This may require breaking the task into subtasks and using different tools to complete each subtask.
You
have access to the following tools:
Tool Args: {"type": "object", "properties": {"input": {"title": "Input", "type": "string"}}, "required": ["input"]}
Output Format
Please answer in the same language as the question and use the following format:
\nThought: The current language of the user is: (user\'s language). I need to use a tool to help me answer the question.\nAction: tool name (one of query_engine_tool) if using a tool.\nAction Input: the input to the tool, in a JSON format representing the kwargs (e.g. {"input": "hello world", "num_beams": 5})\n
Please ALWAYS start with a Thought.
Please use a valid JSON format for the Action Input. Do NOT do this {'input': 'hello world', 'num_beams': 5}.
If this format is used, the user will respond in the following format:
\nObservation: tool response\n
You should keep repeating the above format till you have enough information to answer the question without using any more tools. At that point, you MUST respond in the one of the following two formats:
\nThought: I can answer without using any more tools. I\'ll use the user\'s language to answer\nAnswer: [your answer here (In the same language as the user\'s question)]\n
\nThought: I cannot answer the question with the provided tools.\nAnswer: [your answer here (In the same language as the user\'s question)]\n
Current Conversation
Below is the current conversation consisting of interleaving human and assistant messages.
', additional_kwargs={}), ChatMessage(role=<MessageRole.USER: 'user'>, content='who is paul? ', additional_kwargs={})]
There may be a problem in the llama_index\llms\huggingface\base.py implementation.
cigdeys34#
The NotImplementedError with the message "Messages passed in must be of odd length" occurs because the react chat mode in the LlamaIndex framework requires the message sequence to have an odd length. This is a specific requirement of the react mode that is not met in your current implementation. The error is raised because the length of the message sequence, not of an individual message, must be odd. To resolve it, check the length of the message sequence before passing it to the chat method. Here is an example implementation:
from typing import List
from llama_index.legacy.core.llms.types import ChatMessage, MessageRole

def ensure_odd_length(messages: List[ChatMessage]) -> List[ChatMessage]:
    if len(messages) % 2 == 0:
        # Add a dummy message to make the length odd
        messages.append(ChatMessage(role=MessageRole.SYSTEM, content="Dummy message"))
    return messages

# Example usage
messages = [
    ChatMessage(role=MessageRole.SYSTEM, content="System message content"),
    ChatMessage(role=MessageRole.USER, content="User message content"),
]  # This list has an even length
messages = ensure_odd_length(messages)  # This makes the length odd
response = chat_engine.chat(messages)
bq9c1y665#
I implemented the ensure-odd-messages method, and now I get another exception:
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\llms\huggingface\base.py", line 450, in chat_messages_to_conversational_kwargs
raise NotImplementedError(
NotImplementedError: Didn't handle when messages aren't ordered in alternating pairs of (<MessageRole.USER: 'user'>, <MessageRole.ASSISTANT: 'assistant'>).
I had to modify the line below in chat_memory_buffer.py, as m.content was throwing an error.
I guess it's too much to handle to make this work.
@logan-markewich@dosu
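For context, this second error comes from a stricter check in the same conversion helper: after any odd-length fix, the turns must still alternate user/assistant, which a padded dummy message breaks. A minimal sketch of that kind of validation (illustrative names, not the actual llama-index source):

```python
# Illustrative sketch of the alternating-roles validation behind the second
# NotImplementedError; the real check is in llama_index.llms.huggingface.base.

def check_alternating(roles: list) -> None:
    # Expect roles to alternate: user, assistant, user, assistant, ...
    for i, role in enumerate(roles):
        expected = "user" if i % 2 == 0 else "assistant"
        if role != expected:
            raise NotImplementedError(
                "Didn't handle when messages aren't ordered in alternating "
                "pairs of (user, assistant)."
            )

check_alternating(["user", "assistant", "user"])  # passes silently
try:
    # Appending a dummy system message makes the length odd but breaks
    # the alternation, so the next check fails instead.
    check_alternating(["system", "user", "system"])
except NotImplementedError as e:
    print("raised:", e)
```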
iyfamqjs6#
To resolve the NotImplementedError related to message ordering, you need to make sure the messages are ordered in alternating <MessageRole.USER: 'user'> and <MessageRole.ASSISTANT: 'assistant'> pairs. You can use the merge_neighboring_same_role_messages function to help maintain this ordering. This function first merges neighboring messages that share the same role, then checks that the messages alternate between USER and ASSISTANT pairs, raising a ValueError if the ordering is wrong. With this approach the message sequence satisfies the requirements of chat_messages_to_conversational_kwargs and avoids the NotImplementedError.
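As a rough, self-contained sketch of what the merge step does, using a simplified stand-in for ChatMessage (illustrative types, not the library source linked below):

```python
from dataclasses import dataclass, field

# Simplified stand-in for llama_index's ChatMessage; illustrative only.
@dataclass
class Msg:
    role: str
    content: str
    additional_kwargs: dict = field(default_factory=dict)

def merge_neighboring_same_role(messages: list) -> list:
    """Collapse consecutive messages with the same role into one message."""
    merged: list = []
    for msg in messages:
        if merged and merged[-1].role == msg.role:
            prev = merged[-1]
            merged[-1] = Msg(
                role=msg.role,
                # Join the message bodies so no content is lost
                content=prev.content + "\n" + msg.content,
                additional_kwargs={**prev.additional_kwargs,
                                   **msg.additional_kwargs},
            )
        else:
            merged.append(msg)
    return merged

msgs = [Msg("user", "hi"), Msg("user", "who is paul?"), Msg("assistant", "...")]
print([m.role for m in merge_neighboring_same_role(msgs)])  # ['user', 'assistant']
```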
References: 1 (https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/utilities/gemini_utils.py), 2 (https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/llms/gemini_utils.py)
dzhpxtsq7#
When I implemented this, the while loop raised the following exception:
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\llama_index\legacy\llms\gemini_utils.py", line 116, in merge_neighboring_same_role_messages
merged_message = ChatMessage(
^^^^^^^^^^^^
File "C:\Users\lenovo\AppData\Local\pypoetry\Cache\virtualenvs\simplerag-GtP-BaND-py3.12\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatMessage
additional_kwargs
none is not an allowed value (type=type_error.none.not_allowed)
@logan-markewich@dosu
7dl7o3gd8#
The ValidationError you're seeing indicates that the additional_kwargs field of the ChatMessage class cannot be None. This field must be a dictionary, even an empty one. To fix this, make sure additional_kwargs is always initialized to an empty dictionary whenever it is not provided. Here is an example of how to modify the merge_neighboring_same_role_messages function to handle this. This change ensures additional_kwargs is always a dictionary, preventing the ValidationError.
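A minimal sketch of that guard, again using a simplified stand-in for ChatMessage (illustrative names, not the library source): coerce each message's additional_kwargs to {} before merging.

```python
from dataclasses import dataclass
from typing import Optional

# Simplified stand-in for llama_index's ChatMessage; illustrative only.
@dataclass
class Msg:
    role: str
    content: str
    additional_kwargs: Optional[dict] = None

def merge_neighboring_same_role(messages: list) -> list:
    """Merge neighboring same-role messages, treating None kwargs as {}."""
    merged: list = []
    for msg in messages:
        kwargs = msg.additional_kwargs or {}  # guard against None
        if merged and merged[-1].role == msg.role:
            prev = merged[-1]
            merged[-1] = Msg(
                role=msg.role,
                content=prev.content + "\n" + msg.content,
                # Both sides are guaranteed dicts here, so merging can't
                # trip the "none is not an allowed value" validation.
                additional_kwargs={**(prev.additional_kwargs or {}), **kwargs},
            )
        else:
            merged.append(Msg(msg.role, msg.content, kwargs))
    return merged

out = merge_neighboring_same_role([Msg("user", "a"), Msg("user", "b")])
print(out[0].additional_kwargs)  # {}
```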