LangChain issue related to /v0.2/docs/tutorials/qa_chat_history/

9vw9lbht · posted 24 days ago in Other

URL

https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/

Checklist

  • I added a very descriptive title to this issue.
  • If applicable, I included a link to the documentation page I am referring to.

Issue with current documentation:

How can the number of previous conversation turns kept in the checkpointer memory be limited? As the documentation stands, the checkpointer just keeps growing and eventually exceeds the LLM's input token limit.
https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/

Idea or request for content:

Please add a section on how to limit the number of previous conversation turns that go into the checkpointer.
https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/
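
For context, the tutorial pattern this refers to looks roughly like the sketch below (not quoted from the issue; llm and tools are assumed to be defined as in the tutorial). Every turn on the same thread_id is appended to the checkpointed state, which is what grows without bound:

from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

# llm and tools are assumed to be set up as in the tutorial
memory = MemorySaver()
agent_executor = create_react_agent(llm, tools, checkpointer=memory)

config = {"configurable": {"thread_id": "abc123"}}
# each invoke on the same thread_id appends to the stored message list
agent_executor.invoke(
    {"messages": [HumanMessage(content="hi, I'm Bob")]}, config=config
)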

cotxawn7 1#

I suspect this is more than just a documentation issue. A react agent created with
from langgraph.prebuilt import create_react_agent
, as implemented in langgraph, keeps its messages in AgentState. There is currently no implementation that provides some kind of buffer window memory to manage those messages, the way
ConversationBufferWindowMemory
does.
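
For reference, a minimal sketch (not from this thread) of the windowing behaviour that ConversationBufferWindowMemory provides in the legacy memory API, where k caps how many recent exchanges are kept:

from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=2)  # keep only the last 2 exchanges
memory.save_context({"input": "hi, I'm Bob"}, {"output": "Hello Bob!"})
memory.save_context({"input": "what is 1+1?"}, {"output": "2"})
memory.save_context({"input": "what's my name?"}, {"output": "Bob"})
print(memory.load_memory_variables({}))  # the oldest exchange has been dropped

Nothing equivalent is wired into the prebuilt agent's AgentState out of the box.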

enxuqcxy 2#

The workaround is to instantiate a create_react_agent with tools, place the agent in an LCEL chain with a prompt, and then wrap all of it with RunnableWithMessageHistory. See: https://python.langchain.com/v0.2/docs/tutorials/chatbot/
The message list can then be limited quite easily.
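
As a hedged aside (not part of the original answer): sufficiently recent versions of langchain-core also ship a trim_messages helper; with token_counter=len it keeps the last N messages rather than counting tokens. A minimal sketch:

from langchain_core.messages import AIMessage, HumanMessage, trim_messages

history = [
    HumanMessage(content="hi, I'm Bob"),
    AIMessage(content="Hello Bob!"),
    HumanMessage(content="what is 1+1?"),
    AIMessage(content="2"),
]
trimmed = trim_messages(
    history,
    strategy="last",    # keep the most recent messages
    token_counter=len,  # count messages instead of tokens
    max_tokens=2,       # keep at most 2 messages
)
print(trimmed)  # only the last human/AI pair survives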

mwkjh3gx 3#

Thanks @amersheikh, this approach does work as a workaround.

# assumed imports so the snippet runs on its own; llm and tools are
# defined earlier as in the tutorial
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import HumanMessage
from langchain_core.runnables import RunnablePassthrough
from langchain_core.runnables.history import RunnableWithMessageHistory
from langgraph.prebuilt import chat_agent_executor

# create agent executor without a checkpointer
agent_executor = chat_agent_executor.create_tool_calling_executor(
    llm, tools
)

# limit the message history: keep only the last k messages before they reach the agent
def filter_messages(messages, k=3):
    return messages[-k:]

chain = (
    RunnablePassthrough.assign(messages=lambda x: filter_messages(x["messages"]))
    | agent_executor
)

# manage message history using an in-memory store keyed by session_id
store = {}
def get_session_history(session_id: str) -> BaseChatMessageHistory:
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

with_message_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="messages",
)

config = {"configurable": {"session_id": "abc20"}}

# 1st run: tell the AI my name
response = with_message_history.invoke(
    {
        "messages": [HumanMessage(content="hi, I'm Bob")],
    },
    config=config,
)
print(response)

# 2nd run: send a filler question to push older messages out of the window
response = with_message_history.invoke(
    {
        "messages": [HumanMessage(content="what is 1+1?")],
    },
    config=config,
)
print(response)

# 3rd run: ask the AI what my name is; it should have forgotten by now
response = with_message_history.invoke(
    {
        "messages": [HumanMessage(content="what's my name?")],
    },
    config=config,
)
print(response)

However, I'm not sure whether this is an idiomatic way of using StateGraph. A built-in way to limit the message history seems like a feature worth having.
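
One pattern that does fit StateGraph, sketched below under the assumption that the langgraph version in use provides MessagesState and that its add_messages reducer honors RemoveMessage, is to trim the history inside a dedicated node:

from langchain_core.messages import RemoveMessage
from langgraph.graph import END, START, MessagesState, StateGraph

def trim_history(state: MessagesState):
    # flag everything except the last 3 messages for deletion;
    # the add_messages reducer removes messages returned as RemoveMessage
    messages = state["messages"]
    if len(messages) <= 3:
        return {"messages": []}
    return {"messages": [RemoveMessage(id=m.id) for m in messages[:-3]]}

def call_model(state: MessagesState):
    # placeholder node: a real graph would invoke the LLM or agent here
    return {"messages": []}

builder = StateGraph(MessagesState)
builder.add_node("trim_history", trim_history)
builder.add_node("call_model", call_model)
builder.add_edge(START, "trim_history")
builder.add_edge("trim_history", "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()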
