Checked other resources
- I added a very descriptive title to this issue.
- I searched the LangChain documentation with the integrated search.
- I used the GitHub search to find a similar issue and didn't find it.
- I am sure that this is a bug in LangChain rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
Example Code
from pprint import pprint

from langchain.agents import (
    AgentType,
    OpenAIMultiFunctionsAgent,
    Tool,
    initialize_agent,
)
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.prompts import MessagesPlaceholder
from langchain.schema import SystemMessage  # needed by create_prompt below
from langchain_community.chat_models import BedrockChat
from langchain_experimental.utilities import PythonREPL
# You can create the tool to pass to an agent
repl_tool = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
    func=PythonREPL().run,
)
memory = ConversationBufferMemory(return_messages=True, k=10, memory_key="chat_history")
prompt = OpenAIMultiFunctionsAgent.create_prompt(
    system_message=SystemMessage(content="You are a helpful AI bot"),
    extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")],
)
llm = BedrockChat(
    model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
    client=client,  # initialized elsewhere
    model_kwargs={"max_tokens": 4050, "temperature": 0.5},
    verbose=True,
)
tools = [
    repl_tool,
]
agent_executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=10,
    memory=memory,
    prompt=prompt,
)
res = agent_executor.invoke({
    'input': 'hi how are you?'
})
print(res['output'])
# Hello! I'm an AI assistant, so I don't have feelings, but I'm functioning well and ready to help. How can I assist you today?
res = agent_executor.invoke({
    "input": "what was my previous message?"
})
print(res['output'])
# I'm sorry, but I don't have access to any previous messages. Each interaction starts fresh, so I don't have information about what you said before this current question. If you have a specific topic or question you'd like to discuss, please feel free to ask and I'll be happy to help.
# but when I checked the memory buffer
print(memory.buffer)
# [HumanMessage(content='hi how are you?'), AIMessage(content="Hello! As an AI assistant, I don't have feelings, but I'm functioning well and ready to help you. How can I assist you today?"), HumanMessage(content='hi how are you?'), AIMessage(content="Hello! I'm an AI assistant, so I don't have feelings, but I'm functioning well and ready to help. How can I assist you today?"), HumanMessage(content='what was my previous message?'), AIMessage(content="I'm sorry, but I don't have access to any previous messages. Each interaction starts fresh, so I don't have information about what you said before this current question. If you have a specific topic or question you'd like to discuss, please feel free to ask and I'll be happy to help.")]
# As you can see memory is getting updated
# so I checked the prompt template of the agent executor
pprint(agent_executor.agent.llm_chain.prompt)
# ChatPromptTemplate(input_variables=['agent_scratchpad', 'input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='Respond to the human as helpfully and accurately as possible. You have access to the following tools:\n\npython_repl: A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`., args: {{\'tool_input\': {{\'type\': \'string\'}}}}\n\nUse a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or python_repl\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}\n```\n\nBegin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['agent_scratchpad', 'input'], template='{input}\n\n{agent_scratchpad}'))])
# As you can see there is no input variable placeholder for `chat_history` (the memory_key)
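The mechanism can be illustrated without LangChain at all: a format template that has no slot for a variable simply never receives it, so the memory contents are silently dropped even though the buffer is populated. A minimal stdlib sketch (the template strings mirror the dumped prompt above; the memory text is illustrative):

```python
# The human-message template the agent actually built (no chat_history slot),
# versus the template the reporter intended.
default_template = "{input}\n\n{agent_scratchpad}"
intended_template = "{chat_history}\n{input}\n\n{agent_scratchpad}"

chat_history = "Human: hi how are you?\nAI: Hello! How can I assist you today?"

# The default template only consumes 'input' and 'agent_scratchpad', so the
# memory text never reaches the model regardless of what the buffer holds.
without_memory = default_template.format(
    input="what was my previous message?", agent_scratchpad=""
)
print("hi how are you" in without_memory)  # False: memory dropped

with_memory = intended_template.format(
    chat_history=chat_history,
    input="what was my previous message?",
    agent_scratchpad="",
)
print("hi how are you" in with_memory)  # True: memory included
```

This is exactly the symptom above: `memory.buffer` keeps growing, but the default prompt has nowhere to put it.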
Error Message and Stack Trace (if applicable)
Description
- I am trying to use an agent executor (AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION) with memory.
- I am passing the correct prompt template, which contains the memory_key.
- The prompt template of the initialized agent executor falls back to a default prompt template that does not contain the memory_key placeholder.
1 answer
I tried using a normal chat prompt template, and the result was still the same.