LangChain's prompting is like some sacred design; when I use it in an AgentExecutor I get lots of errors. Can you improve it?

mlmc2os5 · asked 3 months ago · in Other
Follow (0) | Answers (3) | Views (65)

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

from langchain.agents import AgentExecutor
from langchain.tools.render import ToolsRenderer, render_text_description_and_args
from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts import PromptTemplate

PROMPT_TEMPLATE = """Respond to the human as helpfully and accurately as possible. You have access to the following tools:

{tools}

    Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).

    Valid "action" values: "Final Answer" or {tool_names}

    Provide only ONE action per $JSON_BLOB, as shown:

    ```
    {{
      "action": $TOOL_NAME,
      "action_input": $INPUT
    }}
    ```

    Follow this format:

    Question: input question to answer
    Thought: consider previous and subsequent steps
    Action:
    ```
    $JSON_BLOB
    ```
    Observation: action result
    ... (repeat Thought/Action/Observation N times)
    Thought: I know what to respond
    Action:
    ```
    {{
      "action": "Final Answer",
      "action_input": "Final response to human"
    }}
    ```

    Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation

    {input}

    {agent_scratchpad}
"""
def create_agent(
    llm: BaseLanguageModel,
    tools: list,
    output_parser: WebCrawleRegexParser,  # custom output parser class, not defined in this snippet
    tools_renderer: ToolsRenderer = render_text_description_and_args,
    **kwargs,
):
    prompt_template = PromptTemplate.from_template(PROMPT_TEMPLATE)

    # The agent prompt must expose these variables.
    missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
        prompt_template.input_variables
    )
    if missing_vars:
        raise ValueError(f"Prompt missing required variables: {missing_vars}")

    from langchain.agents.format_scratchpad import format_log_to_str

    prompt = prompt_template.format(
        tools=tools_renderer(list(tools)),
        tool_names=", ".join([t.name for t in tools]),
    )
    print(f"prompt={prompt}")

    stop = ["\nObservation"]
    llm_with_stop = llm.bind(stop=stop)

    agent = prompt_template | llm_with_stop
    return agent


agent = create_agent(llm=llm, tools=tools, output_parser=None)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
ret = agent_executor.invoke({"input": "hi"})

Error Message and Stack Trace (if applicable)

文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain\chains\base.py",第163行,在调用时引发异常 e
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain\chains\base.py",第153行,在调用时引发异常 e
self._call(inputs, run_manager=run_manager)
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain\agents\agent.py",第1432行,在 _call 方法中引发异常 e
[
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain\agents\agent.py",第1138行,在 _call 方法中引发异常 e
[
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain\agents\agent.py",第1138行,在 _call 方法中引发异常 e
[
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain\agents\agent.py",第1166行,在 _iter_next_step 方法中引发异常 e
输出 = self.agent.plan(
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain\agents\agent.py",第397行,在 plan 方法中引发异常 e for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py",第2875行,在 stream 方法中引发异常 e yield from self.transform(iter([input]), config, **kwargs)
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py",第2862行,在 transform 方法中引发异常 e yield from self._transform_stream_with_config(
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py",第1880行,在 _transform_stream_with_config 方法中引发异常 e chunk: Output = context.run(next, iterator) # type: ignore
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py",第2826行,在 _transform 方法中引发异常 e for output in final_pipeline:
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py",第4722行,在 transform 方法中引发异常 e yield from self.bound.transform(
文件 "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py",第1283行,在 transform 方法中引发异常 e for chunk in input:
文件 "D:\ProgramData\anaconda3\lib\site-packages

iyfjxgzm1#

Completely agree. The JSON I put in my prompt gets mistakenly recognized as template variables by the f-string formatter, so the error then demands that I fill in those "variables", which is just nonsense.
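
A common workaround (not an official fix; the template text below is only illustrative) is to double the braces of any literal JSON so the f-string formatter treats them as plain text, or to switch the template to jinja2 formatting, where single braces are never parsed as variables:

from langchain_core.prompts import PromptTemplate

# Doubled braces ({{ }}) render as literal JSON; {question} stays a template variable.
escaped = PromptTemplate.from_template(
    'Reply as JSON, e.g. {{"action": "Final Answer"}}. Question: {question}'
)
print(escaped.format(question="hi"))

# Alternative: jinja2 templating (requires the jinja2 package); single braces
# are plain text here and variables are written as {{ question }} instead.
jinja = PromptTemplate.from_template(
    'Reply as JSON, e.g. {"action": "Final Answer"}. Question: {{ question }}',
    template_format="jinja2",
)
print(jinja.format(question="hi"))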

g6baxovj2#

This simple example illustrates all of these problems:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
chat = ChatOpenAI()
role_play_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system", "you are an assistant",
        ),
        ("assistant", "{'reply': 'ok'}"),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

chain = role_play_prompt | chat
chain.invoke({"messages": [{"role": "user", "content": "hello"}]})

Logs

Traceback (most recent call last):
  File "e:\2024\coaxing-bot\test.py", line 33, in <module>
    chain.invoke({"message": [{"role": "user", "content": "hello"}]})
  File "D:\ProgramData\Anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 2499, in invoke
    input = step.invoke(
  File "D:\ProgramData\Anaconda3\lib\site-packages\langchain_core\prompts\base.py", line 128, in invoke
    return self._call_with_config(
  File "D:\ProgramData\Anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 1626, in _call_with_config
    context.run(
  File "D:\ProgramData\Anaconda3\lib\site-packages\langchain_core\runnables\config.py", line 347, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
  File "D:\ProgramData\Anaconda3\lib\site-packages\langchain_core\prompts\base.py", line 111, in _format_prompt_with_error_handling
    _inner_input = self._validate_input(inner_input)
  File "D:\ProgramData\Anaconda3\lib\site-packages\langchain_core\prompts\base.py", line 103, in _validate_input
    raise KeyError(
KeyError: 'Input to ChatPromptTemplate is missing variables {"\'reply\'", \'messages\'}.  Expected: ["\'reply\'", \'messages\'] Received: [\'message\']'
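
For what it's worth, the example above stops raising once the literal JSON in the assistant message is escaped and the invoke key matches the placeholder name. A minimal sketch (assuming an OpenAI API key is configured):

from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

chat = ChatOpenAI()
role_play_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "you are an assistant"),
        # Doubled braces keep the JSON literal, so 'reply' is no longer
        # treated as a template variable.
        ("assistant", "{{'reply': 'ok'}}"),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

chain = role_play_prompt | chat
# The key must match the MessagesPlaceholder name: "messages", not "message".
print(chain.invoke({"messages": [HumanMessage(content="hello")]}))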

6yt4nkrj3#

I worked around this by using a class that doesn't run a formatter.
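
There is no code in that comment, but one way to read "a class without a formatter" is to build the prompt from message objects rather than ("role", "template") tuples; message objects are inserted verbatim and never run through the f-string formatter, so literal braces survive. A sketch of that idea:

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# BaseMessage instances are kept as-is, so the braces in the JSON below are
# never interpreted as template variables.
prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessage(content="you are an assistant"),
        AIMessage(content="{'reply': 'ok'}"),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

print(prompt.invoke({"messages": [HumanMessage(content="hello")]}))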
