langchain: agent created by create_react_agent() does not support early_stopping_method='generate'

14ifxucb · posted 3 months ago

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than in my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

The following code:

from langchain.agents import AgentExecutor, create_react_agent

agent = create_react_agent(llm, tools, prompt)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=1,
    early_stopping_method="generate",
)
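
Invoking the executor is enough to hit the problem (the input below is only illustrative; the real prompt variables may differ): once max_iterations is reached, the executor falls back to the agent's return_stopped_response(), quoted further down, which only handles "force".

# Illustrative input; any question that needs more than one tool call will do.
agent_executor.invoke({"input": "a question that needs several tool calls"})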

Error Message and Stack Trace (if applicable)

No response

Description

I am creating an agent with create_react_agent(). When the maximum number of iterations is reached, I want the LLM to produce a final answer from the information it has gathered so far. However, the agent created by create_react_agent() does not seem to support early_stopping_method='generate': the agent type is BaseSingleActionAgent, and its return_stopped_response() cannot handle early_stopping_method='generate'.

def return_stopped_response(
    self,
    early_stopping_method: str,
    intermediate_steps: List[Tuple[AgentAction, str]],
    **kwargs: Any,
) -> AgentFinish:
    """Return response when agent has been stopped due to max iterations."""
    if early_stopping_method == "force":
        # `force` just returns a constant string
        return AgentFinish(
            {"output": "Agent stopped due to iteration limit or time limit."}, ""
        )
    else:
        raise ValueError(
            f"Got unsupported early_stopping_method `{early_stopping_method}`"
        )

When I use create_react_agent(), can I assign the resulting agent as an Agent (BaseSingleActionAgent) instead?

System Info

OS: Windows
OS Version: 10.0.19043
Python Version: 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]

Package Information

langchain_core: 0.2.13
langchain: 0.2.7
langchain_community: 0.2.7
langsmith: 0.1.85
langchain_google_alloydb_pg: 0.2.2
langchain_google_community: 1.0.6
langchain_google_vertexai: 1.0.6
langchain_openai: 0.1.8
langchain_text_splitters: 0.2.2
langchainhub: 0.1.20
langgraph: 0.1.7
langserve: 0.2.2

bvpmtnay · #1

Solution: we recommend transitioning to langgraph agents.

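A rough sketch (not from the original reply) of what such a migration could look like, assuming langgraph's prebuilt create_react_agent and reusing the llm and tools from the question; the query string and the step limit of 6 are placeholders:

from langgraph.prebuilt import create_react_agent

# Prebuilt ReAct-style agent graph; llm and tools are the same objects as above.
graph = create_react_agent(llm, tools)

# recursion_limit bounds the number of graph steps, playing roughly the role
# that max_iterations played for AgentExecutor.
result = graph.invoke(
    {"messages": [("user", "your question here")]},
    {"recursion_limit": 6},
)
print(result["messages"][-1].content)

Note that when the limit is hit the graph raises a recursion error rather than generating a final answer, so an "answer from what you have so far" fallback has to be handled by the caller.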

aiazj4mn · #2

You can modify your source code to tackle this issue for now:

def return_stopped_response(
    self,
    early_stopping_method: str,
    intermediate_steps: List[Tuple[AgentAction, str]],
    **kwargs: Any,
) -> AgentFinish:
    """Return response when agent has been stopped due to max iterations."""
    if early_stopping_method == "force":
        # `force` just returns a constant string
        return AgentFinish(
            {"output": "Agent stopped due to iteration limit or time limit."}, ""
        )
    elif early_stopping_method == "generate":
        # `generate` does one final forward pass
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += (
                f"\n{self.observation_prefix}{observation}\n{self.llm_prefix}"
            )
        # Adding to the previous steps, we now tell the LLM to make a final prediction
        thoughts += (
            "\n\nI now need to return a final answer based on the previous steps:"
        )
        new_inputs = {"agent_scratchpad": thoughts, "stop": self._stop}
        full_inputs = {**kwargs, **new_inputs}
        full_output = self.llm_chain.predict(**full_inputs)
        # We try to extract a final answer
        parsed_output = self.output_parser.parse(full_output)
        if isinstance(parsed_output, AgentFinish):
            # If we can extract a final answer, we return it directly
            return parsed_output
        else:
            # If we cannot extract a final answer, we just return the full output
            return AgentFinish({"output": full_output}, full_output)
    else:
        raise ValueError(
            "early_stopping_method should be one of `force` or `generate`, "
            f"got {early_stopping_method}"
        )
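
This appears to be the `generate` branch from the legacy Agent.return_stopped_response() implementation; it relies on attributes such as llm_chain, output_parser, observation_prefix, and _stop, which the legacy Agent class has but the RunnableAgent returned by create_react_agent() does not, so treat it as a starting point rather than a drop-in patch.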
