langchain: agents.openai_assistant.base.OpenAIAssistantRunnable assumes an Optional field is present


Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

The relevant code in agents.openai_assistant.base.OpenAIAssistantRunnable is:

required_tool_call_ids = {
    tc.id for tc in run.required_action.submit_tool_outputs.tool_calls
}

See https://github.com/langchain-ai/langchain/blob/langchain%3D%3D0.2.11/libs/langchain/langchain/agents/openai_assistant/base.py#L497.
required_action is an optional field on OpenAI's Run entity; see https://github.com/openai/openai-python/blob/v1.37.0/src/openai/types/beta/threads/run.py#L161.
When run.required_action is None, which does sometimes happen, this expression raises an error.
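
One way to avoid the crash would be to guard the dereference. A minimal sketch of a defensive rewrite (my own suggestion, not an actual patch in the repo), assuming an empty set is an acceptable fallback when no tool outputs are pending:

```python
# Hypothetical guard around the failing expression in _parse_intermediate_steps.
# If the run is not waiting on tool outputs, required_action is None, so fall
# back to an empty set instead of dereferencing it.
if run.required_action is not None:
    required_tool_call_ids = {
        tc.id for tc in run.required_action.submit_tool_outputs.tool_calls
    }
else:
    required_tool_call_ids = set()
```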

Error Message and Stack Trace (if applicable)

AttributeError: 'NoneType' object has no attribute 'submit_tool_outputs'

/SITE_PACKAGES/langchain/agents/openai_assistant/base.py:497 in _parse_intermediate_steps
495:        run = self._wait_for_run(last_action.run_id, last_action.thread_id)
496:        required_tool_call_ids = {
497:            tc.id for tc in run.required_action.submit_tool_outputs.tool_calls
498:        }
499:        tool_outputs = [

/SITE_PACKAGES/langchain_community/agents/openai_assistant/base.py:312 in invoke
310:            # Being run within AgentExecutor and there are tool outputs to submit.
311:            if self.as_agent and input.get("intermediate_steps"):
312:                tool_outputs = self._parse_intermediate_steps(
313:                    input["intermediate_steps"]
314:                )

/SITE_PACKAGES/langchain_community/agents/openai_assistant/base.py:347 in invoke
345:        except BaseException as e:
346:            run_manager.on_chain_error(e)
347:            raise e
348:        try:
349:            response = self._get_response(run)

/SITE_PACKAGES/langchain_core/runnables/base.py:854 in stream
852:            The output of the Runnable.
853:        """
854:        yield self.invoke(input, config, **kwargs)
855:
856:    async def astream(

/SITE_PACKAGES/langchain/agents/agent.py:580 in plan
578:            # Because the response from the plan is not a generator, we need to
579:            # accumulate the output into final output and return that.
580:            for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
581:                if final_output is None:
582:                    final_output = chunk

/SITE_PACKAGES/langchain/agents/agent.py:1346 in _iter_next_step
1344:
1345:            # Call the LLM to see what to do.
1346:            output = self.agent.plan(
1347:                intermediate_steps,
1348:                callbacks=run_manager.get_child() if run_manager else None,

/SITE_PACKAGES/langchain/agents/agent.py:1318 in <listcomp>
1316:    ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1317:        return self._consume_next_step(
1318:            [
1319:                a
1320:                for a in self._iter_next_step(

/SITE_PACKAGES/langchain/agents/agent.py:1318 in _take_next_step
1316:    ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1317:        return self._consume_next_step(
1318:            [
1319:                a
1320:                for a in self._iter_next_step(

/SITE_PACKAGES/langchain/agents/agent.py:1612 in _call
1610:        # We now enter the agent loop (until it returns something).
1611:        while self._should_continue(iterations, time_elapsed):
1612:            next_step_output = self._take_next_step(
1613:                name_to_tool_map,
1614:                color_mapping,

/SITE_PACKAGES/langchain/chains/base.py:156 in invoke
154:            self._validate_inputs(inputs)
155:            outputs = (
156:                self._call(inputs, run_manager=run_manager)
157:                if new_arg_supported
158:                else self._call(inputs)

/SITE_PACKAGES/langchain/chains/base.py:166 in invoke
164:        except BaseException as e:
165:            run_manager.on_chain_error(e)
166:            raise e
167:        run_manager.on_chain_end(outputs)
168:

/SITE_PACKAGES/langchain_core/runnables/base.py:5057 in invoke
5055:        **kwargs: Optional[Any],
5056:    ) -> Output:
5057:        return self.bound.invoke(
5058:            input,
5059:            self._merge_configs(config),

PROJECT_ROOT/assistants/openai_native_assistant.py:583 in _run
581:                metadata=get_contextvars()
582:            ) as manager:
583:                result = agent_executor.invoke(run_args, config=dict(callbacks=manager))

Description

OpenAIAssistantRunnable._parse_intermediate_steps assumes that every OpenAI run will have a required_action, but that is not the case.
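
For context, a run only carries required_action while it is blocked waiting for tool outputs (status "requires_action"); for completed, failed, expired, or cancelled runs the field is None. A minimal sketch against the openai Python client (v1.x, as pinned in the link above), using placeholder thread and run IDs:

```python
from openai import OpenAI

client = OpenAI()

# Placeholder IDs for an existing thread and run.
run = client.beta.threads.runs.retrieve("run_abc123", thread_id="thread_abc123")

# required_action is populated only while the run is waiting for tool outputs;
# in every other status it is None, which is exactly what
# _parse_intermediate_steps ends up dereferencing.
if run.status == "requires_action" and run.required_action is not None:
    tool_calls = run.required_action.submit_tool_outputs.tool_calls
else:
    tool_calls = []
```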

System Info

System Information
------------------
> OS:  Darwin
> OS Version:  Darwin Kernel Version 23.5.0: Wed May  1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version:  3.11.7 (main, Jan  2 2024, 08:56:15) [Clang 15.0.0 (clang-1500.1.0.2.5)]

Package Information
-------------------
> langchain_core: 0.2.13
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.81
> langchain_anthropic: 0.1.19
> langchain_exa: 0.1.0
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.0

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:

> langgraph
> langserve
