langchain: AgentExecutor with specific search tools causes an error

vulvrdjw  posted 2 months ago in Other

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

# Defining the Bing Search Tool
from langchain_community.utilities import BingSearchAPIWrapper
from langchain_community.tools.bing_search import BingSearchResults
import os

BING_SUBSCRIPTION_KEY = os.getenv("BING_SUBSCRIPTION_KEY")

api_wrapper = BingSearchAPIWrapper(
    bing_subscription_key=BING_SUBSCRIPTION_KEY,
    bing_search_url="https://api.bing.microsoft.com/v7.0/search",
)
bing_tool = BingSearchResults(api_wrapper=api_wrapper)

# Defining the Agent elements
from langchain.agents import AgentExecutor
from langchain_openai import AzureChatOpenAI
from langchain_core.runnables import RunnablePassthrough
from langchain_core.utils.utils import convert_to_secret_str
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain.agents.format_scratchpad.openai_tools import (
    format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain import hub

instructions = """You are an assistant."""
base_prompt = hub.pull("langchain-ai/openai-functions-template")
prompt = base_prompt.partial(instructions=instructions)
llm = AzureChatOpenAI(
    azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=convert_to_secret_str(os.getenv("AZURE_OPENAI_API_KEY")), # type: ignore
    api_version=os.getenv("AZURE_OPENAI_API_VERSION"), # type: ignore
    temperature=0,
)
bing_tools = [bing_tool]
bing_llm_with_tools = llm.bind(tools=[convert_to_openai_tool(tool) for tool in bing_tools])

# Defining the Agent
from langchain_core.runnables import RunnableSequence

bing_agent = RunnableSequence(
    RunnablePassthrough.assign(
        agent_scratchpad=lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        )
    ),
    # RunnablePassthrough()
    prompt,
    bing_llm_with_tools,
    OpenAIToolsAgentOutputParser(),
)

# Defining the Agent Executor
bing_agent_executor = AgentExecutor(
    agent=bing_agent,
    tools=bing_tools,
    verbose=True,
)

# Calling the Agent Executor
bing_agent_executor.invoke({"input":"tell me about the last version of angular"})
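For context on the `llm.bind(tools=...)` step above: `convert_to_openai_tool` turns each tool into an OpenAI function-calling spec that is sent as JSON in every request. The dict below is an illustrative assumption of that shape (field values are not taken from the real Bing tool), showing why everything in the tool path must survive `json.dumps`:

```python
import json

# Illustrative only: the approximate shape of the spec produced by
# convert_to_openai_tool for a search tool. Descriptions and parameter
# details here are assumptions, not the actual Bing tool schema.
bing_tool_spec = {
    "type": "function",
    "function": {
        "name": "bing_search_results_json",
        "description": "A wrapper around Bing Search.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "search query"},
            },
            "required": ["query"],
        },
    },
}

# The spec must round-trip through JSON, since it travels in the API payload.
payload = json.dumps(bing_tool_spec)
print(payload[:40])
```

The tool spec itself serializes fine; as the answers below show, the failure happens later, when the model echoes back tool-call args that contain a live Python object.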

Error Message and Stack Trace (if applicable)

TypeError: Object of type CallbackManagerForToolRun is not JSON serializable

{
	"name": "TypeError",
	"message": "Object of type CallbackManagerForToolRun is not JSON serializable",
	"stack": "---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[31], line 1
----> 1 bing_agent_executor.invoke({\"input\":\"tell me about the last version of angular\"})
      3 print(\"done\")

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
    164 except BaseException as e:
    165     run_manager.on_chain_error(e)
--> 166     raise e
    167 run_manager.on_chain_end(outputs)
    169 if include_run_info:

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
    153 try:
    154     self._validate_inputs(inputs)
    155     outputs = (
--> 156         self._call(inputs, run_manager=run_manager)
    157         if new_arg_supported
    158         else self._call(inputs)
    159     )
    161     final_outputs: Dict[str, Any] = self.prep_outputs(
    162         inputs, outputs, return_only_outputs
    163     )
    164 except BaseException as e:

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1612, in AgentExecutor._call(self, inputs, run_manager)
   1610 # We now enter the agent loop (until it returns something).
   1611 while self._should_continue(iterations, time_elapsed):
-> 1612     next_step_output = self._take_next_step(
   1613         name_to_tool_map,
   1614         color_mapping,
   1615         inputs,
   1616         intermediate_steps,
   1617         run_manager=run_manager,
   1618     )
   1619     if isinstance(next_step_output, AgentFinish):
   1620         return self._return(
   1621             next_step_output, intermediate_steps, run_manager=run_manager
   1622         )

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1318, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1309 def _take_next_step(
   1310     self,
   1311     name_to_tool_map: Dict[str, BaseTool],
   (...)
   1315     run_manager: Optional[CallbackManagerForChainRun] = None,
   1316 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1317     return self._consume_next_step(
-> 1318         [
   1319             a
   1320             for a in self._iter_next_step(
   1321                 name_to_tool_map,
   1322                 color_mapping,
   1323                 inputs,
   1324                 intermediate_steps,
   1325                 run_manager,
   1326             )
   1327         ]
   1328     )

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1318, in <listcomp>(.0)
   1309 def _take_next_step(
   1310     self,
   1311     name_to_tool_map: Dict[str, BaseTool],
   (...)
   1315     run_manager: Optional[CallbackManagerForChainRun] = None,
   1316 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1317     return self._consume_next_step(
-> 1318         [
   1319             a
   1320             for a in self._iter_next_step(
   1321                 name_to_tool_map,
   1322                 color_mapping,
   1323                 inputs,
   1324                 intermediate_steps,
   1325                 run_manager,
   1326             )
   1327         ]
   1328     )

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1346, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1343     intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
   1345     # Call the LLM to see what to do.
-> 1346     output = self.agent.plan(
   1347         intermediate_steps,
   1348         callbacks=run_manager.get_child() if run_manager else None,
   1349         **inputs,
   1350     )
   1351 except OutputParserException as e:
   1352     if isinstance(self.handle_parsing_errors, bool):

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:580, in RunnableMultiActionAgent.plan(self, intermediate_steps, callbacks, **kwargs)
    572 final_output: Any = None
    573 if self.stream_runnable:
    574     # Use streaming to make sure that the underlying LLM is invoked in a
    575     # streaming
   (...)
    578     # Because the response from the plan is not a generator, we need to
    579     # accumulate the output into final output and return that.
--> 580     for chunk in self.runnable.stream(inputs, config={\"callbacks\": callbacks}):
    581         if final_output is None:
    582             final_output = chunk

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:3253, in RunnableSequence.stream(self, input, config, **kwargs)
   3247 def stream(
   3248     self,
   3249     input: Input,
   3250     config: Optional[RunnableConfig] = None,
   3251     **kwargs: Optional[Any],
   3252 ) -> Iterator[Output]:
-> 3253     yield from self.transform(iter([input]), config, **kwargs)

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:3240, in RunnableSequence.transform(self, input, config, **kwargs)
   3234 def transform(
   3235     self,
   3236     input: Iterator[Input],
   3237     config: Optional[RunnableConfig] = None,
   3238     **kwargs: Optional[Any],
   3239 ) -> Iterator[Output]:
-> 3240     yield from self._transform_stream_with_config(
   3241         input,
   3242         self._transform,
   3243         patch_config(config, run_name=(config or {}).get(\"run_name\") or self.name),
   3244         **kwargs,
   3245     )

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:2053, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
   2051 try:
   2052     while True:
-> 2053         chunk: Output = context.run(next, iterator)  # type: ignore
   2054         yield chunk
   2055         if final_output_supported:

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:3202, in RunnableSequence._transform(self, input, run_manager, config, **kwargs)
   3199     else:
   3200         final_pipeline = step.transform(final_pipeline, config)
-> 3202 for output in final_pipeline:
   3203     yield output

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:1271, in Runnable.transform(self, input, config, **kwargs)
   1268 final: Input
   1269 got_first_val = False
-> 1271 for ichunk in input:
   1272     # The default implementation of transform is to buffer input and
   1273     # then call stream.
   1274     # It'll attempt to gather all input into a single chunk using
   1275     # the `+` operator.
   1276     # If the input is not addable, then we'll assume that we can
   1277     # only operate on the last chunk,
   1278     # and we'll iterate until we get to the last chunk.
   1279     if not got_first_val:
   1280         final = ichunk

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:5264, in RunnableBindingBase.transform(self, input, config, **kwargs)
   5258 def transform(
   5259     self,
   5260     input: Iterator[Input],
   5261     config: Optional[RunnableConfig] = None,
   5262     **kwargs: Any,
   5263 ) -> Iterator[Output]:
-> 5264     yield from self.bound.transform(
   5265         input,
   5266         self._merge_configs(config),
   5267         **{**self.kwargs, **kwargs},
   5268     )

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:1289, in Runnable.transform(self, input, config, **kwargs)
   1286             final = ichunk
   1288 if got_first_val:
-> 1289     yield from self.stream(final, config, **kwargs)

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:365, in BaseChatModel.stream(self, input, config, stop, **kwargs)
    358 except BaseException as e:
    359     run_manager.on_llm_error(
    360         e,
    361         response=LLMResult(
    362             generations=[[generation]] if generation else []
    363         ),
    364     )
--> 365     raise e
    366 else:
    367     run_manager.on_llm_end(LLMResult(generations=[[generation]]))

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:345, in BaseChatModel.stream(self, input, config, stop, **kwargs)
    343 generation: Optional[ChatGenerationChunk] = None
    344 try:
--> 345     for chunk in self._stream(messages, stop=stop, **kwargs):
    346         if chunk.message.id is None:
    347             chunk.message.id = f\"run-{run_manager.run_id}\"

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:513, in BaseChatOpenAI._stream(self, messages, stop, run_manager, **kwargs)
    505 def _stream(
    506     self,
    507     messages: List[BaseMessage],
   (...)
    510     **kwargs: Any,
    511 ) -> Iterator[ChatGenerationChunk]:
    512     kwargs[\"stream\"] = True
--> 513     payload = self._get_request_payload(messages, stop=stop, **kwargs)
    514     default_chunk_class: Type[BaseMessageChunk] = AIMessageChunk
    515     if self.include_response_headers:

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:604, in BaseChatOpenAI._get_request_payload(self, input_, stop, **kwargs)
    601 if stop is not None:
    602     kwargs[\"stop\"] = stop
    603 return {
--> 604     \"messages\": [_convert_message_to_dict(m) for m in messages],
    605     **self._default_params,
    606     **kwargs,
    607 }

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:604, in <listcomp>(.0)
    601 if stop is not None:
    602     kwargs[\"stop\"] = stop
    603 return {
--> 604     \"messages\": [_convert_message_to_dict(m) for m in messages],
    605     **self._default_params,
    606     **kwargs,
    607 }

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:198, in _convert_message_to_dict(message)
    196     message_dict[\"function_call\"] = message.additional_kwargs[\"function_call\"]
    197 if message.tool_calls or message.invalid_tool_calls:
--> 198     message_dict[\"tool_calls\"] = [
    199         _lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
    200     ] + [
    201         _lc_invalid_tool_call_to_openai_tool_call(tc)
    202         for tc in message.invalid_tool_calls
    203     ]
    204 elif \"tool_calls\" in message.additional_kwargs:
    205     message_dict[\"tool_calls\"] = message.additional_kwargs[\"tool_calls\"]

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:199, in <listcomp>(.0)
    196     message_dict[\"function_call\"] = message.additional_kwargs[\"function_call\"]
    197 if message.tool_calls or message.invalid_tool_calls:
    198     message_dict[\"tool_calls\"] = [
--> 199         _lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
    200     ] + [
    201         _lc_invalid_tool_call_to_openai_tool_call(tc)
    202         for tc in message.invalid_tool_calls
    203     ]
    204 elif \"tool_calls\" in message.additional_kwargs:
    205     message_dict[\"tool_calls\"] = message.additional_kwargs[\"tool_calls\"]

File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:1777, in _lc_tool_call_to_openai_tool_call(tool_call)
   1771 def _lc_tool_call_to_openai_tool_call(tool_call: ToolCall) -> dict:
   1772     return {
   1773         \"type\": \"function\",
   1774         \"id\": tool_call[\"id\"],
   1775         \"function\": {
   1776             \"name\": tool_call[\"name\"],
-> 1777             \"arguments\": json.dumps(tool_call[\"args\"]),
   1778         },
   1779     }

File ~/.pyenv/versions/3.11.9/lib/python3.11/json/__init__.py:231, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
    226 # cached encoder
    227 if (not skipkeys and ensure_ascii and
    228     check_circular and allow_nan and
    229     cls is None and indent is None and separators is None and
    230     default is None and not sort_keys and not kw):
--> 231     return _default_encoder.encode(obj)
    232 if cls is None:
    233     cls = JSONEncoder

File ~/.pyenv/versions/3.11.9/lib/python3.11/json/encoder.py:200, in JSONEncoder.encode(self, o)
    196         return encode_basestring(o)
    197 # This doesn't pass the iterator directly to ''.join() because the
    198 # exceptions aren't as detailed.  The list call should be roughly
    199 # equivalent to the PySequence_Fast that ''.join() would do.
--> 200 chunks = self.iterencode(o, _one_shot=True)
    201 if not isinstance(chunks, (list, tuple)):
    202     chunks = list(chunks)

File ~/.pyenv/versions/3.11.9/lib/python3.11/json/encoder.py:258, in JSONEncoder.iterencode(self, o, _one_shot)
    253 else:
    254     _iterencode = _make_iterencode(
    255         markers, self.default, _encoder, self.indent, floatstr,
    256         self.key_separator, self.item_separator, self.sort_keys,
    257         self.skipkeys, _one_shot)
--> 258 return _iterencode(o, 0)

File ~/.pyenv/versions/3.11.9/lib/python3.11/json/encoder.py:180, in JSONEncoder.default(self, o)
    161 def default(self, o):
    162     \"\"\"Implement this method in a subclass such that it returns
    163     a serializable object for ``o``, or calls the base implementation
    164     (to raise a ``TypeError``).
   (...)
    178 
    179     \"\"\"
--> 180     raise TypeError(f'Object of type {o.__class__.__name__} '
    181                     f'is not JSON serializable')

TypeError: Object of type CallbackManagerForToolRun is not JSON serializable"
}

Description

I am trying to use the Bing search tool inside an AgentExecutor.
The search tool itself works fine, and even the agent works; the problem appears only when I run it through the AgentExecutor.
The same problem occurs when I use the Google search tool from the langchain-google-community package.
By contrast, the problem does not occur with DuckDuckGo.

from langchain_google_community import GoogleSearchAPIWrapper, GoogleSearchResults

google_tool = GoogleSearchResults(api_wrapper=GoogleSearchAPIWrapper())

System Info

python -m langchain_core.sys_info

System Information
------------------
> OS:  Linux
> OS Version:  #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version:  3.11.9 (main, Jun 27 2024, 21:37:40) [GCC 11.4.0]

Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_google_community: 1.0.7
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langserve: 0.2.2

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:

> langgraph

ghg1uchk1#

Same problem here!
I think this is a duplicate of #24621.


vohkndzv2#

It seems to be related to PR #24038, which adds run_manager to the tool args ( https://github.com/langchain-ai/langchain/blob/7dd6b32991e81582cb30588b84871af04ecdc76c/libs/core/langchain_core/tools.py#L603 ), making them non-serializable. Rolling back with pip install langchain-core==0.2.12 seems to fix it for me.
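A minimal stdlib sketch of the mechanism described above (my own illustration under assumptions, not the actual LangChain code): if a tool's argument schema is naively inferred from its run signature without excluding injected parameters, `run_manager` shows up as if it were a real tool argument:

```python
import inspect

# Hypothetical tool function whose signature includes the injected
# run_manager parameter, mimicking the tool-run signature after PR #24038.
def _run(query: str, run_manager=None):
    ...

# Naive schema inference from the signature picks up every parameter,
# including run_manager, which the model can then echo back inside
# tool_calls args, where it fails json.dumps.
inferred_args = list(inspect.signature(_run).parameters)
print(inferred_args)  # ['query', 'run_manager']
```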


vu8f3i0k3#

Hi @wulifu2hao, thanks for the help.
I dug into this through the callbacks and noticed exactly what you describe.
Specifically, in the AIMessageChunk created when the agent executor completes its first iteration:

  • With the Bing tool, the args of the tool_calls section contain a run_manager key holding a CallbackManagerForToolRun object, which is what triggers TypeError: Object of type CallbackManagerForToolRun is not JSON serializable
tool_calls=[{'name': 'bing_search_results_json', 'args': {'query': 'latest version of Angular', 'run_manager': <langchain_core.callbacks.manager.CallbackManagerForToolRun object at 0x7f4cce5178d0>}, 'id': 'call_ybDKJDawNevJNo2af5BILeLb', 'type': 'tool_call'}]
  • With the DuckDuckGo tool, the tool_calls section contains no run_manager key, which is why the error does not occur
tool_calls=[{'name': 'duckduckgo_results_json', 'args': {'query': 'latest version of Angular'}, 'id': 'call_ihqRw2Ifs1wYPNcWQhUR1mgC', 'type': 'tool_call'}]

I can also confirm that pinning langchain-core to langchain-core<=0.2.12 resolves the issue and the Bing tool works correctly.
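Besides pinning langchain-core, a defensive workaround is possible in user code (a sketch of my own, not an official fix): drop any value that cannot survive `json.dumps` from a tool call's args before they are serialized. `FakeRunManager` below is a stand-in for CallbackManagerForToolRun:

```python
import json

def sanitize_tool_args(args: dict) -> dict:
    """Return a copy of args keeping only JSON-serializable values.

    This drops a stray run manager under the 'run_manager' key while
    preserving real arguments such as 'query'.
    """
    clean = {}
    for key, value in args.items():
        try:
            json.dumps(value)
        except TypeError:
            continue  # skip non-serializable values such as run managers
        clean[key] = value
    return clean

class FakeRunManager:
    """Stand-in for CallbackManagerForToolRun (not JSON serializable)."""

args = {"query": "latest version of Angular", "run_manager": FakeRunManager()}
print(sanitize_tool_args(args))  # {'query': 'latest version of Angular'}
```

Applying this to each entry of `message.tool_calls` (e.g. in a custom callback or by post-processing intermediate steps) would avoid the crash, at the cost of silently discarding the injected key.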


ve7v8dk24#

Facing the same issue.
