LangChain AgentExecutor throws: assert generation is not None

wlzqhblo · posted 2 months ago in: Other

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar issue and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example code

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.prompts import ChatPromptTemplate

@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int

@tool
def add(first_int: int, second_int: int) -> int:
    """Add two integers."""
    return first_int + second_int

tools = [multiply, add]

if __name__ == '__main__':

    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are a helpful assistant"),
            ("placeholder", "{chat_history}"),
            ("human", "{input}"),
            ("placeholder", "{agent_scratchpad}"),
        ]
    )

    llm = ChatOpenAI(...)

    # Note: create_tool_calling_agent binds the tools itself,
    # so this separate bind_tools call is unused.
    bind_tools = llm.bind_tools(tools)

    calling_agent = create_tool_calling_agent(llm, tools, prompt)

    agent_executor = AgentExecutor(agent=calling_agent, tools=tools, verbose=True)

    response = agent_executor.invoke({
        "input": "what is the value of multiply(5, 42)?",
    })
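For orientation, the step the executor performs once the model emits a tool call can be sketched in plain Python. This is a simplified illustration, not LangChain's actual internals; the names `TOOLS` and `dispatch` are made up for this sketch:

```python
# The model returns a tool name plus arguments; the executor looks the
# tool up in a registry and invokes it with those arguments.
def multiply(first_int: int, second_int: int) -> int:
    return first_int * second_int

def add(first_int: int, second_int: int) -> int:
    return first_int + second_int

# Registry mapping tool names to callables (illustrative only).
TOOLS = {"multiply": multiply, "add": add}

def dispatch(tool_name: str, tool_args: dict) -> int:
    """Look up a tool by name and call it with the model-provided args."""
    return TOOLS[tool_name](**tool_args)

print(dispatch("multiply", {"first_int": 5, "second_int": 42}))  # → 210
```

The error in this issue occurs before this dispatch step ever runs: the agent's `plan` call fails while streaming the model output.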

Error message and stack trace (if applicable)

Traceback (most recent call last):
File "E:\PycharmProjects\agent-tool-demo\main.py", line 61, in
stream = agent_executor.invoke({
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\chains\base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1433, in _call
next_step_output = self._take_next_step(
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1139, in _take_next_step
[
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1139, in
[
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1167, in _iter_next_step
output = self.agent.plan(
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 515, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 2775, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 2762, in transform
yield from self._transform_stream_with_config(
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 1778, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 2726, in _transform
for output in final_pipeline:
File "E:\conda ... (traceback truncated)

gkl3eglg 1#

This appears to have been fixed in langchain 0.2.2.

xv8emn3q 2#

This appears to have been fixed in langchain 0.2.2.
No, I upgraded to 0.2.3 and ran the official example, and it still fails with "assert generation is not None".

from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

if __name__ == '__main__':
    llm = ChatOpenAI(...)

    @tool
    def magic_function(input: int) -> int:
        """Applies a magic function to an input."""
        return input + 2

    tools = [magic_function]

    query = "what is the value of magic_function(3)?"

    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are a helpful assistant"),
            ("human", "{input}"),
            # Placeholders fill up a **list** of messages
            ("placeholder", "{agent_scratchpad}"),
        ]
    )

    agent = create_tool_calling_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools)

    agent_executor.invoke({"input": query})

Relevant versions

langchain==0.2.3
langchain-community==0.2.3
langchain-core==0.2.3
langchain-openai==0.1.8
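When reporting version mismatches like this, a quick stdlib-only check confirms which versions are actually installed in the failing environment (package names taken from the list above; `installed_version` is a helper written for this sketch):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str) -> str:
    """Return the installed version of pkg, or 'not installed'."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return "not installed"

for pkg in ("langchain", "langchain-community",
            "langchain-core", "langchain-openai"):
    print(f"{pkg}=={installed_version(pkg)}")
```

This helps rule out a stale package left behind by a partial upgrade, which is a common cause of "upgraded but still broken" reports.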
anauzrmj 3#

I can't seem to reproduce the error, even after upgrading to 0.2.3. How are you creating the llm? Also, could you clarify what you mean by the "official example"?

llycmphe 4#

Hi, I don't quite understand your question. Could you provide more context or explain it further? If you need help creating the llm, I can share some resources and tutorials.
