langchain component ChatZhipuAI missing tool-calling functionality

nue99wik · posted 6 months ago in Other
Follow (0) | Answers (8) | Views (112)

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import os

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import HumanMessage

# Environment variables
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "balabala"
os.environ["ZHIPUAI_API_KEY"] = "balabala"
os.environ["TAVILY_API_KEY"] = "balabala"

llm = ChatZhipuAI(model="glm-4")

search = TavilySearchResults(max_results=2)
tools = [search]

llm_with_tools = llm.bind_tools(tools)
response = llm_with_tools.invoke([HumanMessage(content="hello")])
print(response.content)

Error Message and Stack Trace (if applicable)

Exception has occurred: NotImplementedError
exception: no description
File "E:\GitHub\langchain\1\agent_1.py", line 33, in
    llm_with_tools = llm.bind_tools(tools)
                     ^^^^^^^^^^^^^^^^^^^^^
NotImplementedError:

Description

The code of `BaseChatModel.bind_tools` in `langchain_core\language_models\chat_models.py` is incomplete, as shown below.

def bind_tools(
        self,
        tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, BaseMessage]:
        raise NotImplementedError()

According to ChatOpenAI, the code should be:

def bind_tools(
        self,
        tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
        *,
        tool_choice: Optional[
            Union[dict, str, Literal["auto", "none", "required", "any"], bool]
        ] = None,
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, BaseMessage]:
        """Bind tool-like objects to this chat model.

        Assumes model is compatible with OpenAI tool-calling API.

        Args:
            tools: A list of tool definitions to bind to this chat model.
                Can be a dictionary, pydantic model, callable, or BaseTool. Pydantic
                models, callables, and BaseTools will be automatically converted to
                their schema dictionary representation.
            tool_choice: Which tool to require the model to call.
                Options are:
                    name of the tool (str): calls corresponding tool;
                    "auto": automatically selects a tool (including no tool);
                    "none": does not call a tool;
                    "any" or "required": force at least one tool to be called;
                    True: forces tool call (requires `tools` be length 1);
                    False: no effect;
                    or a dict of the form:
                    {"type": "function", "function": {"name": <<tool_name>>}}.
            **kwargs: Any additional parameters to pass to the
                :class:`~langchain.runnable.Runnable` constructor.
        """

        formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
        if tool_choice:
            if isinstance(tool_choice, str):
                # tool_choice is a tool/function name
                if tool_choice not in ("auto", "none", "any", "required"):
                    tool_choice = {
                        "type": "function",
                        "function": {"name": tool_choice},
                    }
                # 'any' is not natively supported by OpenAI API.
                # We support 'any' since other models use this instead of 'required'.
                if tool_choice == "any":
                    tool_choice = "required"
            elif isinstance(tool_choice, bool):
                tool_choice = "required"
            elif isinstance(tool_choice, dict):
                tool_names = [
                    formatted_tool["function"]["name"]
                    for formatted_tool in formatted_tools
                ]
                if not any(
                    tool_name == tool_choice["function"]["name"]
                    for tool_name in tool_names
                ):
                    raise ValueError(
                        f"Tool choice {tool_choice} was specified, but the only "
                        f"provided tools were {tool_names}."
                    )
            else:
                raise ValueError(
                    f"Unrecognized tool_choice type. Expected str, bool or dict. "
                    f"Received: {tool_choice}"
                )
            kwargs["tool_choice"] = tool_choice
        return super().bind(tools=formatted_tools, **kwargs)

After replacing this part, the code runs correctly.
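To make the `tool_choice` branches in the replacement above easier to follow, here is the same normalization logic isolated as a pure function. This is a minimal sketch; the helper name `normalize_tool_choice` is my own and is not part of LangChain.

```python
def normalize_tool_choice(tool_choice, tool_names):
    """Normalize a tool_choice value into the OpenAI-style API shape.

    Mirrors the branches in bind_tools: a plain tool name becomes a
    {"type": "function", ...} dict, "any" and True become "required",
    and dict values are validated against the bound tool names.
    """
    if isinstance(tool_choice, str):
        if tool_choice not in ("auto", "none", "any", "required"):
            # tool_choice is a tool/function name
            return {"type": "function", "function": {"name": tool_choice}}
        # 'any' is not natively supported by the OpenAI API
        return "required" if tool_choice == "any" else tool_choice
    if isinstance(tool_choice, bool):
        # True forces a tool call; False is filtered out before this point
        return "required"
    if isinstance(tool_choice, dict):
        if tool_choice["function"]["name"] not in tool_names:
            raise ValueError(
                f"Tool choice {tool_choice} was specified, but the only "
                f"provided tools were {tool_names}."
            )
        return tool_choice
    raise ValueError(
        f"Unrecognized tool_choice type. Expected str, bool or dict. "
        f"Received: {tool_choice}"
    )
```

For example, `normalize_tool_choice("my_search", ["my_search"])` yields the function dict, while `normalize_tool_choice("any", ["my_search"])` yields `"required"`.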

System Info

Python 3.12.3
langchain==0.2.6
langchain-chroma==0.1.2
langchain-community==0.2.6
langchain-core==0.2.10
langchain-huggingface==0.0.3
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
langserve==0.2.2
langsmith==0.1.82

kyks70gy 1#

@boli-design You're right; according to their official documentation, ChatZhipuAI supports tool calling. I will add this currently missing feature.

z5btuh9x 2#

In addition, I found another related error. The code is as follows:

import os
import getpass
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import HumanMessage
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

# Environment variables
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "balabala"
os.environ["ZHIPUAI_API_KEY"] = "balabala"
os.environ["TAVILY_API_KEY"] = "balabala"

llm_1 = ChatZhipuAI(model="glm-4")

llm_2 = ChatOpenAI(
    temperature=0.95,
    model="glm-4",
    openai_api_key="balabala",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/",
)

search = TavilySearchResults(max_results=2)
tools = [search]

agent_1 = create_react_agent(llm_1, tools)
agent_2 = create_react_agent(llm_2, tools)

response = agent_2.invoke(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}
)
print("agent_2", response["messages"])

# here the error occurs
response = agent_1.invoke(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}
)
print("agent_1", response["messages"])

Error message:

Exception has occurred: TypeError
Got unknown type 'ToolMessage'.
File "E:\GitHub\langchain\1\test.py", line 37, in 
 response = agent_1.invoke(
 ^^^^^^^^^^^^^^^
TypeError: Got unknown type 'ToolMessage'.

No error occurs when using ChatOpenAI. I couldn't find a way to fix this; perhaps it is better to call other models through ChatOpenAI?

hiz5n14c 3#

Hello, has this problem been solved? I tried using bind_tools with ChatZhipuAI and ran into the same error. Updating langchain_community to the latest version does not solve it either. Thanks.

eh57zj3b 4#

I suggest using ChatOpenAI instead; it can also call the ZhipuAI LLM.
> Hello, has this problem been solved? I also tried using bind_tools with ChatZhipuAI but still ran into the same error. Even updating langchain_community to the latest version does not solve it. Thanks.

but5z9lq 5#

Thank you for your suggestion. Your idea is to add openai_api_base when defining the LLM, so that GLM-4 can be called through ChatOpenAI. Is that correct?

lqfhib0f 6#

Yes, and thanks for your suggestion: the idea is to add openai_api_base when defining the LLM, so that GLM-4 can be called through ChatOpenAI.

hl0ma9xz 7#

I just found the instructions on Zhipu AI's website. Thanks.

u5rb5r59 8#

> Thank you for your suggestion. Your suggestion is to add openai_api_base when defining the LLM, so that GLM-4 can be called through ChatOpenAI. Is that correct?

Yes.
I also found that ChatOpenAI only supports GLM-4 temperatures down to 0.01; any temperature below 0.01 causes an error, and I don't know why.
