Checked other resources
- I added a very descriptive title to this issue.
- I searched the LangGraph/LangChain documentation with the integrated search.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangGraph/LangChain rather than my code.
- I am sure this is better as an issue rather than a GitHub discussion, since this is a LangGraph bug and not a design question.
Example Code
```python
import getpass

api_endpoint = getpass.getpass("API Endpoint")
api_key = getpass.getpass("API Key")

from datetime import datetime

from langchain.tools import tool
from langchain_core.messages import HumanMessage
from langchain_openai import AzureChatOpenAI
from langgraph.graph import END, MessageGraph


@tool
def file_saver(text: str) -> str:
    """Persist the given string to disk"""
    # Stub: the tool body is irrelevant here, since the graph ends before any tool runs.
    pass


model = AzureChatOpenAI(
    deployment_name="cogdep-gpt-4o",
    model_name="gpt-4o",
    azure_endpoint=api_endpoint,
    openai_api_key=api_key,
    openai_api_type="azure",
    openai_api_version="2024-05-01-preview",
    streaming=True,
    temperature=0.1,
)

tools = [file_saver]
model = model.bind_tools(tools)


def get_agent_executor():
    def should_continue(messages):
        print(f"{datetime.now()}: Starting should_continue")
        return "end"

    async def call_model(messages):
        response = await model.ainvoke(messages)
        return response

    workflow = MessageGraph()
    workflow.add_node("agent", call_model)
    workflow.set_entry_point("agent")
    workflow.add_conditional_edges(
        "agent",
        should_continue,
        {
            "end": END,
        },
    )
    return workflow.compile()


agent_executor = get_agent_executor()

messages = [HumanMessage(content="Think of a poem with 100 verses and save it to a file. Do not print it to me first.")]

async def run():
    async for event in agent_executor.astream_events(messages, version="v1"):
        kind = event["event"]
        print(f"{datetime.now()}: Received event: {kind}")

await run()
```
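Note that the trailing `await run()` assumes a notebook or IPython session, where top-level `await` is allowed. In a plain script, the equivalent would be:

```python
import asyncio

asyncio.run(run())
```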
Error Message and Stack Trace (if applicable)
This is part of the output (in this case, there is a 23 s gap between `on_chat_model_stream` and `on_chat_model_end`):

```
(...)
2024-07-09 05:29:35.705573: Received event: on_chat_model_stream
2024-07-09 05:29:35.713679: Received event: on_chat_model_stream
2024-07-09 05:29:35.724480: Received event: on_chat_model_stream
2024-07-09 05:29:35.753143: Received event: on_chat_model_stream
2024-07-09 05:29:58.571740: Received event: on_chat_model_end
2024-07-09 05:29:58.574671: Received event: on_chain_start
2024-07-09 05:29:58.576026: Received event: on_chain_end
2024-07-09 05:29:58.577963: Received event: on_chain_start
2024-07-09 05:29:58.578214: Starting should_continue
```
Description
Hi!

We noticed that when we receive an LLM answer containing a tool call with a large amount of data in its arguments, our program blocks even though we are using the async version. My guess is that the final message is assembled after the last chunk has been streamed, and that this takes some time on the CPU? Also, is there a different approach we could use?
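A minimal sketch of what I suspect is happening (my assumption; the `ToolCallChunk` fragments below are synthetic, standing in for a streamed tool call with large arguments): the streamed chunks are merged synchronously with `+`, and every merge re-validates the accumulated tool-call JSON, so the CPU cost grows with the argument size and the event loop never gets a chance to run:

```python
import time

from langchain_core.messages import AIMessageChunk
from langchain_core.messages.tool import ToolCallChunk


def fragment(args: str) -> AIMessageChunk:
    # One streamed chunk carrying a piece of the tool call's JSON arguments.
    return AIMessageChunk(
        content="",
        tool_call_chunks=[ToolCallChunk(name=None, args=args, id=None, index=0)],
    )


chunks = [fragment('{"text": "')] + [fragment("verse ") for _ in range(2000)]

start = time.perf_counter()
merged = chunks[0]
for chunk in chunks[1:]:
    merged = merged + chunk  # synchronous; re-parses the accumulated partial JSON
print(f"Merged {len(chunks)} chunks in {time.perf_counter() - start:.2f}s of CPU time")
```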
Many thanks!
System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT Thu Nov 16 10:49:20 UTC 2023
> Python Version: 3.11.6 | packaged by conda-forge | (main, Oct 3 2023, 11:57:02) [GCC 12.3.0]

Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langsmith: 0.1.84
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.5

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
```
2 Answers

vsmadaxz #1
Thanks for flagging. I was able to reproduce the latency.
It looks like we're spending an excessive amount of time parsing JSON values, which is something we can improve (I believe by avoiding duplicate work, though I haven't dug into it yet). Thanks for bringing this to our attention.
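A hypothetical micro-benchmark of that duplicate work (a sketch, not the attached profile or the repro script below; it assumes the cost comes from re-parsing an ever-longer prefix of the tool-call arguments on each streamed chunk):

```python
import time

from langchain_core.utils.json import parse_partial_json

args = '{"text": "'
start = time.perf_counter()
for _ in range(2000):
    args += "verse "          # one more streamed fragment arrives
    parse_partial_json(args)  # parse is redone from scratch on the longer prefix
print(f"{time.perf_counter() - start:.2f}s spent re-parsing partial JSON")
```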
Summary stats of the profiled run above.
cProfile files attached:
profiles.tgz
Script to reproduce:
jchrr9hc #2

Moving this over to the langchain repo, since the underlying issue still exists there. Will flag it with the team.