Langchain SQL agent with Azure SQL and Azure OpenAI: invoke method returns Internal Server Error 500

ut6juiuv posted 3 months ago in Other

#### Error Message and Stack Trace (if applicable)

> Entering new SQL Agent Executor chain...
Traceback (most recent call last):
File "test.py", line 62, in
agent_executor.invoke(final_prompt.format(
File "/home/user/.local/lib/python3.8/site-packages/langchain/chains/base.py", line 166, in invoke
raise e
File "/home/user/.local/lib/python3.8/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1636, in _call
next_step_output = self._take_next_step(
File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in _take_next_step
[
File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in
[
File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1370, in _iter_next_step
output = self.agent.plan(
File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 463, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3251, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3238, in transform
yield from self._transform_stream_with_config(
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 2052, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/language_models

openai.InternalServerError: Error code: 500 - {'statusCode': 500, 'message': 'Internal server error', 'activityId': 'xxx-yyy-zzz'}

#### Description

* I am trying to use Langchain to query Azure SQL through Azure OpenAI
* The code is based on the sample provided on GitHub - [Langchain to query Azure SQL using Azure OpenAI](https://github.com/Azure-Samples/SQL-AI-samples/blob/main/AzureSQLDatabase/LangChain/dbOpenAI.ipynb); a minimal sketch of that setup follows this list
* The expected result is a response returned iteratively with Action, Observation, and Thought steps
* The actual result is an error: Internal Server Error, 500. The full error log is shown above.
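
A minimal sketch of the kind of setup the sample uses, with placeholder values throughout (the connection string, deployment name, API version, and test question below are illustrative, not taken from this report; AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are assumed to be set in the environment):

from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.utilities import SQLDatabase
from langchain_openai import AzureChatOpenAI

# Placeholder Azure SQL connection string - replace with real server/database details.
db = SQLDatabase.from_uri(
    "mssql+pyodbc://user:password@server.database.windows.net/dbname"
    "?driver=ODBC+Driver+18+for+SQL+Server"
)

# Placeholder deployment name and API version.
llm = AzureChatOpenAI(azure_deployment="gpt-4", api_version="2024-02-01", temperature=0)

agent_executor = create_sql_agent(llm=llm, db=db, agent_type="openai-tools", verbose=True)
agent_executor.invoke({"input": "How many tables are in the database?"})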

#### System Info

### Langchain Version

langchain==0.2.10
langchain-community==0.2.9
langchain-core==0.2.22
langchain-openai==0.1.16
langchain-text-splitters==0.2.2

### Platform

Windows 11

### Python Version

Python 3.8.10

sbdsn5lh1#

Exact same issue. It worked fine last week, but when I tried to test it this week I got a 500 Internal Server Error. I tried our chat completions agent and the completions agent in standalone mode, and both work fine. The error only appears when we integrate the SQL agent and put everything together.


lkaoscv72#

Same here.


px9o7tmv3#

24488 (comment)

I'm running into the exact same issue. I tested my chat LLM and database LLM separately, and both work fine. But when I combine them into an agent chain, we hit the error

openai.InternalServerError: Error code: 500 - {'statusCode': 500, 'message': 'Internal server error', 'activityId': 'xxx-xxx-xxx-xxx'}

We followed all the API documentation on how to build a SQL agent, and everything ran smoothly until last Thursday (July 25, 2024).

from typing import List

from langchain.agents import AgentExecutor, create_react_agent
from langchain.memory import ConversationSummaryBufferMemory
from langchain_core.prompts import PromptTemplate
from langchain_openai import AzureChatOpenAI

# ToolType is the poster's own alias for the tool classes being passed in (not shown here).

def init_langchain_client(
    chat: AzureChatOpenAI,
    memory: ConversationSummaryBufferMemory,
    suffix: str,
    tools: List[ToolType],
    system_prefix: str,
) -> AgentExecutor:
    """
    Initialise the Langchain client with necessary configurations.

    Returns
    -------
    AgentExecutor
        The agent executor configured with SQL database tools and Azure OpenAI models.
    """
    # Build a ReAct-style agent from the combined system prompt.
    prompt = PromptTemplate.from_template(system_prefix + suffix)
    agent = create_react_agent(
        llm=chat,
        tools=tools,
        prompt=prompt,
    )
    agent_chain = AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        verbose=True,
        memory=memory,
        handle_parsing_errors=True,
        return_intermediate_steps=True,
    )

    return agent_chain

langchain_agent_chain = init_langchain_client(
    chat=chat,
    memory=memory,
    suffix=SUFFIX_REACT,
    tools=tools,
    system_prefix=system_prefix,
)
langchain_agent_chain.invoke({"input":"Who are you?"})
---------------------------------------------------------------------------
InternalServerError                       Traceback (most recent call last)
Cell In[4], line 1
----> 1 langchain_agent_chain.invoke({"input":"Who are you?"})

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    162     run_manager.on_chain_error(e)
--> 163     raise e
    164 run_manager.on_chain_end(outputs)
    166 if include_run_info:

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
    150 try:
    151     self._validate_inputs(inputs)
    152     outputs = (
--> 153         self._call(inputs, run_manager=run_manager)
    154         if new_arg_supported
    155         else self._call(inputs)
    156     )
    158     final_outputs: Dict[str, Any] = self.prep_outputs(
    159         inputs, outputs, return_only_outputs
    160     )
    161 except BaseException as e:

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/agents/agent.py:1432, in AgentExecutor._call(self, inputs, run_manager)
   1430 # We now enter the agent loop (until it returns something).
   1431 while self._should_continue(iterations, time_elapsed):
-> 1432     next_step_output = self._take_next_step(
   1433         name_to_tool_map,
   1434         color_mapping,
   1435         inputs,
   1436         intermediate_steps,
   1437         run_manager=run_manager,
   1438     )
   1439     if isinstance(next_step_output, AgentFinish):
   1440         return self._return(
   1441             next_step_output, intermediate_steps, run_manager=run_manager
   1442         )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/agents/agent.py:1138, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1129 def _take_next_step(
   1130     self,
   1131     name_to_tool_map: Dict[str, BaseTool],
   (...)
   1135     run_manager: Optional[CallbackManagerForChainRun] = None,
   1136 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1137     return self._consume_next_step(
-> 1138         [
   1139             a
   1140             for a in self._iter_next_step(
   1141                 name_to_tool_map,
   1142                 color_mapping,
   1143                 inputs,
   1144                 intermediate_steps,
   1145                 run_manager,
   1146             )
   1147         ]
   1148     )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/agents/agent.py:1166, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1163     intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
   1165     # Call the LLM to see what to do.
-> 1166     output = self.agent.plan(
   1167         intermediate_steps,
   1168         callbacks=run_manager.get_child() if run_manager else None,
   1169         **inputs,
   1170     )
   1171 except OutputParserException as e:
   1172     if isinstance(self.handle_parsing_errors, bool):

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain/agents/agent.py:397, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
    389 final_output: Any = None
    390 if self.stream_runnable:
    391     # Use streaming to make sure that the underlying LLM is invoked in a
    392     # streaming
   (...)
    395     # Because the response from the plan is not a generator, we need to
    396     # accumulate the output into final output and return that.
--> 397     for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
    398         if final_output is None:
    399             final_output = chunk

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:2875, in RunnableSequence.stream(self, input, config, **kwargs)
   2869 def stream(
   2870     self,
   2871     input: Input,
   2872     config: Optional[RunnableConfig] = None,
   2873     **kwargs: Optional[Any],
   2874 ) -> Iterator[Output]:
-> 2875     yield from self.transform(iter([input]), config, **kwargs)

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:2862, in RunnableSequence.transform(self, input, config, **kwargs)
   2856 def transform(
   2857     self,
   2858     input: Iterator[Input],
   2859     config: Optional[RunnableConfig] = None,
   2860     **kwargs: Optional[Any],
   2861 ) -> Iterator[Output]:
-> 2862     yield from self._transform_stream_with_config(
   2863         input,
   2864         self._transform,
   2865         patch_config(config, run_name=(config or {}).get("run_name") or self.name),
   2866         **kwargs,
   2867     )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:1881, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
   1879 try:
   1880     while True:
-> 1881         chunk: Output = context.run(next, iterator)  # type: ignore
   1882         yield chunk
   1883         if final_output_supported:

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:2826, in RunnableSequence._transform(self, input, run_manager, config)
   2817 for step in steps:
   2818     final_pipeline = step.transform(
   2819         final_pipeline,
   2820         patch_config(
   (...)
   2823         ),
   2824     )
-> 2826 for output in final_pipeline:
   2827     yield output

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:1282, in Runnable.transform(self, input, config, **kwargs)
   1279 final: Input
   1280 got_first_val = False
-> 1282 for ichunk in input:
   1283     # The default implementation of transform is to buffer input and
   1284     # then call stream.
   1285     # It'll attempt to gather all input into a single chunk using
   1286     # the `+` operator.
   1287     # If the input is not addable, then we'll assume that we can
   1288     # only operate on the last chunk,
   1289     # and we'll iterate until we get to the last chunk.
   1290     if not got_first_val:
   1291         final = ichunk

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:4736, in RunnableBindingBase.transform(self, input, config, **kwargs)
   4730 def transform(
   4731     self,
   4732     input: Iterator[Input],
   4733     config: Optional[RunnableConfig] = None,
   4734     **kwargs: Any,
   4735 ) -> Iterator[Output]:
-> 4736     yield from self.bound.transform(
   4737         input,
   4738         self._merge_configs(config),
   4739         **{**self.kwargs, **kwargs},
   4740     )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/runnables/base.py:1300, in Runnable.transform(self, input, config, **kwargs)
   1297             final = ichunk
   1299 if got_first_val:
-> 1300     yield from self.stream(final, config, **kwargs)

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:249, in BaseChatModel.stream(self, input, config, stop, **kwargs)
    242 except BaseException as e:
    243     run_manager.on_llm_error(
    244         e,
     245         response=LLMResult(
    246             generations=[[generation]] if generation else []
    247         ),
    248     )
--> 249     raise e
    250 else:
    251     run_manager.on_llm_end(LLMResult(generations=[[generation]]))

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:229, in BaseChatModel.stream(self, input, config, stop, **kwargs)
    227 generation: Optional[ChatGenerationChunk] = None
    228 try:
--> 229     for chunk in self._stream(messages, stop=stop, **kwargs):
    230         if chunk.message.id is None:
    231             chunk.message.id = f"run-{run_manager.run_id}"

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:480, in BaseChatOpenAI._stream(self, messages, stop, run_manager, **kwargs)
    477 params = {**params, **kwargs, "stream": True}
    479 default_chunk_class = AIMessageChunk
--> 480 with self.client.create(messages=message_dicts, **params) as response:
    481     for chunk in response:
    482         if not isinstance(chunk, dict):

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_utils/_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    275             msg = f"Missing required argument: {quote(missing[0])}"
    276     raise TypeError(msg)
--> 277 return func(*args, **kwargs)

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/resources/chat/completions.py:643, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, parallel_tool_calls, presence_penalty, response_format, seed, service_tier, stop, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
    609 @required_args(["messages", "model"], ["messages", "model", "stream"])
    610 def create(
    611     self,
   (...)
    641     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    642 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 643     return self._post(
    644         "/chat/completions",
    645         body=maybe_transform(
    646             {
    647                 "messages": messages,
    648                 "model": model,
    649                 "frequency_penalty": frequency_penalty,
    650                 "function_call": function_call,
    651                 "functions": functions,
    652                 "logit_bias": logit_bias,
    653                 "logprobs": logprobs,
    654                 "max_tokens": max_tokens,
    655                 "n": n,
    656                 "parallel_tool_calls": parallel_tool_calls,
    657                 "presence_penalty": presence_penalty,
    658                 "response_format": response_format,
    659                 "seed": seed,
    660                 "service_tier": service_tier,
    661                 "stop": stop,
    662                 "stream": stream,
    663                 "stream_options": stream_options,
    664                 "temperature": temperature,
    665                 "tool_choice": tool_choice,
    666                 "tools": tools,
    667                 "top_logprobs": top_logprobs,
    668                 "top_p": top_p,
    669                 "user": user,
    670             },
    671             completion_create_params.CompletionCreateParams,
    672         ),
    673         options=make_request_options(
    674             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    675         ),
    676         cast_to=ChatCompletion,
    677         stream=stream or False,
    678         stream_cls=Stream[ChatCompletionChunk],
    679     )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1250, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1236 def post(
   1237     self,
   1238     path: str,
   (...)
   1245     stream_cls: type[_StreamT] | None = None,
   1246 ) -> ResponseT | _StreamT:
   1247     opts = FinalRequestOptions.construct(
   1248         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1249     )
-> 1250     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:931, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    922 def request(
    923     self,
    924     cast_to: Type[ResponseT],
   (...)
    929     stream_cls: type[_StreamT] | None = None,
    930 ) -> ResponseT | _StreamT:
--> 931     return self._request(
    932         cast_to=cast_to,
    933         options=options,
    934         stream=stream,
    935         stream_cls=stream_cls,
    936         remaining_retries=remaining_retries,
    937     )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1015, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
   1013 if retries > 0 and self._should_retry(err.response):
   1014     err.response.close()
-> 1015     return self._retry_request(
   1016         options,
   1017         cast_to,
   1018         retries,
   1019         err.response.headers,
   1020         stream=stream,
   1021         stream_cls=stream_cls,
   1022     )
   1024 # If the response is streamed then we need to explicitly read the response
   1025 # to completion before attempting to access the response text.
   1026 if not err.response.is_closed:

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1063, in SyncAPIClient._retry_request(self, options, cast_to, remaining_retries, response_headers, stream, stream_cls)
   1059 # In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a
   1060 # different thread if necessary.
   1061 time.sleep(timeout)
-> 1063 return self._request(
   1064     options=options,
   1065     cast_to=cast_to,
   1066     remaining_retries=remaining,
   1067     stream=stream,
   1068     stream_cls=stream_cls,
   1069 )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1015, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
   1013 if retries > 0 and self._should_retry(err.response):
   1014     err.response.close()
-> 1015     return self._retry_request(
   1016         options,
   1017         cast_to,
   1018         retries,
   1019         err.response.headers,
   1020         stream=stream,
   1021         stream_cls=stream_cls,
   1022     )
   1024 # If the response is streamed then we need to explicitly read the response
   1025 # to completion before attempting to access the response text.
   1026 if not err.response.is_closed:

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1063, in SyncAPIClient._retry_request(self, options, cast_to, remaining_retries, response_headers, stream, stream_cls)
   1059 # In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a
   1060 # different thread if necessary.
   1061 time.sleep(timeout)
-> 1063 return self._request(
   1064     options=options,
   1065     cast_to=cast_to,
   1066     remaining_retries=remaining,
   1067     stream=stream,
   1068     stream_cls=stream_cls,
   1069 )

File ~/xxx/xxx/virtualenvs/xxx/lib/python3.12/site-packages/openai/_base_client.py:1030, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
   1027         err.response.read()
   1029     log.debug("Re-raising status error")
-> 1030     raise self._make_status_error_from_response(err.response) from None
   1032 return self._process_response(
   1033     cast_to=cast_to,
   1034     options=options,
   (...)
   1037     stream_cls=stream_cls,
   1038 )

openai.InternalServerError: Error code: 500 - {'statusCode': 500, 'message': 'Internal server error', 'activityId': 'xxx-xxx-xxx-xxx'}

izj3ouym4#

Hi, is there any update on this issue that you can share?


moiiocjp5#

I don't see a 'streaming' parameter in the create_sql_agent() method. Please try removing it and check again.
Reference: libs/community/langchain_community/agent_toolkits/sql/base.py.
Also, re-check connectivity by running the following; this will help verify the connection and query execution (see the sketch after this list):

1. db.run()
2. llm(<user_query>)
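
A minimal sketch of those two checks, assuming db is the SQLDatabase instance and llm is the AzureChatOpenAI instance from the setup above (the test query is a placeholder, and llm.invoke(...) is used in place of the older llm(<user_query>) call style):

# Placeholder read-only query - any cheap statement works for a connectivity check.
print(db.run("SELECT TOP 5 name FROM sys.tables"))

# Direct model call, bypassing the agent, to confirm the Azure OpenAI deployment responds.
print(llm.invoke("Reply with the single word OK.").content)

If both succeed, the failure is isolated to the agent layer rather than the database or model connections.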
