promptflow [BUG] "(TypeError) expected string or buffer" when batch-running a flow with a langchain python node

olhwl3o2  posted 2 months ago  in Python

Describe the bug

When batch-running a flow that contains a langchain python node, every line fails at that node with the following error:

pf.run(
    flow=flow_path,
    data=data_path,  # data.jsonl path
)

# The run fails with `(TypeError) expected string or buffer` at the langchain node.
# With 20 lines in data.jsonl, I see 20/20 failed, with that same error for each line.

If I run the flow with a single input json, it works fine.

pf.test(flow=flow_path, inputs=one_line_of_data_jsonl)

# flow runs without error
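For reference, a single line of data.jsonl can be turned into the `inputs` dict for `pf.test` like this (a minimal sketch; `data_path` and the record's field names are assumed to match the flow's inputs):

```python
import json

def first_line_inputs(data_path: str) -> dict:
    """Parse the first record of a .jsonl file into an inputs dict for pf.test."""
    with open(data_path, encoding="utf-8") as f:
        return json.loads(f.readline())

# one_line_of_data_jsonl = first_line_inputs(data_path)
# pf.test(flow=flow_path, inputs=one_line_of_data_jsonl)
```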

How To Reproduce the bug

Steps to reproduce the behavior, and how frequently you can experience the bug:

  1. Create a langchain python node in a flow
  2. Prepare a data.jsonl for testing
  3. Start a batch run of the flow
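Step 2 above can be sketched as follows; the field name (`question`) is an assumption and must match the flow's actual inputs:

```python
import json

# Hypothetical records; replace the fields with your flow's real inputs.
rows = [
    {"question": "Who are Alice's friends?"},
    {"question": "What items does Bob own?"},
]

with open("data.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")  # one JSON object per line
```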

Expected behavior

A flow with a langchain node should run without errors.

Screenshots

line_number: 1
        error:
          message: "Execution failure in 'langchain_node': (TypeError) expected string
            or buffer"
          messageFormat: Execution failure in '{node_name}'.
          messageParameters:
            node_name: langchain_node
          referenceCode: Tool/__pf_main__
          code: UserError
          innerError:
            code: ToolExecutionError
            innerError:
          additionalInfo:
          - type: ToolExecutionErrorDetails
            info:
              type: TypeError
              message: expected string or buffer
              traceback: "Traceback (most recent call last):\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/_trace.py\"\
                , line 380, in wrapped\n    output = func(*args, **kwargs)\n  File
                \"/home/junkimin/rag-patterns/flows/relational_data/langchain_node.py\"\
                , line 71, in langchain_tool\n    res = agent_executor.invoke({\n\
                \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/chains/base.py\"\
                , line 163, in invoke\n    raise e\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/chains/base.py\"\
                , line 153, in invoke\n    self._call(inputs, run_manager=run_manager)\n\
                \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/agents/agent.py\"\
                , line 1432, in _call\n    next_step_output = self._take_next_step(\n\
                \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/agents/agent.py\"\
                , line 1138, in _take_next_step\n    [\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/agents/agent.py\"\
                , line 1138, in <listcomp>\n    [\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/agents/agent.py\"\
                , line 1166, in _iter_next_step\n    output = self.agent.plan(\n \
                \ File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/agents/agent.py\"\
                , line 397, in plan\n    for chunk in self.runnable.stream(inputs,
                config={\"callbacks\": callbacks}):\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 2685, in stream\n    yield from self.transform(iter([input]),
                config, **kwargs)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 2672, in transform\n    yield from self._transform_stream_with_config(\n\
                \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 1743, in _transform_stream_with_config\n    chunk: Output =
                context.run(next, iterator)  # type: ignore\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 2636, in _transform\n    for output in final_pipeline:\n  File
                \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 1209, in transform\n    for chunk in input:\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 4532, in transform\n    yield from self.bound.transform(\n\
                \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 1226, in transform\n    yield from self.stream(final, config,
                **kwargs)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py\"\
                , line 239, in stream\n    raise e\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py\"\
                , line 222, in stream\n    for chunk in self._stream(\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_openai/chat_models/base.py\"\
                , line 395, in _stream\n    for chunk in self.client.create(messages=message_dicts,
                **params):\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/contracts/generator_proxy.py\"\
                , line 27, in generate_from_proxy\n    yield from proxy\n  File \"\
                /home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/contracts/generator_proxy.py\"\
                , line 17, in __next__\n    item = next(self._generator)\n  File \"\
                /home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/_trace.py\"\
                , line 168, in traced_generator\n    token_collector.collect_openai_tokens_for_streaming(span,
                inputs, generator_output, parser.is_chat)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/_trace.py\"\
                , line 50, in collect_openai_tokens_for_streaming\n    tokens = calculator.get_openai_metrics_for_chat_api(inputs,
                output)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/_openai_utils.py\"\
                , line 102, in get_openai_metrics_for_chat_api\n    metrics[\"prompt_tokens\"\
                ] = self._get_prompt_tokens_from_messages(\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/_openai_utils.py\"\
                , line 138, in _get_prompt_tokens_from_messages\n    prompt_tokens
                += len(enc.encode(value))\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/tiktoken/core.py\"\
                , line 116, in encode\n    if match := _special_token_regex(disallowed_special).search(text):\n
                TypeError: expected string or buffer\n"
              filename: 
                /home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/tiktoken/core.py
              lineno: 116
              name: encode
          debugInfo:
            type: ToolExecutionError
            message: "Execution failure in 'langchain_node': (TypeError) expected
              string or buffer"
            stackTrace: "\nThe above exception was the direct cause of the following
              exception:\n\nTraceback (most recent call last):\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/executor/flow_executor.py\"\
              , line 926, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n\
              \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/executor/flow_executor.py\"\
              , line 836, in _exec_inner_with_trace\n    output, aggregation_inputs
              = self._exec_inner(\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/executor/flow_executor.py\"\
              , line 865, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs,
              context)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/executor/flow_executor.py\"\
              , line 1106, in _traverse_nodes\n    nodes_outputs, bypassed_nodes =
              self._submit_to_scheduler(context, inputs, batch_nodes)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/executor/flow_executor.py\"\
              , line 1143, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n\
              \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/executor/_flow_nodes_scheduler.py\"\
              , line 89, in execute\n    raise e\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/executor/_flow_nodes_scheduler.py\"\
              , line 72, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n\
              \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/executor/_flow_nodes_scheduler.py\"\
              , line 113, in _collect_outputs\n    each_node_result = each_future.result()\n\
              \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/concurrent/futures/_base.py\"\
              , line 451, in result\n    return self.__get_result()\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/concurrent/futures/_base.py\"\
              , line 403, in __get_result\n    raise self._exception\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/concurrent/futures/thread.py\"\
              , line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n\
              \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/executor/_flow_nodes_scheduler.py\"\
              , line 134, in _exec_single_node_in_thread\n    result = context.invoke_tool(node,
              f, kwargs=kwargs)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/_core/flow_execution_context.py\"\
              , line 87, in invoke_tool\n    result = self._invoke_tool_inner(node,
              f, kwargs)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/_core/flow_execution_context.py\"\
              , line 202, in _invoke_tool_inner\n    raise ToolExecutionError(node_name=node_name,
              module=module) from e\n"
            innerException:
              type: TypeError
              message: expected string or buffer
              stackTrace: "Traceback (most recent call last):\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/_core/flow_execution_context.py\"\
                , line 178, in _invoke_tool_inner\n    return f(**kwargs)\n  File
                \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/_trace.py\"\
                , line 380, in wrapped\n    output = func(*args, **kwargs)\n  File
                \"/home/junkimin/rag-patterns/flows/relational_data/langchain_node.py\"\
                , line 71, in langchain_tool\n    res = agent_executor.invoke({\n\
                \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/chains/base.py\"\
                , line 163, in invoke\n    raise e\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/chains/base.py\"\
                , line 153, in invoke\n    self._call(inputs, run_manager=run_manager)\n\
                \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/agents/agent.py\"\
                , line 1432, in _call\n    next_step_output = self._take_next_step(\n\
                \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/agents/agent.py\"\
                , line 1138, in _take_next_step\n    [\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/agents/agent.py\"\
                , line 1138, in <listcomp>\n    [\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/agents/agent.py\"\
                , line 1166, in _iter_next_step\n    output = self.agent.plan(\n \
                \ File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain/agents/agent.py\"\
                , line 397, in plan\n    for chunk in self.runnable.stream(inputs,
                config={\"callbacks\": callbacks}):\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 2685, in stream\n    yield from self.transform(iter([input]),
                config, **kwargs)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 2672, in transform\n    yield from self._transform_stream_with_config(\n\
                \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 1743, in _transform_stream_with_config\n    chunk: Output =
                context.run(next, iterator)  # type: ignore\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 2636, in _transform\n    for output in final_pipeline:\n  File
                \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 1209, in transform\n    for chunk in input:\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 4532, in transform\n    yield from self.bound.transform(\n\
                \  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/runnables/base.py\"\
                , line 1226, in transform\n    yield from self.stream(final, config,
                **kwargs)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py\"\
                , line 239, in stream\n    raise e\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py\"\
                , line 222, in stream\n    for chunk in self._stream(\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/langchain_openai/chat_models/base.py\"\
                , line 395, in _stream\n    for chunk in self.client.create(messages=message_dicts,
                **params):\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/contracts/generator_proxy.py\"\
                , line 27, in generate_from_proxy\n    yield from proxy\n  File \"\
                /home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/contracts/generator_proxy.py\"\
                , line 17, in __next__\n    item = next(self._generator)\n  File \"\
                /home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/_trace.py\"\
                , line 168, in traced_generator\n    token_collector.collect_openai_tokens_for_streaming(span,
                inputs, generator_output, parser.is_chat)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/_trace.py\"\
                , line 50, in collect_openai_tokens_for_streaming\n    tokens = calculator.get_openai_metrics_for_chat_api(inputs,
                output)\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/_openai_utils.py\"\
                , line 102, in get_openai_metrics_for_chat_api\n    metrics[\"prompt_tokens\"\
                ] = self._get_prompt_tokens_from_messages(\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/promptflow/tracing/_openai_utils.py\"\
                , line 138, in _get_prompt_tokens_from_messages\n    prompt_tokens
                += len(enc.encode(value))\n  File \"/home/junkimin/miniconda3/envs/openai/lib/python3.10/site-packages/tiktoken/core.py\"\
                , line 116, in encode\n    if match := _special_token_regex(disallowed_special).search(text):\n"
              innerException:
  debugInfo:
    type: BulkRunException
    message: "Failed to run 2/2 lines. First error message is: Execution failure in
      'langchain_node': (TypeError) expected string or buffer"
    stackTrace: "Traceback (most recent call last):\n"
    innerException:

Running Information (please complete the following information):

  • Promptflow package version with pf -v: 1.7.0
  • Langchain version: 0.1.13
  • Operating system: Ubuntu
  • Python version with python --version: 3.10

xjreopfe1#

FYI, I dug into the error and found the culprit:
`promptflow.tracing._openai_utils.OpenAIMetricsCalculator._get_prompt_tokens_from_messages`, specifically the line
`prompt_tokens += len(enc.encode(value))`. If I comment that out, the batch run works fine.
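To illustrate the failure mode (a hypothetical reconstruction, not promptflow's actual code): in streamed chat messages a message value can be `None` or a dict (e.g. for function calls), while tiktoken's `enc.encode` accepts only `str`, so it raises exactly this TypeError. A guard like the one below avoids it:

```python
def count_prompt_tokens(messages, encode=lambda s: s.split()):
    # `encode` is a stand-in for tiktoken's enc.encode; the real one raises
    # "TypeError: expected string or buffer" when given non-string input.
    tokens = 0
    for msg in messages:
        for value in msg.values():
            if not isinstance(value, str):  # skip None / dict values
                continue
            tokens += len(encode(value))
    return tokens

messages = [
    {"role": "system", "content": "You are helpful."},
    # A function-call turn: content is None and function_call is a dict.
    {"role": "assistant", "content": None,
     "function_call": {"name": "lookup", "arguments": "{}"}},
]
print(count_prompt_tokens(messages))  # -> 5 (only string values are counted)
```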
I'm using langchain's Agent API; here is my python node code for you to debug.

from typing import Dict

from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_benchmarks import registry
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder, PromptTemplate
from langchain_openai import AzureChatOpenAI

from promptflow.core import tool
from promptflow.connections import AzureOpenAIConnection

@tool
def langchain_tool(
    question: str,
    prompt_template: str,
    chat_model: str,
    conn: AzureOpenAIConnection,
) -> Dict:
    """
    A tool that uses the language model to answer a question based on a prompt template
    
    Args:
        question: User question
        prompt_template: Template for the prompt
        chat_model: Deployment name to use
        conn: Azure OpenAI connection object
    """
    prompt = ChatPromptTemplate.from_messages([
        SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template=prompt_template)),
        HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question'], template='{question}')),
        MessagesPlaceholder(variable_name='agent_scratchpad')
    ])

    llm = AzureChatOpenAI(
        model=chat_model,
        temperature=0,
        azure_endpoint=conn.api_base,
        api_version=conn.api_version,
        api_key=conn.api_key,
    )

    task = registry["Tool Usage - Relational Data"]
    env = task.create_environment()
    tools = env.tools

    agent = create_openai_functions_agent(llm=llm, tools=tools, prompt=prompt)
    agent_executor = AgentExecutor(
        agent=agent,
        tools=tools,
        verbose=True,
        handle_parsing_errors=True,
        return_intermediate_steps=True,
    )

    res = agent_executor.invoke({
        "question": question,
    })
    # We only want to know the step (tool) names
    res['intermediate_steps'] = [agent_action[0].tool for agent_action in res['intermediate_steps']]

    return res
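Until a fix ships, one possible stopgap (untested, and it relies on promptflow internals that may change between versions) is to wrap the failing method so a TypeError during token counting is ignored rather than failing the whole line. The pattern, shown on a stand-in class:

```python
import functools

class MetricsCalculator:  # stand-in for OpenAIMetricsCalculator
    def _get_prompt_tokens_from_messages(self, messages, *args, **kwargs):
        raise TypeError("expected string or buffer")  # simulate the bug

def ignore_token_count_errors(method):
    """Return a wrapper that turns a token-counting TypeError into 0."""
    @functools.wraps(method)
    def safe(self, *args, **kwargs):
        try:
            return method(self, *args, **kwargs)
        except TypeError:
            return 0  # skip the metric rather than fail the line
    return safe

# In a real flow you would patch the promptflow class instead, e.g.:
# from promptflow.tracing._openai_utils import OpenAIMetricsCalculator
# OpenAIMetricsCalculator._get_prompt_tokens_from_messages = ignore_token_count_errors(
#     OpenAIMetricsCalculator._get_prompt_tokens_from_messages)
MetricsCalculator._get_prompt_tokens_from_messages = ignore_token_count_errors(
    MetricsCalculator._get_prompt_tokens_from_messages
)
print(MetricsCalculator()._get_prompt_tokens_from_messages([]))  # -> 0 instead of raising
```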

jw5wzhpr2#

@loomlike Thanks for the information. This is a bug in this scenario; we will fix it.


avwztpqn3#

Hi, we're sending this friendly reminder because we haven't heard back from you in 30 days. We need more information about this issue to help address it. Please be sure to give us your feedback. If we don't hear back from you within 7 days of this comment, the issue will be closed automatically. Thank you!


brccelvz4#

Hi, we're sending this friendly reminder because we haven't heard back from you in 30 days. We need more information about this issue to help address it. Please be sure to give us your feedback. If we don't hear back from you within 7 days of this comment, the issue will be closed automatically. Thank you!
