llama_index: How do I get source_node from a multi-document agent?

qqrboqgw · posted 2 months ago in Other
Follow (0) | Answers (5) | Views (58)

Question validation

  • I have searched the documentation and Discord for an answer.

Question

I followed this article for the implementation, but when I use query(), the logs only show "=== LLM Response ===" and response.source_nodes is empty. I want to understand what went wrong, and how to print out the source_nodes and chunk_text for every step.
Main code:

Build a document agent for each document

from llama_index.agent.openai import OpenAIAgent
from llama_index.core import (
    Settings,
    StorageContext,
    SummaryIndex,
    VectorStoreIndex,
    load_index_from_storage,
)
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.llms.openai import OpenAI
import os

node_parser = SentenceSplitter()

# Build agents dictionary
agents = {}
query_engines = {}

# this is for the baseline
all_nodes = []

for idx, wiki_title in enumerate(wiki_titles):
    nodes = node_parser.get_nodes_from_documents(city_docs[wiki_title])
    all_nodes.extend(nodes)

    if not os.path.exists(f"./data/{wiki_title}"):
        # build vector index
        vector_index = VectorStoreIndex(nodes)
        vector_index.storage_context.persist(
            persist_dir=f"./data/{wiki_title}"
        )
    else:
        vector_index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=f"./data/{wiki_title}"),
        )

    # build summary index
    summary_index = SummaryIndex(nodes)
    # define query engines
    vector_query_engine = vector_index.as_query_engine(llm=Settings.llm)
    summary_query_engine = summary_index.as_query_engine(llm=Settings.llm)

    # define tools
    query_engine_tools = [
        QueryEngineTool(
            query_engine=vector_query_engine,
            metadata=ToolMetadata(
                name="vector_tool",
                description=(
                    "Useful for questions related to specific aspects of"
                    f" {wiki_title} (e.g. the history, arts and culture,"
                    " sports, demographics, or more)."
                ),
            ),
        ),
        QueryEngineTool(
            query_engine=summary_query_engine,
            metadata=ToolMetadata(
                name="summary_tool",
                description=(
                    "Useful for any requests that require a holistic summary"
                    f" of EVERYTHING about {wiki_title}. For questions about"
                    " more specific sections, please use the vector_tool."
                ),
            ),
        ),
    ]

    # build agent
    function_llm = OpenAI(model="gpt-4")
    agent = OpenAIAgent.from_tools(
        query_engine_tools,
        llm=function_llm,
        verbose=True,
        system_prompt=f"""\
You are a specialized agent designed to answer queries about {wiki_title}.
You must ALWAYS use at least one of the tools provided when answering a question; do NOT rely on prior knowledge.\
""",
    )

    agents[wiki_title] = agent
    query_engines[wiki_title] = vector_index.as_query_engine(
        similarity_top_k=2
    )

Build a retriever-enabled OpenAI agent on top of the document agents

# define tool for each document agent
all_tools = []
for wiki_title in wiki_titles:
    wiki_summary = (
        f"This content contains Wikipedia articles about {wiki_title}. Use"
        f" this tool if you want to answer any questions about {wiki_title}.\n"
    )
    doc_tool = QueryEngineTool(
        query_engine=agents[wiki_title],
        metadata=ToolMetadata(
            name=f"tool_{wiki_title}",
            description=wiki_summary,
        ),
    )
    all_tools.append(doc_tool)
# define an "object" index and retriever over these tools
from llama_index.core import VectorStoreIndex
from llama_index.core.objects import ObjectIndex

obj_index = ObjectIndex.from_objects(
    all_tools,
    index_cls=VectorStoreIndex,
)
from llama_index.agent.openai import OpenAIAgent

top_agent = OpenAIAgent.from_tools(
    tool_retriever=obj_index.as_retriever(similarity_top_k=3),
    system_prompt=""" \
You are an agent designed to answer queries about a set of given cities.
Please always use the tools provided to answer a question. Do not rely on prior knowledge.\

""",
    verbose=True,
)
response = top_agent.query(
    "Give me a summary of all the positive aspects of Houston"
)
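
For reference, this is roughly how I check the sources afterwards (a minimal check on the response object returned above):

# The list below comes back empty in my case
print(len(response.source_nodes))
for node_with_score in response.source_nodes:
    print(node_with_score.node.get_content())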

3yhwsihp1#

To get source_node from a multi-document agent and print the source_nodes and chunk_text for all steps, you can use the pprint_response function provided in the llama_index.legacy.response.pprint_utils module. This function already contains the logic for printing source_nodes and their content.
Here is an example of how to use this function:

from llama_index.legacy.response.pprint_utils import pprint_response

# Assuming `response` is an instance of `Response` that you have obtained from the multi-document agent
pprint_response(response, show_source=True)

This prints the source_nodes and their chunk_text for all steps; the show_source=True argument ensures the source nodes are displayed.
Internally, pprint_response calls pprint_source_node to print each source_node and its content. Here is the relevant snippet:

# Excerpt from llama_index/legacy/response/pprint_utils.py
import textwrap

def pprint_response(
    response: Response,
    source_length: int = 350,
    wrap_width: int = 70,
    show_source: bool = False,
) -> None:
    """Pretty print response for jupyter notebook."""
    if response.response is None:
        response_text = "None"
    else:
        response_text = response.response.strip()

    response_text = f"Final Response: {response_text}"
    print(textwrap.fill(response_text, width=wrap_width))

    if show_source:
        for ind, source_node in enumerate(response.source_nodes):
            print("_" * wrap_width)
            print(f"Source Node {ind + 1}/{len(response.source_nodes)}")
            pprint_source_node(
                source_node, source_length=source_length, wrap_width=wrap_width
            )

This function iterates over all source_nodes in the response and prints their details, including the chunk_text. [1](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/response/pprint_utils.py) [2](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/response/pprint_utils.py)
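
If you would rather not depend on the legacy module, a minimal manual equivalent is a short loop (a sketch, assuming response is a Response whose source_nodes are populated):

# Manually walk the source nodes and print a truncated chunk_text for each
for ind, source_node in enumerate(response.source_nodes):
    chunk_text = source_node.node.get_content()
    print(f"Source Node {ind + 1}/{len(response.source_nodes)}")
    print(f"node_id: {source_node.node.node_id}, score: {source_node.score}")
    print(chunk_text[:350])  # same 350-character default as pprint_response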

Details

About Dosu

This response is meant to be helpful and save you time. It is not a precise solution, but a starting point for your own research.
Help me be more useful! Please leave a 👍 if this was helpful and a 👎 if it is irrelevant.

  • To continue the conversation, mention @dosu.

bsxbgnwa2#

Hi @dosu, why is my response.source_nodes empty?


pxq42qpu3#

To troubleshoot response.source_nodes being empty, you can modify the _query method of the RetrySourceQueryEngine class and add logging statements that print the source_nodes and their chunk_text. Here is a minimal sketch of the change:

def _query(self, query_bundle):
    response = self._query_engine._query(query_bundle)
    # Add logging statements here, for example:
    for source_node in response.source_nodes:
        print("source_node:", source_node)
        print("chunk_text:", source_node.node.get_content())
    return response

Additionally, you can use the pprint_response function from the pprint_utils.py module to make sure the response includes readable source-node information; it formats the response with each source node's metadata (such as node_id and score). Its usage is the same as shown in the first answer:

from llama_index.legacy.response.pprint_utils import pprint_response

pprint_response(response, show_source=True)

With these changes in place, the source_nodes and their chunk_text are logged and included in the response, which should help you debug and resolve the empty response.source_nodes issue.


l7mqbcuq4#

Hi @dosu,

I tried the RetrySourceQueryEngine approach you mentioned, but I got ValueError: No source nodes passed evaluation. Why is that?

Here is my code:

from llama_index.agent.openai import OpenAIAgent
from llama_index.core import (
    Settings,
    StorageContext,
    SummaryIndex,
    VectorStoreIndex,
    load_index_from_storage,
)
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.llms.openai import OpenAI
import os

node_parser = SentenceSplitter()

# Build agents dictionary
agents = {}
query_engines = {}

# this is for the baseline
all_nodes = []

for idx, wiki_title in enumerate(wiki_titles):
    nodes = node_parser.get_nodes_from_documents(city_docs[wiki_title])
    all_nodes.extend(nodes)

    if not os.path.exists(f"./data/{wiki_title}"):
        # build vector index
        vector_index = VectorStoreIndex(nodes)
        vector_index.storage_context.persist(
            persist_dir=f"./data/{wiki_title}"
        )
    else:
        vector_index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=f"./data/{wiki_title}"),
        )

    # build summary index
    summary_index = SummaryIndex(nodes)
    # define query engines
    vector_query_engine = vector_index.as_query_engine(llm=Settings.llm)
    summary_query_engine = summary_index.as_query_engine(llm=Settings.llm)

    # define tools
    query_engine_tools = [
        QueryEngineTool(
            query_engine=vector_query_engine,
            metadata=ToolMetadata(
                name="vector_tool",
                description=(
                    "Useful for questions related to specific aspects of"
                    f" {wiki_title} (e.g. the history, arts and culture,"
                    " sports, demographics, or more)."
                ),
            ),
        ),
        QueryEngineTool(
            query_engine=summary_query_engine,
            metadata=ToolMetadata(
                name="summary_tool",
                description=(
                    "Useful for any requests that require a holistic summary"
                    f" of EVERYTHING about {wiki_title}. For questions about"
                    " more specific sections, please use the vector_tool."
                ),
            ),
        ),
    ]

    # build agent
    function_llm = OpenAI(model="gpt-4")
    agent = OpenAIAgent.from_tools(
        query_engine_tools,
        llm=function_llm,
        verbose=True,
        system_prompt=f"""\
You are a specialized agent designed to answer queries about {wiki_title}.
You must ALWAYS use at least one of the tools provided when answering a question; do NOT rely on prior knowledge.\
""",
    )

    agents[wiki_title] = agent
    query_engines[wiki_title] = vector_index.as_query_engine(
        similarity_top_k=2
    )
# define tool for each document agent
all_tools = []
for wiki_title in wiki_titles:
    wiki_summary = (
        f"This content contains Wikipedia articles about {wiki_title}. Use"
        f" this tool if you want to answer any questions about {wiki_title}.\n"
    )
    doc_tool = QueryEngineTool(
        query_engine=agents[wiki_title],
        metadata=ToolMetadata(
            name=f"tool_{wiki_title}",
            description=wiki_summary,
        ),
    )
    all_tools.append(doc_tool)
# define an "object" index and retriever over these tools
from llama_index.core import VectorStoreIndex
from llama_index.core.objects import ObjectIndex

obj_index = ObjectIndex.from_objects(
    all_tools,
    index_cls=VectorStoreIndex,
)
from llama_index.agent.openai import OpenAIAgent

top_agent = OpenAIAgent.from_tools(
    tool_retriever=obj_index.as_retriever(similarity_top_k=3),
    system_prompt=""" \
You are an agent designed to answer queries about a set of given cities.
Please always use the tools provided to answer a question. Do not rely on prior knowledge.\

""",
    verbose=True,
)
import logging
from typing import Optional

from llama_index.core.base.base_query_engine import BaseQueryEngine
from llama_index.core.base.response.schema import RESPONSE_TYPE, Response
from llama_index.core.callbacks.base import CallbackManager
from llama_index.core.evaluation import BaseEvaluator
from llama_index.core.indices.list.base import SummaryIndex
from llama_index.core.llms.llm import LLM
from llama_index.core.prompts.mixin import PromptMixinType
from llama_index.core.query_engine.retriever_query_engine import (
    RetrieverQueryEngine,
)
from llama_index.core.schema import Document, QueryBundle
from llama_index.core.service_context import ServiceContext
from llama_index.core.settings import (
    Settings,
    callback_manager_from_settings_or_context,
    llm_from_settings_or_context,
)

logger = logging.getLogger(__name__)

class RetrySourceQueryEngine(BaseQueryEngine):
    """Retry with different source nodes."""

    def __init__(
        self,
        query_engine: RetrieverQueryEngine,
        evaluator: BaseEvaluator,
        llm: Optional[LLM] = None,
        max_retries: int = 3,
        callback_manager: Optional[CallbackManager] = None,
        # deprecated
        service_context: Optional[ServiceContext] = None,
    ) -> None:
        """Run a BaseQueryEngine with retries."""
        self._query_engine = query_engine
        self._evaluator = evaluator
        self._llm = llm or llm_from_settings_or_context(Settings, service_context)
        self.max_retries = max_retries
        super().__init__(
            callback_manager=callback_manager
            or callback_manager_from_settings_or_context(Settings, service_context)
        )

    def _get_prompt_modules(self) -> PromptMixinType:
        """Get prompt sub-modules."""
        return {"query_engine": self._query_engine, "evaluator": self._evaluator}

    def _query(self, query_bundle: QueryBundle) -> RESPONSE_TYPE:
        response = self._query_engine._query(query_bundle)
        if self.max_retries <= 0:
            return response
        typed_response = (
            response if isinstance(response, Response) else response.get_response()
        )
        query_str = query_bundle.query_str
        eval = self._evaluator.evaluate_response(query_str, typed_response)
        if eval.passing:
            logger.debug("Evaluation returned True.")
            return response
        else:
            logger.debug("Evaluation returned False.")
            # Test source nodes
            source_evals = [
                self._evaluator.evaluate(
                    query=query_str,
                    response=typed_response.response,
                    contexts=[source_node.get_content()],
                )
                for source_node in typed_response.source_nodes
            ]
            orig_nodes = typed_response.source_nodes

            # Log source_nodes and chunk_text
            for i, source_node in enumerate(orig_nodes):
                logger.debug(f"Source Node {i}: {source_node}")
                logger.debug(f"Chunk Text {i}: {source_node.node.get_content()}")

            assert len(source_evals) == len(orig_nodes)
            new_docs = []
            for node, eval_result in zip(orig_nodes, source_evals):
                if eval_result:
                    new_docs.append(Document(text=node.node.get_content()))
            if len(new_docs) == 0:
                raise ValueError("No source nodes passed evaluation.")
            new_index = SummaryIndex.from_documents(
                new_docs,
            )
            new_retriever_engine = RetrieverQueryEngine(new_index.as_retriever())
            new_query_engine = RetrySourceQueryEngine(
                new_retriever_engine,
                self._evaluator,
                self._llm,
                self.max_retries - 1,
            )
            return new_query_engine.query(query_bundle)

    async def _aquery(self, query_bundle: QueryBundle) -> RESPONSE_TYPE:
        """Not supported."""
        return self._query(query_bundle)
from llama_index.core.evaluation import RelevancyEvaluator

query_response_evaluator = RelevancyEvaluator()
retry_source_query_engine = RetrySourceQueryEngine(
    query_engine=top_agent, evaluator=query_response_evaluator, max_retries=5
)
retry_source_response = retry_source_query_engine.query(test_queries[query_num])
print(retry_source_response)

lokaqttq5#

The ValueError: No source nodes passed evaluation. error occurs when none of the source nodes meets the criteria defined by the evaluator. To resolve it, you need to make sure at least one source node passes evaluation. The following steps can help you debug and fix the problem:

  1. Check the evaluation criteria: make sure the criteria in your RelevancyEvaluator are not so strict that every source node fails.
  2. Debug the source nodes: add logging to inspect the content of the source nodes and their evaluation results, so you can see why they fail.
  3. Fallback mechanism: implement a fallback for the case where no source node passes evaluation (a minimal sketch follows the logging example below).

Here is an example of how to add logging to debug the issue:
def _query(self, query_bundle: QueryBundle) -> RESPONSE_TYPE:
    response = self._query_engine._query(query_bundle)
    if self.max_retries <= 0:
        return response
    typed_response = (
        response if isinstance(response, Response) else response.get_response()
    )
    query_str = query_bundle.query_str
    eval = self._evaluator.evaluate_response(query_str, typed_response)
    if eval.passing:
        logger.debug("Evaluation returned True.")
        return response
    else:
        logger.debug("Evaluation returned False.")
        # Test source nodes
        source_evals = [
            self._evaluator.evaluate(
                query=query_str,
                response=typed_response.response,
                contexts=[source_node.get_content()],
            )
            for source_node in typed_response.source_nodes
        ]
        orig_nodes = typed_response.source_nodes

        # Log source_nodes and chunk_text
        for i, source_node in enumerate(orig_nodes):
            logger.debug(f"Source Node {i}: {source_node}")
            logger.debug(f"Chunk Text {i}: {source_node.node.get_content()}")

        assert len(source_evals) == len(orig_nodes)
        new_docs = []
        for node, eval_result in zip(orig_nodes, source_evals):
            if eval_result:
                new_docs.append(Document(text=node.get_content()))
        if len(new_docs) == 0:
            logger.error("No source nodes passed evaluation. Original nodes: %s", orig_nodes)
            raise ValueError("No source nodes passed evaluation.")
        new_index = SummaryIndex.from_documents(
            new_docs,
        )
        new_retriever_engine = RetrieverQueryEngine(new_index.as_retriever())
        new_query_engine = RetrySourceQueryEngine(
            new_retriever_engine,
            self._evaluator,
            self._llm,
            self.max_retries - 1,
        )
        return new_query_engine.query(query_bundle)
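
For step 3, one possible fallback (a sketch, under the assumption that returning the unevaluated response is acceptable for your use case) is to replace the raise with a warning:

# Fallback sketch: return the original response instead of raising
if len(new_docs) == 0:
    logger.warning(
        "No source nodes passed evaluation; falling back to the original response."
    )
    return response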

By adding these debugging statements, you can gain more insight into why the source nodes fail evaluation, and adjust your evaluator or source nodes accordingly.
Also make sure your RelevancyEvaluator is configured correctly. The RelevancyEvaluator uses a template that asks whether the response is consistent with the context information, answering "YES" if it is and "NO" otherwise; a "YES" (relevant) verdict counts as passing.
By keeping your evaluation criteria appropriate and adding logging for debugging, you can better understand and resolve the situation where no source node passes evaluation. [1](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/evaluation/relevancy.py)
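
As a quick standalone sanity check of the evaluator (a sketch with a hypothetical query and context, and assuming OpenAI credentials are configured), you can pass an explicit LLM and inspect the verdict:

from llama_index.core.evaluation import RelevancyEvaluator
from llama_index.llms.openai import OpenAI

# Pass an explicit LLM so the evaluator does not depend on the global default
evaluator = RelevancyEvaluator(llm=OpenAI(model="gpt-4"))
result = evaluator.evaluate(
    query="What is the population of Houston?",
    response="Houston has about 2.3 million residents.",
    contexts=["Houston's population was 2,304,580 at the 2020 census."],
)
print(result.passing, result.feedback)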

Details

About Dosu

This response is meant to help you save time and be more efficient. It is not a precise solution, but a starting point for your own research.
Help me be more useful! Please give a 👍 if it was helpful and a 👎 if it is irrelevant.

  • To continue the conversation, mention @dosu.
