llama_index [Question]: The chat engine uses the internet to provide answers; how can I strictly restrict it to the vector space?

1wnzp6jl · posted 5 months ago in Other

Question Validation

  • I have searched the documentation and Discord for an answer.

Question

Hi @logan-markewich,
I want to strictly restrict my chatbot to the ingested vector space. How can I achieve this?
I tried using a response_synthesizer together with chat_mode:

# Imports assume the llama_index.core package layout; memory, chat_store,
# vector_index, and DEFAULT_TEXT_QA_PROMPT_TMPL are defined elsewhere in my app.
from llama_index.core import get_response_synthesizer
from llama_index.core.chat_engine.types import ChatMode
from llama_index.core.postprocessor import SimilarityPostprocessor
from llama_index.core.response_synthesizers import ResponseMode

response_synthesizer = get_response_synthesizer(
    response_mode=ResponseMode.SIMPLE_SUMMARIZE,
    structured_answer_filtering=True,
    verbose=True,
    text_qa_template=DEFAULT_TEXT_QA_PROMPT_TMPL,
)

chat_engine = vector_index.as_chat_engine(
    chat_mode=ChatMode.CONTEXT,
    memory=memory,
    chat_store=chat_store,
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.81)],
    response_synthesizer=response_synthesizer,
)

Can you help me with this?


2wnc66cl1#

To strictly restrict the chatbot to the ingested vector space and prevent it from using the internet to provide answers, you can customize the text_qa_template and refine_template with specific instructions. This ensures that the chatbot responds only based on the context of the ingested documents.
Here is an example of how to customize the templates:

text_qa_template = """
You are a helpful assistant. Answer the question based on the provided context. If the answer is not in the context, respond with "I don't have that information."

Context: {context}
Question: {question}
Answer:
"""

refine_template = """
You are a helpful assistant. Refine the following answer based on the provided context. If the answer is not in the context, respond with "I don't have that information."

Existing Answer: {existing_answer}
Context: {context}
Refined Answer:
"""

# Use these templates in your LLMChain or other relevant components
question_generator = LLMChain(
    llm=question_gen_llm, 
    prompt=text_qa_template, 
    callback_manager=manager
)

doc_chain = load_qa_chain(
    streaming_llm, 
    chain_type="stuff", 
    prompt=refine_template, 
    callback_manager=manager
)

This setup ensures that the chatbot's responses are based entirely on the ingested vector space and do not rely on external internet sources. You can further refine the chatbot's behavior by integrating it with a vector store and using a retrieval approach, as shown in this example setup that integrates LlamaIndex with LangChain:

def get_chain(vectorstore: VectorStore, question_handler, stream_handler) -> ConversationalRetrievalChain:
    manager = AsyncCallbackManager([])
    question_manager = AsyncCallbackManager([question_handler])
    stream_manager = AsyncCallbackManager([stream_handler])

    question_gen_llm = OpenAI(
        temperature=0,
        verbose=True,
        callback_manager=question_manager,
    )
    streaming_llm = OpenAI(
        streaming=True,
        callback_manager=stream_manager,
        verbose=True,
        temperature=0,
    )

    question_generator = LLMChain(
        llm=question_gen_llm, prompt=text_qa_template, callback_manager=manager
    )
    doc_chain = load_qa_chain(
        streaming_llm, chain_type="stuff", prompt=refine_template, callback_manager=manager
    )

    qa = ConversationalRetrievalChain(
        retriever=vectorstore.as_retriever(),
        combine_docs_chain=doc_chain,
        question_generator=question_generator,
        callback_manager=manager,
    )
    return qa

By following these steps, you can ensure that your chatbot's responses are limited to the ingested vector space and do not include information from external sources 1(https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/discover_llamaindex/document_management/discord_dumps/help_channel_dump_05_25_23.json) 2(https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/discover_llamaindex/document_management/discord_dumps/help_channel_dump_06_02_23.json) 3(https://github.com/run-llama/llama_index/blob/main/docs/docs/understanding/putting_it_all_together/chatbots/building_a_chatbot.md).

  • To continue the conversation with Dosu, mention @dosu.

rbpvctlc2#

I believe llama-index does not use LLMChain and ConversationalRetrievalChain.
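For reference, in llama_index itself the prompt overrides are passed directly to the response synthesizer or query engine, not wrapped in LangChain chains. A minimal sketch (the import paths assume the llama_index.core package layout, the template strings are illustrative placeholders, and vector_index is the index from the question):

from llama_index.core import PromptTemplate, get_response_synthesizer
from llama_index.core.response_synthesizers import ResponseMode

# Placeholder QA prompt: answer only from the retrieved context.
my_qa_tmpl = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the query using only the context above. If the answer is not "
    "in the context, say 'I am not sure.'\n"
    "Query: {query_str}\n"
    "Answer: "
)

# Placeholder refine prompt: refine only with the new context, never prior knowledge.
my_refine_tmpl = (
    "The original query is: {query_str}\n"
    "We have an existing answer: {existing_answer}\n"
    "Refine it using only the new context below; otherwise keep the existing answer.\n"
    "---------------------\n"
    "{context_msg}\n"
    "---------------------\n"
    "Refined Answer: "
)

# The templates go straight into the response synthesizer / query engine.
response_synthesizer = get_response_synthesizer(
    response_mode=ResponseMode.COMPACT,
    text_qa_template=PromptTemplate(my_qa_tmpl),
    refine_template=PromptTemplate(my_refine_tmpl),
)
query_engine = vector_index.as_query_engine(
    response_synthesizer=response_synthesizer,
)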


14ifxucb3#

It is not accessing the internet; it is either using its internal knowledge or hallucinating.
You can try adding an extra system prompt with more instructions on how the LLM should behave, as in the sketch below.
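As a rough sketch (reusing vector_index and memory from the question; the instruction string is just a hypothetical example):

from llama_index.core.chat_engine.types import ChatMode

# Hypothetical extra system prompt telling the LLM to stay inside the retrieved context.
system_prompt = (
    "Answer only from the retrieved context. If the context does not contain "
    "the answer, reply with 'I am not sure.' Do not use prior knowledge."
)

chat_engine = vector_index.as_chat_engine(
    chat_mode=ChatMode.CONTEXT,
    memory=memory,
    system_prompt=system_prompt,
)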


nlejzf6q4#

Tried this, and it doesn't seem to work:

DEFAULT_TEXT_QA_PROMPT_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "You are a code chatbot\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "If the query is generic, do not provide answer \n"
    "with your knowledge, just say, I cannot provide answer.\n"
    "If no context is retrieved, do not synthesis any answer "
    "with previous history or context.\n"
    "Straight up say, 'I am not sure.'\n"
    "Query: {query_str}\n"
    "Answer: "
)

system_prompt = """
You are a helpful code chat assistant. 
Your responses should be based solely on the retrieved context provided to you. 
If no relevant context is retrieved or if the retrieved context does not contain the necessary information to answer the question, respond with "I'm not sure" or "I don't have enough information to answer that question."
Do not synthesize or generate answers based on general knowledge or information from the internet.
Stick strictly to the information provided in the retrieved context.
"""

chat_engine = vector_index.as_chat_engine(
        chat_mode=ChatMode.CONTEXT,
        memory=memory,
        chat_store=chat_store,
        node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.81)],
        text_qa_template=PromptTemplate(DEFAULT_TEXT_QA_PROMPT_TMPL),
        system_prompt=system_prompt,
        verbose=True,
        response_mode="no_text"
)
"""

Can you help?
An example:
Question: "Where is India?"
Answer: "India is located in South Asia. It is the seventh-largest country by area, the most populous country as of June 2023, and it has been the world's most populous democracy since its independence in 1947."
