llama_index [Question]: LlamaIndex multi-document agents

sqyvllje · asked 2 months ago in Other

Question Validation

  • I have searched both the documentation and Discord for an answer.

Question

Hi, I'm following the documentation, but instead of OpenAI I'm using Mistral 7B through HuggingFaceLLM.

import torch
from llama_index.core import Settings
from llama_index.llms.huggingface import HuggingFaceLLM

llm = HuggingFaceLLM(
    context_window=4096,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.0, "do_sample": False},
    query_wrapper_prompt=query_wrapper_prompt,
    tokenizer_name="mistralai/Mistral-7B-Instruct-v0.3",
    model_name="mistralai/Mistral-7B-Instruct-v0.3",
    device_map="auto",
    stopping_ids=[50278, 50279, 50277, 1, 0],
    tokenizer_kwargs={"max_length": 4096},
    # on CUDA, loading in float16 reduces memory usage
    model_kwargs={"torch_dtype": torch.float16}
)

Settings.llm = llm
Settings.chunk_size = 1024

Now, when I finally call response = top_agent.chat("Tell me about the status of Confezioni Simmy 2014 company"), I get the following error:
TemplateError: Conversation roles must alternate user/assistant/user/assistant/...
The prompt I'm using is:

from llama_index.core import PromptTemplate

SYSTEM_PROMPT = """You are an AI assistant that answers questions in a friendly manner, based on the given source documents. Here are some rules you always follow:
- Generate human readable output, avoid creating output with gibberish text.
- Generate only the requested output, don't include any other language before or after the requested output.
- Never say thank you, that you are happy to help, that you are an AI agent, etc. Just answer directly.
- Generate professional language typically used in business documents in North America.
- Never generate offensive or foul language.
"""

query_wrapper_prompt = PromptTemplate(
    "[INST]<<SYS>>\n" + SYSTEM_PROMPT + "<</SYS>>\n\n{query_str}[/INST] "
)
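
For reference, rendering the wrapper (reusing the SYSTEM_PROMPT and query_wrapper_prompt defined above) shows the llama-2-style layout the model receives; PromptTemplate.format just substitutes query_str:

print(query_wrapper_prompt.format(query_str="Hello"))
# [INST]<<SYS>>
# ...system prompt rules...
# <</SYS>>
#
# Hello[/INST]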

So the conversation does correctly start with a user turn.
Do you know how I can fix this?
Thanks!

ee7vknir 1#

Hi, @dragoa! I'm a bot that can help you solve bugs, answer questions, and more while you wait for a human maintainer. Let's work through this together! 🤖
To fix the TemplateError: Conversation roles must alternate user/assistant/user/assistant/... error, you need to make sure your ChatMessage objects alternate correctly between the user and assistant roles. Here is a code snippet demonstrating how to structure the messages correctly to avoid this error:

from llama_index.core.base.llms.types import ChatMessage, MessageRole
from llama_index.llms.mistralai import MistralAI
from transformers import BitsAndBytesConfig
import torch

# Define the function to convert messages to prompt
def messages_to_prompt(messages):
    prompt = ""
    for message in messages:
        if message.role == 'system':
            prompt += f"<|system|>\n{message.content}</s>\n"
        elif message.role == 'user':
            prompt += f"<|user|>\n{message.content}</s>\n"
        elif message.role == 'assistant':
            prompt += f"<|assistant|>\n{message.content}</s>\n"

    if not prompt.startswith("<|system|>\n"):
        prompt = "<|system|>\n</s>\n" + prompt

    prompt = prompt + "<|assistant|>\n"
    return prompt

# Define the function to convert completion to prompt
def completion_to_prompt(completion):
    return f"<|system|>\n</s>\n<|user|>\n{completion}</s>\n<|assistant|>\n"

# Quantize to save memory
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

# Initialize the MistralAI instance
llm = MistralAI(
    model="mistral-tiny",
    temperature=0.7,
    max_tokens=256,
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    additional_kwargs={"quantization_config": quantization_config},
)

# Example of correctly alternating messages
messages = [
    ChatMessage(role=MessageRole.USER, content="Hello, how are you?"),
    ChatMessage(role=MessageRole.ASSISTANT, content="I'm good, thank you! How can I assist you today?"),
    ChatMessage(role=MessageRole.USER, content="Can you tell me about the weather?"),
    ChatMessage(role=MessageRole.ASSISTANT, content="Sure, the weather today is sunny with a high of 75°F."),
]

# Ensure the messages are passed in the correct order to the chat method
response = llm.chat(messages=messages)
print(response.message.content)

Make sure every ChatMessage with the MessageRole.USER role is followed by a ChatMessage with the MessageRole.ASSISTANT role, and vice versa. This keeps the conversation roles alternating correctly and prevents the TemplateError. Sources:
1. https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/llms/mistral.py
2. https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/llms/llama-index-llms-mistral-rs/llama_index/llms/mistral_rs/base.py
3. https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/llms/llama-index-llms-huggingface/llama_index/llms/huggingface/base.py
4. https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/llms/llama-index-llms-mistralai/llama_index/llms/mistralai/base.py


About Dosu

This response is meant to be useful and save you time. It is not a precise solution, but a starting point for your own research.
Help me be more useful! Please leave a 👍 if this was helpful and a 👎 if it was irrelevant.

  • To continue the conversation, mention @dosu.
cu6pst1q 2#

Hi, when you call the chat method, it applies the chat template (from the tokenizer). For some reason the HuggingFace tokenizer for Mistral-7B-Instruct-v0.2 / Mistral-7B-Instruct-v0.3 ships an old template (I believe Mistral-7B-Instruct-v0.1 genuinely does not support system prompts). To work around this you can (a) edit tokenizer_config.json, (b) write your own messages_to_prompt in llama_index and pass it to the llm constructor, or (c) load a tokenizer that does work, e.g. the llama-2 chat one. 🤷
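(The rejection comes from the template itself, so it can be reproduced outside llama_index. A minimal sketch, assuming the tokenizer still ships the strict Jinja template:)

from transformers import AutoTokenizer

# Applying the chat template directly: with the strict template, a leading
# system message triggers the same error the agent run produces.
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]
tok.apply_chat_template(messages, tokenize=False)
# TemplateError: Conversation roles must alternate user/assistant/user/assistant/...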
I went with (b), since I'm not great at Jinja.
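
(For option (c), a hedged sketch: borrow a tokenizer whose chat template accepts a system role. meta-llama/Llama-2-7b-chat-hf is one illustrative choice; note the repo is gated on the Hub, and a mismatched tokenizer/template is not guaranteed to preserve output quality.)

from llama_index.llms.huggingface import HuggingFaceLLM

llm = HuggingFaceLLM(
    model_name="mistralai/Mistral-7B-Instruct-v0.3",
    # Borrowed tokenizer whose chat template allows a system prompt
    # (illustrative only; gated repo, quality not guaranteed).
    tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
    context_window=4096,
    max_new_tokens=256,
    device_map="auto",
)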

Custom messages_to_prompt

from llama_index.core.base.llms.types import MessageRole

def mistralv2_messages_to_prompt(messages):
    conversation = ""
    bos_token = "<s>"
    eos_token = "</s>"
    # Peel off an optional leading system message; it gets folded into the
    # first user turn below, since the template has no system role.
    if messages[0].role == MessageRole.SYSTEM:
        loop_messages = messages[1:]
        system_message = messages[0].content
    else:
        loop_messages = messages
        system_message = False

    for index, message in enumerate(loop_messages):
        # User turns must sit at even indices, assistant turns at odd ones.
        if (message.role == MessageRole.USER) != (index % 2 == 0):
            raise Exception('HFI Conversation roles must alternate user/assistant/user/assistant/...')
        if index == 0 and system_message is not False:
            content = '<<SYS>>\n' + system_message + '\n<</SYS>>\n\n' + message.content
        else:
            content = message.content
        if message.role == MessageRole.USER:
            conversation += bos_token + '[INST] ' + content.strip() + ' [/INST]'
        elif message.role == MessageRole.ASSISTANT:
            conversation += ' ' + content.strip() + ' ' + eos_token

    return conversation
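
(To wire it in, pass the function to the constructor. A minimal sketch reusing the setup from the question; with messages_to_prompt supplied, the tokenizer's chat template is bypassed on the chat path, so the query_wrapper_prompt wrapping should no longer be needed there:)

import torch
from llama_index.core import Settings
from llama_index.llms.huggingface import HuggingFaceLLM

llm = HuggingFaceLLM(
    context_window=4096,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.0, "do_sample": False},
    # Custom formatter replaces the tokenizer's strict chat template.
    messages_to_prompt=mistralv2_messages_to_prompt,
    tokenizer_name="mistralai/Mistral-7B-Instruct-v0.3",
    model_name="mistralai/Mistral-7B-Instruct-v0.3",
    device_map="auto",
    tokenizer_kwargs={"max_length": 4096},
    model_kwargs={"torch_dtype": torch.float16},
)
Settings.llm = llm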
