llama_index [Question]: How to do a text-based RAG using AzureOpenAIMultiModal and gpt-4o

relj7zay · asked 2 months ago · in: Other

Question Validation

  • I have searched both the documentation and discord for an answer.

Question

I am trying to use gpt-4o as my model for RAG over PDF/txt documents. Could someone please provide an example?
Do I need to use the AzureOpenAIMultiModal class or AzureOpenAI?
When I use AzureOpenAI, it complains that the model is not supported. Here is my code:

return AzureOpenAI(
    model="gpt-4o-2024-05-13",
    deployment_name="gpt4o",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    openai_api_type="azure",
    openai_api_version='2024-05-13',
    temperature=0
)

And the error message:

ValueError: Unknown model 'gpt-4o-2024-05-13'. Please provide a valid OpenAI model name in: gpt-4, gpt-4-32k, gpt-4-1106-preview, gpt-4-0125-preview, gpt-4-turbo-preview, gpt-4-vision-preview, gpt-4-1106-vision-preview, gpt-4-turbo-2024-04-09, gpt-4-turbo, gpt-4-0613, gpt-4-32k-0613, gpt-4-0314, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-3.5-turbo-0125, gpt-3.5-turbo-1106, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, gpt-3.5-turbo-0301, text-davinci-003, text-davinci-002, gpt-3.5-turbo-instruct, text-ada-001, text-babbage-001, text-curie-001, ada, babbage, curie, davinci, gpt-35-turbo-16k, gpt-35-turbo, gpt-35-turbo-0125, gpt-35-turbo-1106, gpt-35-turbo-0613, gpt-35-turbo-16k-0613

1tuwyuhd #1

To do retrieval-augmented generation (RAG) over PDF/txt documents with the gpt-4o model, you need to use the AzureOpenAIMultiModal class. The AzureOpenAI class does not support the gpt-4o model, which is why you are seeing that error.
Here is an example of how to configure and use the AzureOpenAIMultiModal class for your task:

1. Set the environment variables:

Make sure the following environment variables are set (a quick Python sketch follows this list):

  • OPENAI_API_VERSION: 2023-05-15
  • AZURE_OPENAI_ENDPOINT: your Azure endpoint URL, e.g. https://YOUR_RESOURCE_NAME.openai.azure.com/
  • AZURE_OPENAI_API_KEY: your Azure API key
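For example, these can be set from Python before anything else is initialized (all values below are placeholders):

import os

# Placeholder values; replace with your own Azure resource details
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
os.environ["AZURE_OPENAI_API_KEY"] = "your_api_key"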
2. Initialize the AzureOpenAIMultiModal class:

Use the following code to initialize the AzureOpenAIMultiModal class:

from llama_index.legacy.multi_modal_llms.azure_openai import AzureOpenAIMultiModal
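# On the current (non-legacy) package layout, the equivalent import is:
# from llama_index.multi_modal_llms.azure_openai import AzureOpenAIMultiModal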

azure_openai_multi_modal = AzureOpenAIMultiModal(
    model="gpt-4o",
    engine="your_deployment_name",  # Replace with your deployment name
    azure_endpoint="https://YOUR_RESOURCE_NAME.openai.azure.com/",  # Replace with your endpoint URL
    api_key="your_api_key",  # Replace with your API key
    api_version="2023-05-15"
)
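Before wiring the model into an index, you can sanity-check the client directly. A minimal sketch, assuming the multi-modal complete() interface, which takes a list of image documents alongside the prompt (empty here for a text-only call):

# Text-only sanity check of the Azure client configured above
response = azure_openai_multi_modal.complete(
    prompt="Describe what a RAG pipeline does in one sentence.",
    image_documents=[],  # no images needed for a pure-text call
)
print(response.text)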
3. Build and query the multi-modal vector store:

Use MultiModalVectorStoreIndex and SimpleMultiModalQueryEngine to build and query the multi-modal vector store. Here is an example:

from llama_index.core.indices import MultiModalVectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore
from llama_index.core import SimpleDirectoryReader, StorageContext
import qdrant_client

# Create a local Qdrant vector store
client = qdrant_client.QdrantClient(path="qdrant_mm_db")

text_store = QdrantVectorStore(client=client, collection_name="text_collection")
image_store = QdrantVectorStore(client=client, collection_name="image_collection")

storage_context = StorageContext.from_defaults(vector_store=text_store, image_store=image_store)

# Load text and image documents from local folder
documents = SimpleDirectoryReader("./data_folder/").load_data()

# Create the MultiModal index
index = MultiModalVectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Use MultiModal Retriever and Query Engine
from llama_index.core import PromptTemplate
from llama_index.core.query_engine import SimpleMultiModalQueryEngine

query_str = "Tell me more about the Porsche"

# Retrieve the top text and image nodes for the query
retriever_engine = index.as_retriever(similarity_top_k=3, image_similarity_top_k=3)
retrieval_results = retriever_engine.retrieve(query_str)

qa_tmpl_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)
qa_tmpl = PromptTemplate(qa_tmpl_str)

query_engine = index.as_query_engine(multi_modal_llm=azure_openai_multi_modal, text_qa_template=qa_tmpl)

response = query_engine.query(query_str)
print(str(response))

This setup will let you run RAG over PDF/txt documents with the gpt-4o model on Azure OpenAI. Sources:

  • https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/multi_modal_llms/azure_openai.py
  • https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/multi_modal_llms/llama-index-multi-modal-llms-azure-openai/llama_index/multi_modal_llms/azure_openai/base.py
  • https://github.com/run-llama/llama_index/blob/main/docs/docs/module_guides/models/multi_modal.md
  • https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/multi_modal_llms/openai.py


rjjhvcjd #2

If I want to create a MultiModalVectorStoreIndex from nodes, how would I do that?
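This is not answered in the thread, but as a rough sketch: MultiModalVectorStoreIndex subclasses VectorStoreIndex, so it should accept pre-built nodes directly, like the other index classes. The node parser and chunk size below are illustrative choices, not the library's prescribed approach:

from llama_index.core import SimpleDirectoryReader, StorageContext
from llama_index.core.indices import MultiModalVectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter
from llama_index.vector_stores.qdrant import QdrantVectorStore
import qdrant_client

# Same text/image stores as in the earlier example
client = qdrant_client.QdrantClient(path="qdrant_mm_db")
text_store = QdrantVectorStore(client=client, collection_name="text_collection")
image_store = QdrantVectorStore(client=client, collection_name="image_collection")
storage_context = StorageContext.from_defaults(vector_store=text_store, image_store=image_store)

# Parse documents into nodes explicitly instead of calling from_documents()
documents = SimpleDirectoryReader("./data_folder/").load_data()
nodes = SentenceSplitter(chunk_size=512).get_nodes_from_documents(documents)

# Pass the pre-built nodes straight to the index constructor
index = MultiModalVectorStoreIndex(nodes=nodes, storage_context=storage_context)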


vi4fp9gy #3

@mballav, it is supported in the LLM, but you may need to update:
pip install -U llama-index-llms-azure-openai llama-index-llms-openai
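Since the original ask was text-based RAG, once updated the plain AzureOpenAI LLM should accept gpt-4o. A minimal text-only sketch; the deployment names and embedding model are placeholders, and note that api_version is an Azure API version date such as "2023-05-15", not the model snapshot date that the original snippet passed:

import os
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding

Settings.llm = AzureOpenAI(
    model="gpt-4o",
    deployment_name="gpt4o",  # your chat deployment name
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-05-15",  # an Azure API version; a newer one may be required for gpt-4o
    temperature=0,
)

# Embeddings also need an Azure deployment; names here are placeholders
Settings.embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="your_embedding_deployment",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-05-15",
)

# Plain text RAG: no multi-modal classes needed
documents = SimpleDirectoryReader("./data_folder/").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("Tell me more about the Porsche"))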


qcuzuvrc #4

Thank you so much, @logan-markewich! You saved my day.
