I'm trying to write some code that finds the topK matching vectors in Pinecone and returns a summary answering the asked question based on those topK results.
For this I'm using LangChain and OpenAI (creating the embeddings in TypeScript).
Here is the code:
import { PineconeClient } from "@pinecone-database/pinecone";
import { VectorDBQAChain } from "langchain/chains";
import { PineconeStore } from "langchain/vectorstores/pinecone";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { OpenAI } from "langchain/llms/openai";
const embeddings = new OpenAIEmbeddings({
  openAIApiKey: process.env.OPENAI_API_KEY,
});
const pinecone = new PineconeClient();
const model = new OpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
});

async function main(namespaceID: string) {
  await pinecone.init({
    apiKey: process.env.PINECONE_API_KEY!,
    environment: "us-west4-gcp-free",
  });
  const pineconeIndex = pinecone.Index("index-test");
  const vectorStore = await PineconeStore.fromExistingIndex(embeddings, {
    pineconeIndex,
    namespace: namespaceID,
  });
  const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
    k: 5,
    returnSourceDocuments: true,
    verbose: true,
  });
  const response = await chain.call({ query: "What is the status of Project X" });
  console.log(response);
}

main("TEST_NAMESPACE").catch(console.error);
This code works fine. Now I'd like to stream the response from chain.call() to my console as it is generated.
Is there a solution or workaround for this?
1 Answer
Here is a modified version of the last part of the code you provided that enables streaming of the response:
You need to add a new import:
LangChain documentation on streaming
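The answer's code snippets did not survive here, so the following is only a minimal sketch of the approach that LangChain's streaming docs describe for this era of langchain: construct the model with `streaming: true` and register a `handleLLMNewToken` callback that writes each token as it arrives. The handler type below is defined locally for illustration; verify the exact constructor options against your installed langchain version.

```typescript
// Shape of the streaming callback used by LangChain's callbacks API
// (defined locally here so the sketch is self-contained).
type TokenHandler = { handleLLMNewToken: (token: string) => void };

// Writes each streamed token to the console as soon as it arrives.
const streamToConsole: TokenHandler = {
  handleLLMNewToken(token) {
    process.stdout.write(token);
  },
};

// With the real library, the model from the question would be constructed
// with streaming enabled (sketch, not verified against every version):
//
// const model = new OpenAI({
//   openAIApiKey: process.env.OPENAI_API_KEY,
//   streaming: true,
//   callbacks: [streamToConsole],
// });
//
// The chain.call() invocation itself stays unchanged: tokens are printed
// incrementally via the callback, and the full response object is still
// returned when the call resolves.
```

Note that with this pattern the tokens streamed to the console come from the LLM's final answer generation; `returnSourceDocuments` and the resolved response object are unaffected.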