How to run a Python script when starting a Next.js application

Asked by eufgjt7s, 12 months ago, in Python

I'm building the popular "chat with your PDF" project. The way I have it set up, all the LLM code lives in a Python script, which Flask exposes as an API that the front end can call. Right now, starting the app means running the usual "npm run dev" and then also running "python llmpython.py" to start the Python server. Is there an efficient way to launch both together, and how would this work once the app is deployed online?
Here is my llmpython.py file:

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import PyPDFLoader
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.chains.question_answering import load_qa_chain

from flask import Flask, jsonify, request
from flask_cors import CORS, cross_origin

# Load environment variables from .env
from dotenv import load_dotenv
import os
load_dotenv()
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')


# app instance
app = Flask(__name__)
CORS(app)
@app.route("/api/home", methods=['POST'])
@cross_origin()  # must come after @app.route so the CORS-wrapped view is registered
def chat_document():
    data = request.get_json()
    pdfUrl = data['url']
    query = data['chat']

    # Load the PDF; the URL comes from the front end via the POST request
    try:
        loader = PyPDFLoader(pdfUrl)
        data = loader.load_and_split()
    except Exception:
        return jsonify({'error': 'Error loading PDF'}), 400

    #Text Splitting
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    texts = text_splitter.split_documents(data)

    #Embedding and vector storage
    embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
    vectorstore = FAISS.from_documents(texts, embeddings)

    #query
    # query = "What's the main point of the document?"
    docs = vectorstore.similarity_search(query)

    #Load LLM and chatchain
    llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
    chain = load_qa_chain(llm, chain_type="stuff")
    llmresponse = chain.run(input_documents=docs, question=query)

    # CORS(app) already adds the Access-Control-Allow-Origin header, so
    # adding it again manually would produce a duplicate that browsers reject
    return jsonify({
        'message': llmresponse,
        'role': 'ai'
    })

@app.route("/api/guest", methods=['POST'])
def guest_document():
    data = request.get_json()
    pdfUrl = data['url']
    query1 = data['chat1']
    query2 = data['chat2']

    # Load the PDF; the URL comes from the front end via the POST request
    try:
        loader = PyPDFLoader(pdfUrl)
        data = loader.load_and_split()
    except Exception:
        return jsonify({'error': 'Error loading PDF'}), 400

    #Text Splitting
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    texts = text_splitter.split_documents(data)

    #Embedding and vector storage
    embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
    vectorstore = FAISS.from_documents(texts, embeddings)

    #query
    # query = "What's the main point of the document?"
    docs1 = vectorstore.similarity_search(query1)
    docs2 = vectorstore.similarity_search(query2)

    #Load LLM and chatchain
    llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
    chain = load_qa_chain(llm, chain_type="stuff")
    llmresponse1 = chain.run(input_documents=docs1, question=query1)
    llmresponse2 = chain.run(input_documents=docs2, question=query2)
    # CORS(app) already adds the Access-Control-Allow-Origin header,
    # so no manual header is needed here either
    return jsonify({
        'message1': llmresponse1,
        'message2': llmresponse2,
        'role': 'ai'
    })

if __name__ == "__main__":
    app.run(debug=True, port=8080)

Here is one of the components that calls the Flask app:

import { fileName, guestpdfUrl } from "@/components/Hero";
import { Button } from "@/components/ui/button";
import { useState } from "react";
import TabsSec from "./TabsSec";

const Guest = () => {
  const [summary, setSummary] = useState<string>("");
  const [bulletSummary, setBulletSummary] = useState<string>("");
  const [isLoading, setIsLoading] = useState<boolean>(false);

  const processDocument = (event: React.FormEvent) => {
    event.preventDefault();
    setIsLoading(true);

    fetch("http://localhost:8080/api/guest", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        url: guestpdfUrl,
        chat1: "Create a summary of this text",
        chat2: "Create a 10 bullet point summary of this text",
      }),
    })
      .then((response) => response.json())
      .then((data) => {
        setSummary(data.message1);
        setBulletSummary(data.message2);
        setIsLoading(false);
      })
      .catch((error) => {
        // Surface the failure and clear the loading state
        console.error(error);
        setIsLoading(false);
      });
  };


  return (
    <div className="flex items-center justify-center flex-col">
      <div className=" text-[#202942] mb-4 text-4xl md:text-5xl tracking-tight font-extrabold">
        Welcome Guest
      </div>
      <div className=" text-[#202942]  my-4 text-center text-xl md:text-2xl tracking-tight font-extrabold">
        You&apos;ve uploaded a PDF called {fileName}
      </div>
      <div className="mb-8">
        <Button
          className="rounded-full bg-[#202942] text-[#dfeff4] 
           hover:bg-[#3a435e]
         font-bold text-sm md:text-base py-2 px-3"
          onClick={processDocument}
        >
          Process Document
        </Button>
      </div>
      <div></div>
      <TabsSec
        summary={summary}
        bulletSummary={bulletSummary}
        isLoading={isLoading}
      />{" "}
    </div>
  );
};

export default Guest;


How would you suggest I implement this? I was considering using an exec child process to run "python llmpython.py" inside a useEffect, but that doesn't seem to be possible. I'd appreciate any suggestions.

tvokkenx #1

There is a tool called supervisor that lets you manage background processes and capture their error and output logs. I'd suggest running your Python app with gunicorn + supervisor. I don't know Next.js well, but https://nextjs.org/learn-pages-router/basics/deploying-nextjs-app is worth a look for the front-end side.
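For reference, a minimal supervisor program entry might look like the sketch below; the paths, project directory, and worker count are assumptions you would adapt to your own setup:

; /etc/supervisor/conf.d/llmpython.conf (hypothetical paths)
[program:llmpython]
; serve the Flask "app" object from llmpython.py via gunicorn on port 8080
command=/home/app/venv/bin/gunicorn -w 2 -b 0.0.0.0:8080 llmpython:app
directory=/home/app/pdf-chat
autostart=true
autorestart=true
stderr_logfile=/var/log/llmpython.err.log
stdout_logfile=/var/log/llmpython.out.log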
Also, don't forget to run them on different ports, and don't bind either one directly to ports 80/443; put a proxy like NGINX in front instead. After setting this up, make sure the Flask API is reachable beyond localhost, which means binding it to a non-local interface (e.g. host="0.0.0.0") so that Next.js can reach it.
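A sketch of the NGINX side, assuming Next.js listens on port 3000 and gunicorn/Flask on 8080 (yourdomain.com is a placeholder):

server {
    listen 80;
    server_name yourdomain.com;

    # Next.js front end
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }

    # Flask API: requests to /api/* go to gunicorn
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}

With this layout the front end can call the API through the same domain, so the hard-coded http://localhost:8080 in the fetch call should become a relative /api/... path in production.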
Here is a video that might help: https://www.youtube.com/watch?v=goToXTC96Co&list=PL-osiE80TeTs4UjLw5MM6OjgkjFeUxCYH&index=13
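For local development, one common pattern is a single npm script that starts both processes; this sketch assumes the concurrently package installed as a dev dependency:

{
  "scripts": {
    "dev": "next dev",
    "api": "python llmpython.py",
    "dev:all": "concurrently \"npm run dev\" \"npm run api\""
  }
}

Running npm run dev:all then brings up the Next.js dev server and the Flask API with one command.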
