next.js: how to run the Vercel AI SDK on AWS Lambda and API Gateway

wvt8vs2t · asked 12 months ago

I'm trying to host my Next.js Vercel AI SDK app on AWS behind CloudFront, Lambda, and API Gateway.
I want to modify the useChat() call to point at my Lambda function's API, which makes the connection to OpenAI and returns a StreamingTextResponse.
However, the StreamingTextResponse body's stream always comes back undefined.
What can I do to fix this?
Any help is appreciated, thanks.
page.tsx

"use client";

import { useChat } from "ai/react";
import { useState, useEffect } from "react";

export default function Chat() {
  
  // point useChat at the external AWS endpoint instead of the default /api/chat
  const { messages, input, handleInputChange, handleSubmit, data } = useChat({ api: '/myAWSAPI' });
 
...

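For reference, a complete minimal version of such a component might look like the sketch below; the /myAWSAPI path comes from the question, while the rendering JSX is my own assumption based on the Vercel AI SDK examples, not the asker's actual code:

"use client";

import { useChat } from "ai/react";

export default function Chat() {
  // api points at the externally hosted Lambda endpoint instead of the default /api/chat
  const { messages, input, handleInputChange, handleSubmit } = useChat({ api: "/myAWSAPI" });

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role === "user" ? "User: " : "AI: "}
          {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Say something..." />
      </form>
    </div>
  );
}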
Lambda function

const OpenAI = require('openai')
const { OpenAIStream, StreamingTextResponse } = require('ai');
const prompts = require('./prompts')
const { roleplay_prompt } = prompts

// Create an OpenAI API client (that's edge friendly!)
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY || '',
});


exports.handler = async function(event, context, callback) {

  // Extract the `messages` from the body of the request
  // (API Gateway delivers the body as a JSON string, so it must be parsed first)
  const { messages } = JSON.parse(event.body);
    
  const messageWithSystem = [

    {role: 'system', content: roleplay_prompt},
    ...messages // Add user and assistant messages after the system message
  ]

  console.log(messageWithSystem)

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: messageWithSystem,
  });
  
  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);
  // Respond with the stream
  const chatResponse = new StreamingTextResponse(stream);

  // body's stream is always undefined
  console.log(chatResponse)
  
  return chatResponse
}
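(Side note on why the body shows up undefined: a regular buffered Lambda invocation JSON-serializes whatever the handler returns, and a fetch-style Response whose body is a ReadableStream does not survive that serialization. If streaming is not strictly required, a buffered variant along these lines works behind a plain API Gateway proxy integration; this is my sketch, not the asker's code, and it gives up token-by-token streaming:)

import OpenAI from 'openai';
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY || '' });

// Buffered (non-streaming) variant: wait for the whole completion,
// then return an ordinary API Gateway proxy response.
export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  const { messages } = JSON.parse(event.body || '{}');

  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages, // note: no `stream: true`, we wait for the full completion
  });

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
    body: completion.choices[0].message.content ?? '',
  };
};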

tv6aics1

#1

After doing some digging of my own, I found that there is a fundamental difference in how Next.js handlers work compared to Lambda, at least as far as streaming goes (at least as of January 2024).
This is the simplest guide I found for going from 0 to 1 with a streaming handler on Lambda: https://docs.aws.amazon.com/lambda/latest/dg/response-streaming-tutorial.html
And this is the April 2023 post introducing the feature: https://aws.amazon.com/blogs/compute/introducing-aws-lambda-response-streaming/
In short, there are a few pieces involved, and the main one you are missing is awslambda.streamifyResponse: streaming Lambdas need this bit of magic to transform the handler and pass in a responseStream.
Below is my Lambda conversion of Vercel's chat response handler (similar to https://github.com/vercel/ai/blob/main/examples/next-openai/app/api/chat/route.ts), which seems to work:

import { OpenAIStream } from 'ai';
import OpenAI from 'openai';

import stream from 'stream';
import util from 'util';
const pipeline = util.promisify(stream.pipeline);

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export const chatHandler: awslambda.StreamifyHandler = async (event, responseStream, _context) => {
  console.log(`chat processing event: ${JSON.stringify(event)}`);

  const { messages } = JSON.parse(event.body || '');

  console.log(`chat processing messages: ${JSON.stringify(messages)}`);

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // Convert the response into a friendly text-stream
  const aiStream = OpenAIStream(response, {
    onStart: async () => {
      // This callback is called when the stream starts
      // You can use this to save the prompt to your database
      // await savePromptToDatabase(prompt);
      console.log(`Started stream, latest message: ${JSON.stringify(messages.length > 0 ? messages[messages.length - 1] : '<none>')}`);
    },
    onToken: async (token: string) => {
      console.log(`chat got token: ${token}`);
      // This callback is called for each token in the stream
      // You can use this to debug the stream or save the tokens to your database
      // console.log(token);
    },
    onCompletion: async (completion: string) => {
      // This callback is called when the stream completes
      // You can use this to save the final completion to your database
      // await saveCompletionToDatabase(completion);
      console.log(`Completed stream with completion: ${completion}`);
    },
    onFinal: async (final: string) => {
      console.log(`chat got final: ${final}`);
    },
  });

  // Respond with the stream
  // NOPE! Not in a lambda
  //return new StreamingTextResponse(aiStream);

  // this is how we chain things together in lambda: pipe the AI stream into the response stream
  // @ts-expect-error this seems to be ok, but i'd like to do this safely
  await pipeline(aiStream, responseStream);
};

// see https://github.com/astuyve/lambda-stream for better support of this
export const handler = awslambda.streamifyResponse(chatHandler);

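On the client side, the question's useChat call then just needs to point at wherever this handler is exposed. Since API Gateway doesn't support this streaming mode (see the note at the end of this answer), that usually means a Lambda Function URL; the only change to the question's page.tsx would be the api value (the URL below is a placeholder, not a real endpoint):

// page.tsx – point useChat at the streaming Lambda's Function URL (placeholder URL);
// note the Function URL's CORS config must allow your CloudFront origin
const { messages, input, handleInputChange, handleSubmit, data } = useChat({
  api: 'https://abc123.lambda-url.us-east-1.on.aws/',
});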
I referenced a helper types lib above; in my own source I use the .d.ts below, which at least does the job for those awslambda node types. I'd love to confirm that the Vercel ReadableStream really is just a Readable, because when I asked Copilot it said it doesn't work.

import { APIGatewayProxyEventV2, Context, Handler } from 'aws-lambda';
import { Writable } from 'stream';

declare global {
  namespace awslambda {
    export namespace HttpResponseStream {
      function from(writable: Writable, metadata: any): Writable;
    }

    export type ResponseStream = Writable & {
      setContentType(type: string): void;
    };

    export type StreamifyHandler = (event: APIGatewayProxyEventV2, responseStream: ResponseStream, context: Context) => Promise<any>;

    export function streamifyResponse(handler: StreamifyHandler): Handler<APIGatewayProxyEventV2>;
  }
}
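As an aside, if you need to set a status code or headers before any bytes are written, the AWS tutorial linked above wraps the raw stream with awslambda.HttpResponseStream.from first. A minimal sketch against the declarations above (the header values are illustrative):

export const chatHandler: awslambda.StreamifyHandler = async (_event, responseStream, _context) => {
  // Wrap the raw stream with response metadata before writing any body bytes
  const httpStream = awslambda.HttpResponseStream.from(responseStream, {
    statusCode: 200,
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  });

  httpStream.write('hello ');
  httpStream.write('world');
  httpStream.end();
};

export const handler = awslambda.streamifyResponse(chatHandler);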


I'm sure AWS will improve this at some point, but right now it feels pretty bleeding-edge. If anyone is doing this in production I'd love to hear about it; the lack of API Gateway support is very limiting.
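For reference, the AWS docs linked above expose streaming handlers through a Lambda Function URL rather than API Gateway. If you deploy with CDK, a minimal sketch of that wiring might look like this (the stack/function names and asset path are placeholders, and it assumes an aws-cdk-lib version recent enough to include InvokeMode):

import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Stack } from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class ChatStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const chatFn = new lambda.Function(this, 'ChatFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist'), // placeholder path to the bundled handler
      environment: { OPENAI_API_KEY: process.env.OPENAI_API_KEY ?? '' },
    });

    // Streaming only works through a Function URL, not API Gateway
    chatFn.addFunctionUrl({
      authType: lambda.FunctionUrlAuthType.NONE,
      invokeMode: lambda.InvokeMode.RESPONSE_STREAM,
    });
  }
}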
