DeepSpeed-MII runtime error: "This event loop is already running" when running with FastAPI

bjg7j2ky · posted 6 months ago in Other

Hi team,

I am trying to integrate deepspeed-mii into a FastAPI service and am running into the following error:

{"detail":"This event loop is already running"}
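The message itself is asyncio's standard complaint when code tries to drive an event loop that is already running, which can happen when a blocking client (such as MII's gRPC query path) is called from inside an async endpoint. A minimal reproduction, independent of MII and FastAPI:

```python
import asyncio

async def handler():
    # Inside a running loop (as in an async FastAPI endpoint), a
    # blocking run_until_complete call fails with the same message.
    loop = asyncio.get_running_loop()
    try:
        loop.run_until_complete(asyncio.sleep(0))
    except RuntimeError as e:
        return str(e)

print(asyncio.run(handler()))  # This event loop is already running
```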

Here is my code for reference:

from fastapi import FastAPI, HTTPException, Body
from pydantic import BaseModel
import mii
import logging
from typing import Union, List

logging.basicConfig(
    level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)

app = FastAPI()

class UserParams(BaseModel):
    prompt: str
    model: str
    tensor: int

@app.post("/deploy/")
async def deploy_model(input_data: UserParams = Body(...)):
    deployment_name = input_data.model + "_deployment"
    logging.info(f"Received request to deploy '{deployment_name}'")
    mii_configs = {
        "tensor_parallel": input_data.tensor,
        "enable_restful_api": True,
    }
    logging.info(f"Deploying '{deployment_name}'")
    mii.deploy(task="text-generation",
               model=input_data.model,
               model_path="/app/multi-gpu/cache",
               deployment_name=deployment_name,
               mii_config=mii_configs
               )
    try:
        logging.info(f"Creating generator for deployment '{deployment_name}'")
        generator = mii.mii_query_handle(deployment_name)
        logging.info(f"Deployment '{deployment_name}' successful")
        logging.info(f"Type of generator: {type(generator)}")
        logging.info(f"Input: {input_data.prompt}")
        response = generator.query({'query': input_data.prompt})
        logging.info(f"Text generated for deployment '{deployment_name}'")
        logging.info(f"Response: {response}")
        return response.response
    except Exception as e:
        logging.error(f"Deployment '{deployment_name}' failed")
        raise HTTPException(status_code=500, detail=str(e))

Any help or guidance you can provide to resolve this issue would be greatly appreciated.

pwuypxnk #1

It looks like there may be some strange interaction between the FastAPI server and the gRPC server that MII creates. I think you should reconsider how you are using MII. MII itself creates a gRPC server that you can send queries to, so nesting multiple server processes with FastAPI + gRPC doesn't make much sense. I see two potential solutions:

  1. Use the RESTful API that MII provides, outside of FastAPI. This still lets you send and receive queries via curl, and it replaces FastAPI.
  2. Use MII's non-persistent mode. Since FastAPI already provides a persistent server process, you can use MII's non-persistent deployment mode to avoid the strange interaction between FastAPI and gRPC. Here is a working example:
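As a sketch of option 1: the route and payload shape below are assumptions based on MII's legacy RESTful gateway (enabled with `"enable_restful_api": True`), so verify them against your MII version.

```python
import json

# Hypothetical helper: builds the URL and JSON body for MII's RESTful API.
# The /mii/<deployment_name> route, default port, and the
# {"request": {"query": [...]}} payload are assumptions based on MII's
# legacy RESTful gateway; check them against your installed version.
def build_mii_rest_request(deployment_name: str, prompt: str, port: int = 28080):
    url = f"http://localhost:{port}/mii/{deployment_name}"
    payload = json.dumps({"request": {"query": [prompt]}})
    return url, payload

url, payload = build_mii_rest_request("gpt2_deployment", "DeepSpeed is")
print(url)      # http://localhost:28080/mii/gpt2_deployment
print(payload)
```

The resulting URL and body can then be sent with curl or any HTTP client, with no FastAPI process involved.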

After starting the server with `uvicorn main:app`, I can send requests and see the responses:

from fastapi import FastAPI, HTTPException, Body
from pydantic import BaseModel
import mii
import logging
from typing import Union, List

app = FastAPI()

class UserParams(BaseModel):
    prompt: str
    model: str
    tensor: int

@app.post("/deploy/")
async def deploy_model(input_data: UserParams = Body(...)):
    deployment_name = input_data.model + "_deployment"
    mii_configs = {
        "tensor_parallel": input_data.tensor,
    }
    mii.deploy(
        task="text-generation",
        model=input_data.model,
        deployment_name=deployment_name,
        mii_config=mii_configs,
        deployment_type=mii.DeploymentType.NON_PERSISTENT,
    )
    try:
        generator = mii.mii_query_handle(deployment_name)
        response = generator.query({"query": input_data.prompt})
        return response
    except Exception as e:
        logging.error(f"Deployment '{deployment_name}' failed")
        raise HTTPException(status_code=500, detail=str(e))
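With the server above running under `uvicorn main:app`, the endpoint takes a plain JSON POST matching the UserParams model; the model name here is only an example.

```python
import json

# Request body for the /deploy/ endpoint, matching the UserParams
# fields above; "gpt2" is just an example model name.
body = {"prompt": "DeepSpeed is", "model": "gpt2", "tensor": 1}
print(json.dumps(body))
# Sent with e.g.:
#   curl -X POST http://localhost:8000/deploy/ \
#        -H "Content-Type: application/json" \
#        -d '{"prompt": "DeepSpeed is", "model": "gpt2", "tensor": 1}'
```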

6rvt4ljy #2

Thanks for the reply.
