PyTorch Triton Inference Server: deploying a model with BxN-shaped input via config.pbtxt

Asked by 1yjd4xko on 2022-11-09

I have installed the Triton Inference Server with Docker:

docker run --gpus=1 --rm -p8000:8000 -p8001:8001 -p8002:8002 -v /mnt/data/nabil/triton_server/models:/models nvcr.io/nvidia/tritonserver:22.08-py3 tritonserver --model-repository=/models
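For the `--model-repository=/models` flag to work, the directory mounted at `/models` has to follow Triton's standard repository layout: one directory per model containing the `config.pbtxt` and numbered version subdirectories, each holding the TorchScript file named `model.pt`. A sketch of that layout, using the model and file names that appear later in this post (the layout convention itself is Triton's; the names are from this question):

```
/mnt/data/nabil/triton_server/models/
└── ecapatdnn_bangasianeng/
    ├── config.pbtxt
    └── 1/
        └── model.pt   # the traced TorchScript file, renamed to model.pt
```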

I also created a TorchScript model from my PyTorch model using:

from model_ecapatdnn import ECAPAModel
import soundfile as sf
import torch

model_1 = ECAPAModel.ECAPAModel(lr = 0.001, lr_decay = 0.97, C = 1024, n_class = 18505, m = 0.2, s = 30, test_step = 3, gpu = -1)
model_1.load_parameters("/ecapatdnn/model.pt")

model = model_1.speaker_encoder

# Switch the model to eval mode
model.eval()

# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 48000)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)

# Save the TorchScript model
traced_script_module.save("traced_ecapatdnn_bangasianeng.pt")

Now, as you can see, my model takes a tensor of shape (BxN), where B is the batch size.
How do I write the config.pbtxt for this model?

ogq8wdun


So, I found the answer. You only need to specify the shape in the config file. Here is the config that worked for me:

name: "ecapatdnn_bangasianeng"
platform: "pytorch_libtorch"
max_batch_size: 1

input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ -1 ]
  }
]

output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 512 ]
  }
]
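Note that with `max_batch_size: 1`, Triton prepends the batch dimension itself, so `dims: [ -1 ]` describes a full tensor shape of `[batch, N]`. A minimal sketch of the KServe v2 HTTP request body a client would POST to this model's `infer` endpoint (the 48000-sample length matches the tracing example above; zeros stand in for real audio samples):

```python
import json

# Build the KServe v2 inference request body that Triton's HTTP
# endpoint expects. With max_batch_size: 1 and dims: [-1], the full
# input shape is [batch, N]; here batch=1 and N=48000 samples.
num_samples = 48000
request_body = {
    "inputs": [
        {
            "name": "INPUT__0",
            "datatype": "FP32",
            "shape": [1, num_samples],
            # A real client would put the audio samples here;
            # zeros are placeholders only.
            "data": [0.0] * num_samples,
        }
    ],
    "outputs": [{"name": "OUTPUT__0"}],
}

payload = json.dumps(request_body)
# POST this payload to
#   http://localhost:8000/v2/models/ecapatdnn_bangasianeng/infer
# e.g. with urllib.request or the tritonclient package.
print(json.loads(payload)["inputs"][0]["shape"])  # -> [1, 48000]
```

The response carries "OUTPUT__0" with shape `[1, 512]`, matching the 512-dim embedding declared in the config.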
