Paddle's tensor indexing/slice assignment is extremely slow: about 1000× slower than numpy and 50× slower than torch

i2loujxw · posted 2022-10-20 · in: Other
Follow (0) | Answers (1) | Views (162)

Reproduction code

import numpy as np
import paddle
import torch
import time

def expand_numpy(encodings: paddle.Tensor, durations: paddle.Tensor) -> paddle.Tensor:
    """
encodings: (B, T, C)
durations: (B, T)
"""    
    batch_size, t_enc = durations.shape
    durations = durations.numpy()
    slens = np.sum(durations, -1)
    t_dec = np.max(slens)
    M = np.zeros([batch_size, t_dec, t_enc])
    start = time.time()
    for i in range(batch_size):
        k = 0
        for j in range(t_enc):
            d = durations[i, j] 
            M[i, k:k + d, j] = 1
            k += d
    print("cost time of numpy:", time.time() - start)
    M = paddle.to_tensor(M, dtype=encodings.dtype)
    encodings = paddle.matmul(M, encodings)
    return encodings

def expand(encodings: paddle.Tensor, durations: paddle.Tensor) -> paddle.Tensor:
    """
encodings: (B, T, C)
durations: (B, T)
"""
    batch_size, t_enc = paddle.shape(durations)
    slens = paddle.sum(durations, -1)
    t_dec = paddle.max(slens)
    M = paddle.zeros([batch_size, t_dec, t_enc])
    start = time.time()
    for i in range(batch_size):
        k = 0
        for j in range(t_enc):
            d = durations[i, j]
            if d >= 1:
                M[i, k:k + d, j] = 1
            k += d
    print("cost time of paddle:", time.time() - start)
    encodings = paddle.matmul(M, encodings)
    return encodings

def expand_torch(encodings, durations):
    """
encodings: (B, T, C)
durations: (B, T)
"""
    batch_size, t_enc = durations.shape
    slens = torch.sum(durations, -1)
    t_dec = torch.max(slens)
    M = torch.zeros([batch_size, t_dec, t_enc])
    start = time.time()
    for i in range(batch_size):
        k = 0
        for j in range(t_enc):
            d = durations[i, j]
            if d >= 1:
                M[i, k:k + d, j] = 1
            k += d
    print("cost time of torch:", time.time() - start)
    encodings = torch.matmul(M, encodings)
    return encodings

B, T, C = 8, 50, 80
max_d = 20
encodings_numpy = np.random.rand(B, T, C)
durations_numpy = np.random.randint(1, max_d, size=(B, T))

encodings = paddle.to_tensor(encodings_numpy, dtype='float32')
durations = paddle.to_tensor(durations_numpy, dtype='int64')

encodings_torch = torch.tensor(encodings_numpy, dtype=torch.float32)
durations_torch = torch.tensor(durations_numpy, dtype=torch.int64)

expand_numpy(encodings, durations)
print("-----------------------------")
expand(encodings, durations)
print("-----------------------------")
expand_torch(encodings_torch, durations_torch)

Results

cost time of numpy: 0.0004203319549560547
-----------------------------
cost time of paddle: 0.4768214225769043
-----------------------------
cost time of torch: 0.010885000228881836

After I switched these operations from numpy arrays to paddle.Tensor, my model's ips (iterations per second) dropped to half of what it was.
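Until the slow indexed-assignment path is improved, one workaround is to avoid per-element `__setitem__` entirely and build the alignment matrix `M` with broadcast comparisons instead of nested Python loops. Below is a minimal numpy sketch of that idea (the function name `expand_vectorized` is my own; the same `cumsum` + comparison pattern can also be expressed with `paddle.cumsum` and tensor broadcasting):

```python
import numpy as np

def expand_vectorized(encodings: np.ndarray, durations: np.ndarray) -> np.ndarray:
    """Repeat each encoding frame durations[i, j] times via one matmul.

    encodings: (B, T, C) float array
    durations: (B, T) int array
    """
    ends = np.cumsum(durations, axis=-1)      # (B, T): end frame of each token
    starts = ends - durations                 # (B, T): start frame of each token
    t_dec = int(ends[:, -1].max())            # length of the longest expanded sequence
    frames = np.arange(t_dec)[None, :, None]  # (1, t_dec, 1) output frame indices
    # Output frame t is assigned to input token j iff starts[j] <= t < ends[j];
    # broadcasting builds the whole (B, t_dec, T) alignment matrix in one shot.
    M = (frames >= starts[:, None, :]) & (frames < ends[:, None, :])
    return M.astype(encodings.dtype) @ encodings
```

Because the double loop is replaced by a handful of whole-array operations, the cost no longer scales with B × T individual tensor assignments, which is exactly the operation that is slow here.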

qlvxas9a #1

Hello, we have received your issue and will arrange for a technician to answer it as soon as possible; please be patient. Please double-check that you have provided a clear problem description, reproduction code, environment & version, and error messages. You can also look for answers in the official API docs, the FAQ, historical Issues, and the AI community. Have a nice day!

