Python: decomposing a specific tensor

5hcedyr0 · published 2022-11-21 in Python

I want to decompose a three-dimensional tensor using the SVD.
I am not entirely sure whether, and how, the following decomposition can be implemented.

From this tutorial I already know how to split the tensor horizontally: tensors.org Figure 2.2b

import numpy as np
from numpy import linalg as LA

d = 10; A = np.random.rand(d,d,d)
Am = A.reshape(d**2,d)
Um,Sm,Vh = LA.svd(Am,full_matrices=False)
U = Um.reshape(d,d,d); S = np.diag(Sm)
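For completeness, the split above can be verified by contracting the three factors back together: folding `Um` into a third-order tensor and multiplying `S` and `Vh` along its last index reproduces the original tensor exactly (plain NumPy, no extra libraries):

```python
import numpy as np
from numpy import linalg as LA

d = 10
A = np.random.rand(d, d, d)

# Unfold the tensor into a (d*d, d) matrix and take a thin SVD
Am = A.reshape(d**2, d)
Um, Sm, Vh = LA.svd(Am, full_matrices=False)

# Fold the left factor back into a (d, d, d) tensor
U = Um.reshape(d, d, d)
S = np.diag(Sm)

# Reconstruct: contract U, S, and Vh along the last index
A_rec = np.einsum('ijk,kl,lm->ijm', U, S, Vh)
print(np.allclose(A, A_rec))  # True
```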

gojuced71#

Matrix methods extend naturally to higher orders. For example, the SVD generalizes to tensors as the Tucker decomposition, sometimes called the higher-order SVD (HOSVD).
We maintain TensorLy, a Python library for tensor methods, which makes this easy. In your case you want a partial Tucker, since you want to leave one of the modes uncompressed.
Let's import the necessary parts:

import tensorly as tl
from tensorly import random
from tensorly.decomposition import partial_tucker

For testing, let's create a third-order tensor of size (10, 10, 10):

size = 10
order = 3
shape = (size, )*order
tensor = random.random_tensor(shape)

Now you can decompose it with a tensor decomposition. In your case you want to leave one of the modes untouched, so you have only two factors (U and V) and a core tensor (S):

core, factors = partial_tucker(tensor, rank=size, modes=[0, 2])

You can reconstruct an approximation of the original tensor by contracting the core with the factors through a series of n-mode products:

from tensorly import tenalg
rec = tenalg.multi_mode_dot(core, factors, modes=[0, 2])
rec_error = tl.norm(rec - tensor)/tl.norm(tensor)
print(f'Relative reconstruction error: {rec_error}')

In my case I get:

Relative reconstruction error: 9.66027176805661e-16
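The same "keep mode 1 uncompressed" idea can also be sketched in HOSVD style with plain NumPy: take the left singular vectors of the mode-0 and mode-2 unfoldings as the two factors and contract them against the tensor to form the core. This is a minimal sketch of the technique, not TensorLy's implementation; at full rank the reconstruction is exact:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((10, 10, 10))

# Factor for mode 0: left singular vectors of the mode-0 unfolding
U0, _, _ = np.linalg.svd(T.reshape(10, -1), full_matrices=False)
# Factor for mode 2: left singular vectors of the mode-2 unfolding
U2, _, _ = np.linalg.svd(np.moveaxis(T, 2, 0).reshape(10, -1), full_matrices=False)

# Core tensor: contract T with the factors on modes 0 and 2
core = np.einsum('abc,ap,cq->pbq', T, U0, U2)

# Reconstruct by applying the factors again (exact at full rank,
# since U0 and U2 are then orthogonal matrices)
rec = np.einsum('pbq,ap,cq->abc', core, U0, U2)
print(np.allclose(T, rec))  # True
```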

hzbexzde2#

You can also use the "tensorlearn" package in Python, for example with the tensor-train (TT) SVD algorithm: https://github.com/rmsolgi/TensorLearn/tree/main/Tensor-Train%20Decomposition

import numpy as np
import tensorlearn as tl

# Generate an arbitrary array
tensor = np.arange(0, 1000)

# Reshape it into a higher-dimensional (3D) tensor
tensor = np.reshape(tensor, (10, 20, 5))

# Decompose the tensor into its factors; epsilon is the error bound
epsilon = 0.05
tt_factors = tl.auto_rank_tt(tensor, epsilon)

# tt_factors is a list of three arrays: the TT cores

# Rebuild (estimate) the tensor from the factors as tensor_hat
tensor_hat = tl.tt_to_tensor(tt_factors)

# Relative error in the Frobenius norm
error_tensor = tensor - tensor_hat
error = tl.tensor_frobenius_norm(error_tensor) / tl.tensor_frobenius_norm(tensor)
print('error (%) = ', error * 100)  # which is less than epsilon

# One use of tensor decomposition is data compression,
# so let's calculate the compression ratio
data_compression_ratio = tl.tt_compression_ratio(tt_factors)

# Fraction of storage saved
data_saving = 1 - (1 / data_compression_ratio)
print('data_saving (%): ', data_saving * 100)
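The compression ratio reported above is simply the element count of the full tensor divided by the total element count of the TT cores. A minimal NumPy sketch with hypothetical TT ranks (the actual ranks chosen by `auto_rank_tt` depend on epsilon, so the numbers below are illustrative only):

```python
import numpy as np

shape = (10, 20, 5)    # full tensor shape from the example above
ranks = (1, 4, 3, 1)   # hypothetical TT ranks (r0 = r3 = 1 by definition)

# TT core k has shape (r_k, n_k, r_{k+1})
cores = [np.zeros((ranks[k], shape[k], ranks[k + 1])) for k in range(3)]

full_size = np.prod(shape)            # 1000 elements in the dense tensor
tt_size = sum(c.size for c in cores)  # 1*10*4 + 4*20*3 + 3*5*1 = 295

compression_ratio = full_size / tt_size
data_saving = 1 - 1 / compression_ratio  # fraction of storage saved

print(f'compression ratio: {compression_ratio:.2f}, data saving: {data_saving:.1%}')
```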
