The following code shows the problem I am facing:
import random
import torch

def fakeDataGenerator(chanNum=31):
    # This function generates the data I want to recover and shows the character of
    # the data I am working with: it's continuous and differentiable.
    # Sample breakpoints from 1..chanNum-1 so the loop below always makes progress.
    peaks = random.sample(range(1, chanNum), random.choice(range(3, 10)))
    peaks.append(chanNum)
    peaks.sort()
    out = [random.choice(range(-5, 5))]
    delta = 1
    while len(out) < chanNum:
        if len(out) < peaks[0]:
            out.append(out[-1] + delta)
        elif len(out) == peaks[0]:
            delta *= -1
            peaks.pop(0)
    return out
originalData = torch.tensor(fakeDataGenerator(31)).reshape(1, 31).float()
encoder = torch.rand((31, 9)).float()        # the encoder is what scrambles (and compresses) the data
code = torch.matmul(originalData, encoder)   # the code: the data after being scrambled by the encoder
decoder = torch.pinverse(encoder)            # we can use the encoder matrix to decode the data;
# for example, here I apply pinverse to recover the data, but...
decoded = torch.matmul(code, decoder)
print(decoded - originalData)                # ...the result is no good.
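For reference, a quick diagnostic I added (a sketch, not part of the original script): encoder maps 31 dimensions down to 9, so decoded can only be the least-squares projection of originalData onto a 9-dimensional subspace, which is why the residual printed above never vanishes.

# Assumed diagnostic, continuing from the variables above:
print(torch.linalg.matrix_rank(encoder))       # at most 9, far below 31, so information is lost
projector = encoder @ torch.pinverse(encoder)  # (31, 31) orthogonal projector onto encoder's column space
print(torch.allclose(decoded, originalData @ projector, atol=1e-4))  # decoded is exactly this projection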
Can I make better use of the properties of the original data and of the encoder to recover the original data more accurately? The environment this program runs in does not allow complex models such as neural networks.
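To illustrate what "using the properties of the data" could mean, here is a minimal sketch under my own assumptions, not a tested solution: since the signal is piecewise linear, its second differences are mostly zero, so a curvature-penalized least-squares decode (one linear solve, no neural network) might recover the 22 lost dimensions better than the plain pseudoinverse. The function name smooth_decode and the weight lam below are hypothetical, introduced only for illustration.

def smooth_decode(code, encoder, lam=1e-2):
    # Hypothetical sketch: solve  min_x ||x @ encoder - code||^2 + lam * ||D @ x.T||^2,
    # where D is the second-difference operator, i.e. penalize curvature of the signal.
    n = encoder.shape[0]                    # number of channels (31 here)
    D = torch.zeros(n - 2, n)
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    # Normal equations of the penalized least-squares problem (tiny ridge for numerical safety).
    A = encoder @ encoder.T + lam * (D.T @ D) + 1e-8 * torch.eye(n)
    b = encoder @ code.T
    return torch.linalg.solve(A, b).T       # shape (1, n)

recovered = smooth_decode(code, encoder)
print((recovered - originalData).abs().max())   # hopefully smaller than with the plain pinverse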