Incorrect Conv1D output shape in a Keras autoencoder model when running autoencoder.fit.
I am trying to use a Keras autoencoder model to compress and decompress my time-series data. But when I changed the layers to Conv1D, the output shape became incorrect.
I have some time-series data of shape (4000, 689), i.e. 4000 samples with 689 features each. I want to compress the data with Conv1D, but the output shape (?, 688, 1) after the last UpSampling1D and Conv1D layers does not equal the input shape (?, 689, 1). How should I set the parameters of these layers?
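For reference, the mismatch can be traced by hand. A minimal sketch of the length arithmetic, assuming Keras pooling computes ceil(len/2) with padding='same' and floor(len/2) without it, while UpSampling1D always doubles:

```python
# Trace the sequence length through the encoder/decoder by hand.
import math

length = 689
length = math.ceil(length / 2)    # MaxPooling1D(2, padding='same') -> 345
length = math.ceil(length / 2)    # MaxPooling1D(2, padding='same') -> 173
length = math.floor(length / 2)   # MaxPooling1D(2), valid padding  -> 86

for _ in range(3):                # three UpSampling1D(2) layers
    length *= 2                   # 86 -> 172 -> 344 -> 688

print(length)  # 688, one step short of the 689-step input
```

Because 689 is odd, the floor in the pooling steps discards length that the upsampling steps cannot recover.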
x_train = data[0:4000].values
x_test = data[4000:].values
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
x_train shape: (4000, 689)
x_test shape: (202, 689)
I reshaped x_train and x_test to 3 dimensions as follows.
x_tr = x_train.reshape(4000,689,1)
x_te = x_test.reshape(202,689,1)
print('x_tr shape:', x_tr.shape)
print('x_te shape:', x_te.shape)
x_tr shape: (4000, 689, 1)
x_te shape: (202, 689, 1)
from keras.layers import Input, Conv1D, MaxPooling1D, UpSampling1D
from keras.models import Model

input_img = Input(shape=(689,1))
x = Conv1D(16, 3, activation='relu', padding='same')(input_img)
print(x)
x = MaxPooling1D(2, padding='same')(x)
print(x)
x = Conv1D(8, 3, activation='relu', padding='same')(x)
print(x)
x = MaxPooling1D(2, padding='same')(x)
print(x)
x = Conv1D(8, 3, activation='relu', padding='same')(x)
print(x)
encoded = MaxPooling1D(2)(x)
print(encoded)
print('--------------')
x = Conv1D(8, 3, activation='relu', padding='same')(encoded)
print(x)
x = UpSampling1D(2)(x)
print(x)
x = Conv1D(8, 3, activation='relu', padding='same')(x)
print(x)
x = UpSampling1D(2)(x)
print(x)
x = Conv1D(16, 3, activation='relu', padding='same')(x)
print(x)
x = UpSampling1D(2)(x)
print(x)
decoded = Conv1D(1, 3, activation='sigmoid', padding='same')(x)
print(decoded)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
When I imported these layers and ran the cells above in Jupyter, everything looked fine. Or so I thought. But when I ran autoencoder.fit, the following code raised an error.
autoencoder.fit(x_tr, x_tr, epochs=50, batch_size=128, shuffle=True, validation_data=(x_te, x_te))
So I added a print after each layer. The per-layer print results are shown below.
Tensor("conv1d_166/Relu:0", shape=(?, 689, 16), dtype=float32)
Tensor("max_pooling1d_71/Squeeze:0", shape=(?, 345, 16), dtype=float32)
Tensor("conv1d_167/Relu:0", shape=(?, 345, 8), dtype=float32)
Tensor("max_pooling1d_72/Squeeze:0", shape=(?, 173, 8), dtype=float32)
Tensor("conv1d_168/Relu:0", shape=(?, 173, 8), dtype=float32)
Tensor("max_pooling1d_73/Squeeze:0", shape=(?, 86, 8), dtype=float32)
Tensor("conv1d_169/Relu:0", shape=(?, 86, 8), dtype=float32)
Tensor("up_sampling1d_67/concat:0", shape=(?, 172, 8), dtype=float32)
Tensor("conv1d_170/Relu:0", shape=(?, 172, 8), dtype=float32)
Tensor("up_sampling1d_68/concat:0", shape=(?, 344, 8), dtype=float32)
Tensor("conv1d_171/Relu:0", shape=(?, 344, 16), dtype=float32)
Tensor("up_sampling1d_69/concat:0", shape=(?, 688, 16), dtype=float32)
Tensor("conv1d_172/Sigmoid:0", shape=(?, 688, 1), dtype=float32)
It raised the following ValueError:
ValueError Traceback (most recent call last)
<ipython-input-74-56836006a800> in <module>
3 batch_size=128,
4 shuffle=True,
----> 5 validation_data=(x_te, x_te)
6 )
~/anaconda3/envs/keras/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
950 sample_weight=sample_weight,
951 class_weight=class_weight,
--> 952 batch_size=batch_size)
953 # Prepare validation data.
954 do_validation = False
~/anaconda3/envs/keras/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
787 feed_output_shapes,
788 check_batch_axis=False, # Don't enforce the batch size.
--> 789 exception_prefix='target')
790
791 # Generate sample-wise weight values given the `sample_weight` and
~/anaconda3/envs/keras/lib/python3.6/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
136 ': expected ' + names[i] + ' to have shape ' +
137 str(shape) + ' but got array with shape ' +
--> 138 str(data_shape))
139 return data
140
ValueError: Error when checking target: expected conv1d_172 to have shape (688, 1) but got array with shape (689, 1)
Is it the floor function that makes this happen? How can I fix the error so that autoencoder.fit runs correctly? Thanks in advance.
1 Answer
When using convolutional layers, you need to work out the output size from the input size, kernel size, and the other parameters. The easiest way is to feed a data sample through the network and look at the final vector size after the last convolutional layer. You can then define the remaining layers based on that size.
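As a minimal sketch of that probing approach in Keras (the question's framework; the encoder slice, variable names, and tf.keras imports here are my own illustration):

```python
# Probe the shape after the first conv/pool stage by feeding a dummy sample.
import numpy as np
from tensorflow.keras.layers import Input, Conv1D, MaxPooling1D
from tensorflow.keras.models import Model

inp = Input(shape=(689, 1))
x = Conv1D(16, 3, activation='relu', padding='same')(inp)
x = MaxPooling1D(2, padding='same')(x)
encoder = Model(inp, x)

dummy = np.zeros((1, 689, 1), dtype='float32')
print(encoder.predict(dummy).shape)  # (1, 345, 16)
```

The same probe can be repeated after each stage to see exactly where the length stops matching the input.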
Here is an example from one of my PyTorch projects:
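In Keras terms, one concrete way to make the shapes line up is to pad the 689-step input to a multiple of 8 (so three pool/upsample rounds round-trip cleanly) and crop back at the end. A sketch of this idea; the ZeroPadding1D/Cropping1D usage and the 696 target length are my own choices, not part of the original answer:

```python
# Pad 689 -> 696 (divisible by 2^3), run the autoencoder, crop 696 -> 689.
from tensorflow.keras.layers import (Input, Conv1D, MaxPooling1D,
                                     UpSampling1D, ZeroPadding1D, Cropping1D)
from tensorflow.keras.models import Model

inp = Input(shape=(689, 1))
x = ZeroPadding1D(padding=(0, 7))(inp)                     # 689 -> 696
x = Conv1D(16, 3, activation='relu', padding='same')(x)
x = MaxPooling1D(2, padding='same')(x)                     # 696 -> 348
x = Conv1D(8, 3, activation='relu', padding='same')(x)
x = MaxPooling1D(2, padding='same')(x)                     # 348 -> 174
x = Conv1D(8, 3, activation='relu', padding='same')(x)
x = MaxPooling1D(2, padding='same')(x)                     # 174 -> 87
x = Conv1D(8, 3, activation='relu', padding='same')(x)
x = UpSampling1D(2)(x)                                     # 87 -> 174
x = Conv1D(8, 3, activation='relu', padding='same')(x)
x = UpSampling1D(2)(x)                                     # 174 -> 348
x = Conv1D(16, 3, activation='relu', padding='same')(x)
x = UpSampling1D(2)(x)                                     # 348 -> 696
x = Conv1D(1, 3, activation='sigmoid', padding='same')(x)
out = Cropping1D(cropping=(0, 7))(x)                       # 696 -> 689

autoencoder = Model(inp, out)
autoencoder.compile(optimizer='adam', loss='mse')
print(autoencoder.output_shape)  # (None, 689, 1)
```

With the output restored to (None, 689, 1), autoencoder.fit can be called on the original (4000, 689, 1) arrays without the target-shape error.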