I am trying to run a transformer-based video prediction model from this repository: https://github.com/iamrakesh28/Video-Prediction
When I train the model:
import tensorflow as tf
# generate_movies and VideoPrediction are defined in the linked repository (transformer_video/)
shifted_movies = tf.convert_to_tensor(generate_movies(n_samples=1200), dtype=tf.float32)
print(shifted_movies.shape)

# first 10 frames as input, last 10 frames as target
X = shifted_movies[:, :10, :, :, :]
Y = shifted_movies[:, 10:, :, :, :]

# define the model
model = VideoPrediction(
    num_layers=3, d_model=64, num_heads=16, dff=128,
    filter_size=(3, 3), image_shape=(40, 40), pe_input=10,
    pe_target=20, out_channel=1, loss_function='bin_cross'
)

# train(inp, tar, inp_val, tar_val, epochs, batch_size)
model.train(X[:1000, :5], X[:1000, 5:], None, None, 1, 8)
I get this error:
UnimplementedError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_7704/3895242283.py in <module>
----> 1 model.train(X[:1000, :5], X[:1000, 5:], None, None, 1, 8)
~\OneDrive\LBL\all_code3\Video-Prediction-master\Video-Prediction-master\transformer_video\video_prediction.py in train(self, inp, tar, inp_val, tar_val, epochs, batch_size, epoch_print)
50 dec_inp = tar[index:index + batch_size, :, :, :]
51
---> 52 batch_loss = self.train_step(enc_inp, dec_inp)
53 total_loss += batch_loss
54
...
UnimplementedError: Exception encountered when calling layer "conv2d" (type Conv2D).
DNN library is not found. [Op:Conv2D]
Call arguments received:
• inputs=tf.Tensor(shape=(8, 5, 40, 40, 1), dtype=float32)
By the way, the relevant packages in my environment are:
tensorflow 2.8.0
tensorflow-io-gcs-filesystem 0.24.0
tensorflow-probability 0.16.0
cudnn 6.0
cudatoolkit 11.3.1
The shape of the input is (1200, 20, 40, 40, 1).
1 Answer
According to the tested build configurations, TensorFlow 2.8.0 is compatible with cuDNN 8.1 and cudatoolkit 11.2, not the cuDNN 6.0 currently in your environment. Please install the compatible versions, for example with the command below, and try again. Thank you!
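If you manage CUDA through conda, a command along these lines should pull in the tested versions (a minimal sketch assuming the conda-forge channel; exact build numbers may differ on your channel):

# install the CUDA/cuDNN versions tested against TensorFlow 2.8
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1

# quick check that TensorFlow can now see the GPU (and load the DNN library)
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

If the check prints an empty list or the Conv2D error persists, make sure the environment with the new cudatoolkit/cudnn is the one the notebook kernel is running in.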