Keras: why does a model using a custom layer not work properly?

Asked by xlpyo6sf on 2022-11-13

I am writing a custom layer to use in my model. The core part is the `call` method:

class Custom_Layer(Layer):
    # ... some code ...

    def call(self, inputs, **kwargs):
        # `mul` and `matmul` presumably refer to tf.math.multiply and tf.linalg.matmul
        kernel = mul(self.base, self.diag_start - self.diag_end)
        outputs = matmul(a=inputs, b=kernel)

        if self.use_bias:
            outputs = tf.nn.bias_add(outputs, self.bias)

        if self.activation is not None:
            outputs = self.activation(outputs)

        return outputs

    # ... some code ...

It is used in a simple model:

inputs = tf.keras.layers.Input(shape=(784,),dtype='float32') 
layer1 = Custom_Layer(2000, **Custom_layer_config, activation='tanh')(inputs)
layer2 = Custom_Layer(200, **Custom_layer_config, activation='tanh')(layer1)
output_lay = Custom_Layer(10, **Custom_layer_config, activation='softmax')(layer2)

model = tf.keras.models.Model(inputs=inputs, outputs=output_lay)

opt = tf.keras.optimizers.Adamax(learning_rate=0.02)
model.compile(optimizer=opt,
     loss='sparse_categorical_crossentropy',
     metrics=['accuracy'])
model.summary()

It should print the following:

Model: "functional_13"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_8 (InputLayer)         [(None, 784)]             0         
_________________________________________________________________
CustomLayer_18 (Custom_Layer)       (None, 2000)              1570784   
_________________________________________________________________
CustomLayer_19 (Custom_Layer)       (None, 200)               402200    
_________________________________________________________________
CustomLayer_20 (Custom_Layer)       (None, 10)                2210      
=================================================================
Total params: 1,975,194
Trainable params: 5,194
Non-trainable params: 1,970,000
_________________________________________________________________

but it actually prints this:

Model: "model_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_2 (InputLayer)        [(None, 784)]             0         
                                                                 
 tf.linalg.matmul_3 (TFOpLam  (None, 2000)             0         
 bda)                                                            
                                                                 
 tf.math.tanh_2 (TFOpLambda)  (None, 2000)             0         
                                                                 
 tf.linalg.matmul_4 (TFOpLam  (None, 200)              0         
 bda)                                                            
                                                                 
 tf.math.tanh_3 (TFOpLambda)  (None, 200)              0         
                                                                 
 tf.linalg.matmul_5 (TFOpLam  (None, 10)               0         
 bda)                                                            
                                                                 
 tf.compat.v1.nn.softmax_1 (  (None, 10)               0         
 TFOpLambda)                                                     
                                                                 
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0

The first summary is the one I got from the author's repository; the second is what I get when I run the same code without changing anything.
The code is not complicated, but it is strange that there are no parameters at all. My question is: what is going wrong here?
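
For context, a minimal sketch (not the asker's code, just an assumed reproduction) of what the second summary indicates: raw TensorFlow ops applied to a Keras symbolic tensor are wrapped as TFOpLambda layers, which carry no tracked weights and therefore report 0 parameters.

import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(784,), dtype='float32')
w = tf.random.normal((784, 10))     # a plain tensor, not a tracked tf.Variable
outputs = tf.matmul(inputs, w)      # raw op on a symbolic tensor -> wrapped as TFOpLambda
model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
model.summary()                     # shows tf.linalg.matmul (TFOpLambda) with 0 params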

dauxcl2d 1#

Try building it as a subclassed layer, as in the example below.
Example: a custom LSTM class

import tensorflow as tf

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Class / Definition
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
class MyLSTMLayer( tf.keras.layers.LSTM ):
    def __init__(self, units, return_sequences, return_state):
        # Hard-codes return_sequences=True and return_state=False for the parent class
        super(MyLSTMLayer, self).__init__( units, return_sequences=True, return_state=False )
        self.num_units = units

    def build(self, input_shape):
        # Weights created with add_weight are tracked by Keras and counted in model.summary()
        self.kernel = self.add_weight("kernel",
                                      shape=[int(input_shape[-1]), self.num_units])

    def call(self, inputs):
        # A simple matmul stands in for the LSTM computation in this example
        return tf.matmul(inputs, self.kernel)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Variables
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
start = 3
limit = 12
delta = 3
sample = tf.range( start, limit, delta )
sample = tf.cast( sample, dtype=tf.float32 )
sample = tf.constant( sample, shape=( 1, 1, 3 ) )
layer = MyLSTMLayer( 3, True, False )

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1, 3)),
    layer,
])

model.summary()

print( sample )
print( model.predict(sample) )

Output:

Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 my_lstm_layer (MyLSTMLayer)  (None, 1, 3)             9

=================================================================
Total params: 9
Trainable params: 9
Non-trainable params: 0
_________________________________________________________________

tf.Tensor([[[3. 6. 9.]]], shape=(1, 1, 3), dtype=float32)
1/1 [==============================] - 1s 575ms/step
[[[-2.8894916 -2.146874  13.688236 ]]]
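
Following the same pattern, here is a rough sketch of how the asker's layer could register its tensors with add_weight in build() so that they are tracked and counted by model.summary(). The attribute names (base, diag_start, diag_end, bias) are taken from the call method above, but their shapes and initializers are assumptions and will not reproduce the exact parameter counts from the author's repository.

import tensorflow as tf
from tensorflow.keras.layers import Layer

class Custom_Layer(Layer):
    def __init__(self, units, activation=None, use_bias=True, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.activation = tf.keras.activations.get(activation)
        self.use_bias = use_bias

    def build(self, input_shape):
        # Weights created with add_weight are tracked by Keras and appear in model.summary()
        in_dim = int(input_shape[-1])
        self.base = self.add_weight("base", shape=(in_dim, self.units))
        self.diag_start = self.add_weight("diag_start", shape=(in_dim, self.units))
        self.diag_end = self.add_weight("diag_end", shape=(in_dim, self.units))
        if self.use_bias:
            self.bias = self.add_weight("bias", shape=(self.units,), initializer="zeros")

    def call(self, inputs, **kwargs):
        kernel = tf.multiply(self.base, self.diag_start - self.diag_end)
        outputs = tf.matmul(inputs, kernel)
        if self.use_bias:
            outputs = tf.nn.bias_add(outputs, self.bias)
        if self.activation is not None:
            outputs = self.activation(outputs)
        return outputs

With the weights created this way, model.summary() lists the Custom_Layer entries with non-zero parameter counts instead of TFOpLambda wrappers.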
