Keras TypeError: model() got an unexpected keyword argument 'batch_size'

qcbq4gxm asked 12 months ago, in: Other

I built the input for a CNN, but I get this error:
TypeError: model() got an unexpected keyword argument 'batch_size'
Here is the whole function:

# Imports assumed for this snippet (the original may use standalone `keras`
# rather than `tensorflow.keras`):
from tensorflow.keras.layers import Input, Conv2D, Lambda, LSTM, Dense
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K
# SelfAttention is a custom layer defined elsewhere in this project

def model(x_train, num_labels, LSTM_units, num_conv_filters, batch_size, F, D):
    """
    The proposed model with a CNN layer, an LSTM RNN layer and self-attention layers.
    Inputs:
    - x_train: required for creating the input shape for the RNN layer in Keras
    - num_labels: number of output classes (int)
    - LSTM_units: number of RNN units (int)
    - num_conv_filters: number of CNN filters (int)
    - batch_size: number of samples to be processed in each batch
    - F: the attention length (int)
    - D: the length of the output (int)
    Returns:
    - model: a Keras model
    """
    cnn_inputs = Input(shape=(x_train.shape[1], x_train.shape[2], 1), batch_size=batch_size, name='rnn_inputs')  # Getting the error on this line
    cnn_layer = Conv2D(num_conv_filters, kernel_size=(1, x_train.shape[2]), strides=(1, 1), padding='valid', data_format="channels_last")
    cnn_out = cnn_layer(cnn_inputs)

    sq_layer = Lambda(lambda x: K.squeeze(x, axis=2))
    sq_layer_out = sq_layer(cnn_out)

    rnn_layer = LSTM(LSTM_units, return_sequences=True, name='lstm', return_state=True)
    rnn_layer_output, _, _ = rnn_layer(sq_layer_out)

    encoder_output, attention_weights = SelfAttention(size=F, num_hops=D, use_penalization=False, batch_size=batch_size)(rnn_layer_output)
    dense_layer = Dense(num_labels, activation='softmax')
    dense_layer_output = dense_layer(encoder_output)

    model = Model(inputs=cnn_inputs, outputs=dense_layer_output)
    print(model.summary())

    return model

I have marked the line where I get the error with a comment. Here is the traceback:

Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2020.2.1\plugins\python\helpers\pydev\pydevd.py", line 1448, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm 2020.2.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Users/Nafees Ahmed/PycharmProjects/encodingHumanActivity-master/codes/model_proposed/model_with_self_attn.py", line 133, in <module>
    rnn_model = model(x_train = X_train_, num_labels = NUM_LABELS, LSTM_units = LSTM_UNITS, \
TypeError: model() got an unexpected keyword argument 'batch_size'
python-BaseException
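For what it's worth, the pasted function does declare a batch_size parameter, so this exact TypeError can also be reproduced without Keras whenever the name model has been rebound to some other callable whose signature does not accept batch_size (for example, a built Model instance, which is itself callable). A minimal, framework-free sketch of that failure mode (all names here are toy stand-ins, not the real project code):

```python
def model(x_train, num_labels, batch_size):
    """Toy stand-in for the user-defined builder function."""
    return "built"

class BuiltModel:
    """Toy stand-in for a Keras Model instance, which is also callable."""
    def __call__(self, inputs):
        return inputs

# If the builder's name gets shadowed somewhere before the call site...
model = BuiltModel()

try:
    model(x_train=[0], num_labels=6, batch_size=64)
except TypeError as err:
    # ...the call fails with "got an unexpected keyword argument"
    print(type(err).__name__)  # TypeError
```

Checking what `model` actually refers to at line 133 (e.g. `print(model)`) would confirm or rule this out.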

q3aa0525 answered:

The batch_size argument should be passed to the model.fit() call, not declared as part of the input definition.
https://www.tensorflow.org/api_docs/python/tf/keras/Model
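A minimal sketch of keeping batch_size out of the input definition and passing it to model.fit() instead (assuming tensorflow.keras; the architecture and shapes here are illustrative stand-ins, not the question's full CNN-LSTM model):

```python
import numpy as np
from tensorflow.keras.layers import Input, Flatten, Dense
from tensorflow.keras.models import Model

def build_model(timesteps, features, num_labels):
    # No batch_size here -- Keras leaves the batch dimension as None
    inputs = Input(shape=(timesteps, features, 1))
    x = Flatten()(inputs)
    outputs = Dense(num_labels, activation="softmax")(x)
    return Model(inputs=inputs, outputs=outputs)

m = build_model(timesteps=8, features=3, num_labels=2)
m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.zeros((16, 8, 3, 1), dtype="float32")
y = np.zeros((16,), dtype="int64")
m.fit(x, y, batch_size=4, epochs=1, verbose=0)  # batch_size belongs here
```

Fixing the batch size at graph-construction time (as the question's Input(..., batch_size=batch_size) does) is only needed for stateful RNNs and similar cases; otherwise deferring it to fit() keeps the model usable with any batch size.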
