Keras ValueError: Shape must be at least rank 3 but is rank 2 for '{{node BiasAdd}} = BiasAdd[T=DT_FLOAT, data_format="NCHW"](add, bias)' with input shapes

mnowg1ta · posted 2022-11-13 in: Other

Resolved

I am trying to run and reproduce the following project: https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/. Up to this point I have followed the linked tutorial exactly, but then I ran into the following problem:

My own dataset, which I have tried as a pandas DataFrame:

  • I have also run his original dataset with his code 100% unchanged, and I still get the same error.
  • A.) with 2 columns (column 1: date, column 2: target value),
  • B.) with the timestamps as the index and the DataFrame containing only the target values.

Input code:
# imports assumed from earlier in the linked tutorial
import numpy
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

# reshape into X=t and Y=t+1
look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = numpy.reshape(testX, (testX.shape[0], 1, testX.shape[1]))

# create and fit the LSTM network
model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
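
For reference, the create_dataset helper used above is not shown in the question; it is defined earlier in the linked tutorial. A commonly used version of that helper looks roughly like this (an assumption added for context, not copied from the question):

# Hypothetical sketch of create_dataset: builds sliding windows of length
# look_back as X and the value immediately after each window as Y.
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        dataX.append(dataset[i:(i + look_back), 0])
        dataY.append(dataset[i + look_back, 0])
    return numpy.array(dataX), numpy.array(dataY)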

Error output:

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1879   try:
-> 1880     c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
   1881   except errors.InvalidArgumentError as e:

InvalidArgumentError: Shape must be at least rank 3 but is rank 2 for '{{node BiasAdd}} = BiasAdd[T=DT_FLOAT, data_format="NCHW"](add, bias)' with input shapes: [?,16], [16].

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-146-278c5358bee6> in <module>
      1 # create and fit the LSTM network
      2 model = Sequential()
----> 3 model.add(LSTM(4, input_shape=(1, look_back)))
      4 model.add(Dense(1))
      5 model.compile(loss='mean_squared_error', optimizer='adam')

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    520     self._self_setattr_tracking = False  # pylint: disable=protected-access
    521     try:
--> 522       result = method(self, *args, **kwargs)
    523     finally:
    524       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/sequential.py in add(self, layer)
    206           # and create the node connecting the current layer
    207           # to the input layer we just created.
--> 208           layer(x)
    209           set_inputs = True
    210 

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
    658 
    659     if initial_state is None and constants is None:
--> 660       return super(RNN, self).__call__(inputs, **kwargs)
    661 
    662     # If any of `initial_state` or `constants` are specified and are Keras

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
    944     if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
    945       return self._functional_construction_call(inputs, args, kwargs,
--> 946                                                 input_list)
    947 
    948     # Maintains info about the `Layer.call` stack.

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
   1082       # Check input assumptions set after layer building, e.g. input shape.
   1083       outputs = self._keras_tensor_symbolic_call(
-> 1084           inputs, input_masks, args, kwargs)
   1085 
   1086       if outputs is None:

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _keras_tensor_symbolic_call(self, inputs, input_masks, args, kwargs)
    814       return tf.nest.map_structure(keras_tensor.KerasTensor, output_signature)
    815     else:
--> 816       return self._infer_output_signature(inputs, args, kwargs, input_masks)
    817 
    818   def _infer_output_signature(self, inputs, args, kwargs, input_masks):

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks)
    854           self._maybe_build(inputs)
    855           inputs = self._maybe_cast_inputs(inputs)
--> 856           outputs = call_fn(inputs, *args, **kwargs)
    857 
    858         self._handle_activity_regularization(inputs, outputs)

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in call(self, inputs, mask, training, initial_state)
   1250         else:
   1251           (last_output, outputs, new_h, new_c,
-> 1252            runtime) = lstm_with_backend_selection(**normal_lstm_kwargs)
   1253 
   1254       states = [new_h, new_c]

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in lstm_with_backend_selection(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask, time_major, go_backwards, sequence_lengths, zero_output_for_mask)
   1645     # Call the normal LSTM impl and register the CuDNN impl function. The
   1646     # grappler will kick in during session execution to optimize the graph.
-> 1647     last_output, outputs, new_h, new_c, runtime = defun_standard_lstm(**params)
   1648     _function_register(defun_gpu_lstm, **params)
   1649 

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
   3020     with self._lock:
   3021       (graph_function,
-> 3022        filtered_flat_args) = self._maybe_define_function(args, kwargs)
   3023     return graph_function._call_flat(
   3024         filtered_flat_args, captured_inputs=graph_function.captured_inputs)  # pylint: disable=protected-access

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   3442 
   3443           self._function_cache.missed.add(call_context_key)
-> 3444           graph_function = self._create_graph_function(args, kwargs)
   3445           self._function_cache.primary[cache_key] = graph_function
   3446 

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   3287             arg_names=arg_names,
   3288             override_flat_arg_shapes=override_flat_arg_shapes,
-> 3289             capture_by_value=self._capture_by_value),
   3290         self._function_attributes,
   3291         function_spec=self.function_spec,

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    997         _, original_func = tf_decorator.unwrap(python_func)
    998 
--> 999       func_outputs = python_func(*func_args, **func_kwargs)
   1000 
   1001       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in standard_lstm(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask, time_major, go_backwards, sequence_lengths, zero_output_for_mask)
   1386       input_length=(sequence_lengths
   1387                     if sequence_lengths is not None else timesteps),
-> 1388       zero_output_for_mask=zero_output_for_mask)
   1389   return (last_output, outputs, new_states[0], new_states[1],
   1390           _runtime(_RUNTIME_CPU))

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
    204     """Call target, and fall back on dispatchers if there is a TypeError."""
    205     try:
--> 206       return target(*args, **kwargs)
    207     except (TypeError, ValueError):
    208       # Note: convert_to_eager_tensor currently raises a ValueError, not a

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/backend.py in rnn(step_function, inputs, initial_states, go_backwards, mask, constants, unroll, input_length, time_major, zero_output_for_mask)
   4341     # the value is discarded.
   4342     output_time_zero, _ = step_function(
-> 4343         input_time_zero, tuple(initial_states) + tuple(constants))
   4344     output_ta = tuple(
   4345         tf.TensorArray(

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in step(cell_inputs, cell_states)
   1364     z = backend.dot(cell_inputs, kernel)
   1365     z += backend.dot(h_tm1, recurrent_kernel)
-> 1366     z = backend.bias_add(z, bias)
   1367 
   1368     z0, z1, z2, z3 = tf.split(z, 4, axis=1)

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
    204     """Call target, and fall back on dispatchers if there is a TypeError."""
    205     try:
--> 206       return target(*args, **kwargs)
    207     except (TypeError, ValueError):
    208       # Note: convert_to_eager_tensor currently raises a ValueError, not a

~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/backend.py in bias_add(x, bias, data_format)
   5961   if len(bias_shape) == 1:
   5962     if data_format == 'channels_first':
-> 5963       return tf.nn.bias_add(x, bias, data_format='NCHW')
   5964     return tf.nn.bias_add(x, bias, data_format='NHWC')
   5965   if ndim(x) in (3, 4, 5):

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
    204     """Call target, and fall back on dispatchers if there is a TypeError."""
    205     try:
--> 206       return target(*args, **kwargs)
    207     except (TypeError, ValueError):
    208       # Note: convert_to_eager_tensor currently raises a ValueError, not a

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py in bias_add(value, bias, data_format, name)
   3376     else:
   3377       return gen_nn_ops.bias_add(
-> 3378           value, bias, data_format=data_format, name=name)
   3379 
   3380 

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/ops/gen_nn_ops.py in bias_add(value, bias, data_format, name)
    689   data_format = _execute.make_str(data_format, "data_format")
    690   _, _, _op, _outputs = _op_def_library._apply_op_helper(
--> 691         "BiasAdd", value=value, bias=bias, data_format=data_format, name=name)
    692   _result = _outputs[:]
    693   if _execute.must_record_gradient():

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(op_type_name, name, **keywords)
    748       op = g._create_op_internal(op_type_name, inputs, dtypes=None,
    749                                  name=scope, input_types=input_types,
--> 750                                  attrs=attr_protos, op_def=op_def)
    751 
    752     # `outputs` is returned as a separate return value so that the output

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in _create_op_internal(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_device)
    599     return super(FuncGraph, self)._create_op_internal(  # pylint: disable=protected-access
    600         op_type, captured_inputs, dtypes, input_types, name, attrs, op_def,
--> 601         compute_device)
    602 
    603   def capture(self, tensor, name=None, shape=None):

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_op_internal(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_device)
   3563           input_types=input_types,
   3564           original_op=self._default_original_op,
-> 3565           op_def=op_def)
   3566       self._create_op_helper(ret, compute_device=compute_device)
   3567     return ret

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
   2040         op_def = self._graph._get_op_def(node_def.op)
   2041       self._c_op = _create_c_op(self._graph, node_def, inputs,
-> 2042                                 control_input_ops, op_def)
   2043       name = compat.as_str(node_def.name)
   2044 

~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1881   except errors.InvalidArgumentError as e:
   1882     # Convert to ValueError for backwards compatibility.
-> 1883     raise ValueError(str(e))
   1884 
   1885   return c_op

ValueError: Shape must be at least rank 3 but is rank 2 for '{{node BiasAdd}} = BiasAdd[T=DT_FLOAT, data_format="NCHW"](add, bias)' with input shapes: [?,16], [16].

Attempted solutions


atmip9wb1#

I kept seeing this issue in 2022 when using LSTM or GRU in SageMaker with the conda_tensorflow2_p38 kernel.
In your notebook, before defining the model, set

tf.keras.backend.set_image_data_format("channels_last")

I know setting the image data format looks odd when you are not working with images, but somehow it resolves this rank error.
To show that it is not just a library mismatch in the default kernel, I sometimes add something at the start of the notebook to upgrade to the latest library versions (currently TF 2.9.0):

import sys
!{sys.executable} -m pip install --upgrade pip tensorflow numpy scikit-learn pandas
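
Putting this suggestion together with the model from the question, a minimal sketch (assuming the same look_back value and training arrays as in the question) would be:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

# The standard LSTM path calls keras.backend.bias_add, which reads the global
# image data format; forcing "channels_last" avoids the NCHW bias_add that
# requires rank >= 3 (the error shown in the traceback above).
tf.keras.backend.set_image_data_format("channels_last")

look_back = 1
model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')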

kgsdhlau2#

I ran into the same problem when using AWS SageMaker. Changing the LSTM layer to tf.compat.v1.keras.layers.CuDNNLSTM worked for me.
In your case: change model.add(LSTM(4, input_shape=(1, look_back))) to model.add(tf.compat.v1.keras.layers.CuDNNLSTM(4, input_shape=(1, look_back))).
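
Applied to the model from the question, a minimal sketch (assuming the same look_back value and training arrays, and that a GPU is available, which CuDNNLSTM requires) would be:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

look_back = 1
model = Sequential()
# CuDNNLSTM uses the cuDNN kernel and so avoids the standard-LSTM bias_add
# path that raised the rank error, but it only runs on a GPU.
model.add(tf.compat.v1.keras.layers.CuDNNLSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')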


gfttwv5a3#

Solution

  • I switched to the AWS EC2 SageMaker kernel "Python [conda env:tensorflow2_p36]", i.e. exactly the prebuilt environment "tensorflow2_p36".
  • As I read elsewhere, it was probably a library conflict, perhaps with NumPy.
