Keras: tuning parameters for time-series analysis with deep learning

lkaoscv7 · published 2023-04-30

I need help tuning the window size, the batch size, and the learning rate of my model.
Note: this is the final assignment of the DeepLearning.AI "Sequences, Time Series and Prediction" course on Coursera. The goal is to get the MAE below 2 and the MSE below 6.
Link to the dataset: https://github.com/jbrownlee/Datasets/blob/master/daily-min-temperatures.csv
I parse the data from the CSV file:

import csv

def parse_data_from_file(filename):
  times = []
  temperatures = []

  with open(filename) as csvfile:

    ### START CODE HERE

    reader = csv.reader(csvfile, delimiter=',')
    next(reader)  # skip the header row
    step = 0
    for row in reader:
        temperatures.append(float(row[1]))
        times.append(step)
        step = step + 1

    ### END CODE HERE

  return times, temperatures
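As a quick sanity check, the parser can be exercised on a tiny in-memory copy of the file (the two sample rows below match the shape of daily-min-temperatures.csv: a Date column and a Temp column):

```python
import csv
import os
import tempfile

def parse_data_from_file(filename):
    # Same parser as above: skip the header, keep the temperature column,
    # and use the row index as the time step.
    times, temperatures = [], []
    with open(filename) as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        next(reader)
        for step, row in enumerate(reader):
            temperatures.append(float(row[1]))
            times.append(step)
    return times, temperatures

# Two-row sample in the same Date,Temp shape as the real CSV
sample = "Date,Temp\n1981-01-01,20.7\n1981-01-02,17.9\n"
with tempfile.NamedTemporaryFile('w', suffix='.csv', delete=False) as f:
    f.write(sample)
    path = f.name

times, temperatures = parse_data_from_file(path)
os.unlink(path)
print(times)         # [0, 1]
print(temperatures)  # [20.7, 17.9]
```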

The next cell uses this function to compute the times and temperatures and saves them as NumPy arrays in the G dataclass:

import numpy as np

class G:
    TEMPERATURES_CSV = './data/daily-min-temperatures.csv'
    times, temperatures = parse_data_from_file(TEMPERATURES_CSV)
    TIME = np.array(times)
    SERIES = np.array(temperatures)
    SPLIT_TIME = 2500
    WINDOW_SIZE = 64
    BATCH_SIZE = 256
    SHUFFLE_BUFFER_SIZE = 1000

Preprocessing:

def train_val_split(time, series, time_step=G.SPLIT_TIME):
    time_train = time[:time_step]
    series_train = series[:time_step]
    time_valid = time[time_step:]
    series_valid = series[time_step:]

    return time_train, series_train, time_valid, series_valid

# Split the dataset
time_train, series_train, time_valid, series_valid = train_val_split(G.TIME, G.SERIES)

def windowed_dataset(series, window_size=G.WINDOW_SIZE, batch_size=G.BATCH_SIZE,
                     shuffle_buffer=G.SHUFFLE_BUFFER_SIZE):
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)  # sliding windows: window_size inputs + 1 label
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    ds = ds.map(lambda w: (w[:-1], w[-1]))  # split each window into (features, label)
    ds = ds.batch(batch_size).prefetch(1)
    return ds

# Apply the transformation to the training set
train_set = windowed_dataset(series_train, window_size=G.WINDOW_SIZE, batch_size=G.BATCH_SIZE,
                             shuffle_buffer=G.SHUFFLE_BUFFER_SIZE)
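To make the windowing transform concrete, here is a NumPy-only sketch of what windowed_dataset yields before shuffling and batching: each element is window_size consecutive values as features plus the next value as the label (toy series, no TensorFlow required):

```python
import numpy as np

def windows_and_labels(series, window_size):
    # Mirrors ds.window(...) plus the (w[:-1], w[-1]) map: slide a window
    # of window_size + 1 values and split it into (features, label).
    features, labels = [], []
    for start in range(len(series) - window_size):
        chunk = series[start:start + window_size + 1]
        features.append(chunk[:-1])
        labels.append(chunk[-1])
    return np.array(features), np.array(labels)

toy = np.arange(10)               # [0, 1, ..., 9]
X, y = windows_and_labels(toy, window_size=3)
print(X[0], y[0])                 # [0 1 2] 3
print(X.shape, y.shape)           # (7, 3) (7,)
```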

Here is my model:

def create_uncompiled_model():
    tf.keras.backend.clear_session()

    model = tf.keras.models.Sequential([
        tf.keras.layers.Conv1D(filters=64, kernel_size=5,
                               strides=1, padding="causal",
                               activation="relu",
                               input_shape=[G.WINDOW_SIZE, 1]),  # bare window_size is undefined in this scope
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.Dense(30, activation="relu"),
        tf.keras.layers.Dense(10, activation="relu"),
        tf.keras.layers.Dense(1),
        tf.keras.layers.Lambda(lambda x: x * 400)
    ])
    return model

def create_model():
    model = create_uncompiled_model()

    model.compile(loss=tf.keras.losses.Huber(),
                  optimizer=tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9),
                  metrics=['mae', 'mse'])

    return model

I have tuned the learning rate, the batch size, and sometimes the window size, but no matter what I do I cannot get the MAE below 2! I have also tried adding layers and changing my model's architecture, but no luck there either.
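One standard tactic from this course is a learning-rate sweep: train for a few dozen epochs while growing the learning rate exponentially, plot loss against learning rate, and pick a rate just below where the loss curve is steepest. The schedule itself is simple to sketch (the bounds 1e-8 to 1e-3 and the per-20-epochs growth are the usual course choices, not anything specific to this dataset):

```python
def lr_schedule(epoch, start=1e-8, factor=10, per=20):
    # Exponential sweep: multiply the learning rate by `factor` every `per` epochs.
    return start * factor ** (epoch / per)

# Over 100 epochs this covers roughly 1e-8 up to 1e-3.
rates = [lr_schedule(e) for e in range(100)]
print(rates[0])    # 1e-08
print(rates[20])   # ~1e-07
```

In Keras the same schedule would be wrapped as tf.keras.callbacks.LearningRateScheduler(lambda epoch: 1e-8 * 10 ** (epoch / 20)) and passed to model.fit via callbacks=[...]; after the sweep you retrain from scratch with the chosen fixed rate.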


57hvy0tb1#

def create_uncompiled_model():

    ### START CODE HERE

    model = tf.keras.models.Sequential([
        tf.keras.layers.Conv1D(filters=32, kernel_size=5,
                               strides=1, padding="causal",
                               activation="relu",
                               input_shape=[None, 1]),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(64, activation="relu"),
        tf.keras.layers.Dense(30, activation="relu"),
        tf.keras.layers.Dense(10, activation="relu"),
        tf.keras.layers.Dense(1),
        tf.keras.layers.Lambda(lambda x: x * 200)
    ])

    ### END CODE HERE

    return model
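Whatever the architecture, it helps to check the targets the same way the grader does: forecast over the validation span and compare point-wise. A NumPy-only sketch with made-up numbers (in the real notebook, forecast would come from running the trained model over windows of series_valid):

```python
import numpy as np

def evaluate_forecast(series_valid, forecast):
    # Same definitions Keras uses for the 'mae' and 'mse' metrics.
    errors = forecast - series_valid
    mse = np.mean(errors ** 2)
    mae = np.mean(np.abs(errors))
    return mse, mae

# Hypothetical validation slice and model forecast
series_valid = np.array([11.0, 12.5, 10.0, 9.5])
forecast = np.array([10.0, 12.0, 11.0, 9.0])

mse, mae = evaluate_forecast(series_valid, forecast)
print(mae)  # 0.75
print(mse)  # 0.625
```

Both assignment targets (MAE below 2, MSE below 6) are measured on this validation split, so tuning decisions should be judged against these numbers rather than the training loss.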
