How to pass multiple arguments through a Keras Tuner function

Asked by j0pj023g on 2022-11-13

I am having trouble figuring out how to pass multiple arguments through a Keras Tuner build function. I have searched the available documentation and related questions, but I cannot find anything on this specific problem.
I simply want to pass extra arguments through this function:

def build_model(hp, some_val_1, some_val_2):

Overall code (simplified):

import os

import kerastuner as kt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dense

def build_model(hp, some_val_1, some_val_2):
    print(some_val_1)
    print(some_val_2)

    model = Sequential()
    conv1d_val_1 = hp.Int("1-input_units", min_value=32, max_value=1028, step=64)
    conv1d_filt_1 = hp.Int("1b-filter_units", min_value=2, max_value=10, step=1)
    model.add(Conv1D(conv1d_val_1, conv1d_filt_1, activation='relu', input_shape=input_shape, padding='SAME'))
    model.add(Dense(1))
    model.compile(loss='mae', optimizer='adam')
    return model

# input_shape, path_save_dir, x_train, y_train, x_test and y_test are defined elsewhere in the full script
model = kt.Hyperband(build_model, objective="val_loss", max_epochs=10, factor=3, directory=os.path.normpath(path_save_dir))
model.search(x=x_train, y=y_train, epochs=10, batch_size=500, validation_data=(x_test, y_test), shuffle=True)

Attempt #1 (I tried many variations) - does not work:

model = kt.Hyperband(build_model(kt.HyperParameters(), some_val_1, some_val_2), objective="val_loss", max_epochs = 10, factor = 3, directory=os.path.normpath(path_save_dir))

Attempt #2 (I tried many variations) - does not work:

model = kt.Hyperband(build_model, some_val_1='1', some_val_2='2',objective="val_loss", max_epochs = 10, factor = 3, directory=os.path.normpath(path_save_dir))

Attempt #3 (I tried many variations) - does not work:

model = kt.Hyperband(build_model, args=(some_val_1, some_val_2,),objective="val_loss", max_epochs = 10, factor = 3, directory=os.path.normpath(path_save_dir))

Please help.


neskvpey1#

You can create your own HyperModel subclass to achieve this; see this link.
An example implementation that does what you are trying to do:

import os

import kerastuner as kt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dense

class MyHyperModel(kt.HyperModel):

    def __init__(self, some_val_1, some_val_2):
        super().__init__()
        self.some_val_1 = some_val_1
        self.some_val_2 = some_val_2

    def build(self, hp):
        ## You can use self.some_val_1 and self.some_val_2 here
        model = Sequential()
        conv1d_val_1 = hp.Int("1-input_units", min_value=32, max_value=1028, step=64)
        conv1d_filt_1 = hp.Int("1b-filter_units", min_value=2, max_value=10, step=1)
        model.add(Conv1D(conv1d_val_1, conv1d_filt_1, activation='relu', input_shape=input_shape, padding='SAME'))
        model.add(Dense(1))
        model.compile(loss='mae', optimizer='adam')

        return model

some_val_1 = 10
some_val_2 = 20
my_hyper_model = MyHyperModel(some_val_1=some_val_1, some_val_2=some_val_2)
# input_shape and path_save_dir come from the question's surrounding script
model = kt.Hyperband(my_hyper_model, objective="val_loss", max_epochs=10,
                     factor=3, directory=os.path.normpath(path_save_dir))
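For completeness, the search step then looks the same as with a plain build function; a minimal sketch, reusing x_train, x_test, etc. from the question (the tuner calls my_hyper_model.build(hp) internally, so the extra values are available there):

# Run the search as usual; some_val_1 and some_val_2 travel on the hypermodel instance
model.search(x=x_train, y=y_train, epochs=10, batch_size=500,
             validation_data=(x_test, y_test), shuffle=True)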

swvgeqrz2#

Adding a complete example with an adapted HyperModel (I use input_shape and output_shape in place of some_val_1 and some_val_2).

import keras_tuner
from tensorflow import keras
from tensorflow.keras import layers

## The hypermodel
class MyHyperModel(keras_tuner.HyperModel):
    def __init__(self, input_shape, output_shape):
        super().__init__()
        self.input_shape = input_shape
        self.output_shape = output_shape

    def build(self, hp):
        model = keras.Sequential()
        model.add(keras.Input(shape=(self.input_shape,)))
        model.add(
            layers.Dense(
                units=hp.Int("units", min_value=32, max_value=64, step=32),
                activation="relu"
            )
        )  # tune the number of units in the hidden layer
        model.add(layers.Dense(self.output_shape, activation='softmax'))
        model.compile(loss='categorical_crossentropy', metrics=['accuracy'])
        return model

## The tuner
tuner = keras_tuner.RandomSearch(
    hypermodel=MyHyperModel(input_shape, output_shape),
    objective='val_accuracy',
    max_trials=3,
    overwrite=True
)
tuner.search(X_train, y_train, epochs=3, validation_data=(X_val, y_val))

## The final model
model = tuner.get_best_models()[0]
model.summary()
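If you also want to inspect which hyperparameter values won, Keras Tuner exposes them through get_best_hyperparameters; a minimal sketch, assuming the tuner above has finished its search:

## The best hyperparameters
best_hp = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hp.get("units"))  # value chosen for the "units" hyperparameter
print(best_hp.values)        # dict of all tuned values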
