Where do I add class weights for an imbalanced dataset in the TensorFlow Object Detection API? Can I add them in the config file? [duplicate]

yzuktlbb posted on 2023-03-19 in Other

This question already has answers here:

Class weights for balancing data in TensorFlow Object Detection API (2 answers)
Closed last year.
I am training EfficientDet D4 on my custom dataset using the TensorFlow Object Detection API. My dataset is imbalanced, so I plan to add class weights to give extra weight to the classes with fewer samples. I understand the concept, but I am stuck on where to add them. Can I add them to the pipeline config file? If so, where? Otherwise, in which file is the model.fit call where I could pass them? Please advise, or let me know if there is a better approach. I actually want to implement the solution from this link: where does class_weights or weighted loss penalize the network?
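For reference, the linked answer applies class weights at the plain Keras level rather than through a pipeline-config field. Below is a minimal sketch of that idea, assuming an ordinary Keras classifier trained via model.fit; the class counts, model, and data are hypothetical placeholders, and note that the Object Detection API normally drives training through its own loop rather than model.fit:

import numpy as np
import tensorflow as tf

# Hypothetical per-class sample counts for an imbalanced 3-class dataset.
samples_per_class = np.array([5000, 800, 150])
num_classes = len(samples_per_class)

# Inverse-frequency weights: rarer classes get proportionally larger weight.
total = samples_per_class.sum()
class_weight = {i: total / (num_classes * n) for i, n in enumerate(samples_per_class)}

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Keras multiplies each sample's loss by the weight of its class, so
# under-represented classes penalize the network more per example.
x_train = np.random.rand(256, 32).astype("float32")
y_train = np.random.randint(0, num_classes, size=256)
model.fit(x_train, y_train, epochs=2, class_weight=class_weight)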

nnt7mjpx1#

An imbalanced dataset is one where the class samples are unevenly distributed, which leads to overfitting; this is commonly handled with a custom callback, before relying on early-stopping callbacks (for task-specific purposes).
You can apply exponential decay in on_epoch_begin, or trigger it once a target step number is reached, as sketched below.
You can also apply weights back to a target layer, conditioned on the class ratio.
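As a concrete illustration of the exponential-decay idea, here is a minimal sketch using the built-in Keras schedule; the initial rate, decay steps, and decay rate are assumed values, not from the answer:

import tensorflow as tf

# The learning rate decays by a factor of 0.96 every 1000 optimizer steps.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,
    decay_steps=1000,
    decay_rate=0.96,
    staircase=True,  # decay in discrete steps rather than continuously
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

# The schedule can also be evaluated manually, e.g. inside on_epoch_begin:
print(float(lr_schedule(2000)))  # rate after 2000 steps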

Applying weights:

# layer_1 and group_1_ShoryuKen_Left are assumed to be defined earlier:
# layer_1 is a built LSTM(32) layer and group_1_ShoryuKen_Left its input tensor.
layer_2 = tf.keras.layers.LSTM(32, kernel_initializer=tf.constant_initializer(1.))
b_out = layer_2(group_1_ShoryuKen_Left)     # build layer_2 by calling it once
layer_2.set_weights(layer_1.get_weights())  # copy layer_1's weights into layer_2
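Note that set_weights only succeeds when the supplied arrays match the shapes returned by the destination layer's get_weights, which is why layer_2 must first be built (called on an input) with the same configuration as layer_1.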

Callback:

import tensorflow as tf

globalstep = 0  # global step counter shared across epochs


class CustomCallback(tf.keras.callbacks.Callback):
    """Early stopping plus manual learning-rate scheduling.

    Assumes a `history` object from a previous `model.fit` call exists in
    the enclosing scope, as in the original snippet.
    """

    def __init__(self, scheduler, lr=1e-3):
        super().__init__()
        self.scheduler = scheduler      # user-supplied fn(epoch, lr) -> new lr
        self.lr = lr
        self.previous_loss_number = 0.0
        self.i_same_loss_number = 0
        self.i_radious = 1

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # Stop once the loss is low enough.
        if logs['loss'] <= 0.4:
            self.model.stop_training = True

        # Stop when train and validation loss have been stuck at the same
        # positive value for more than 50 epochs (plateau heuristic).
        if (round(logs['loss'], 6) == round(logs['val_loss'], 6)
                and round(logs['loss'], 6) > 0
                and round(logs['loss'], 6) == round(self.previous_loss_number, 6)
                and self.i_same_loss_number > 50):
            self.model.stop_training = True

        # Stop early if the accuracy and loss targets are met by epoch 5.
        if logs['accuracy'] >= 0.9 and logs['loss'] < 0.2 and int(epoch) == 5:
            self.model.stop_training = True

    def on_epoch_begin(self, epoch, logs=None):
        global globalstep
        globalstep = globalstep + 1

        if not hasattr(self.model.optimizer, "lr"):
            raise ValueError('Optimizer must have a "lr" attribute.')

        # Widen the rounding radius once training has run long enough and
        # the loss has plateaued (same heuristic as in on_epoch_end).
        if (globalstep > pow(10, self.i_radious)
                and history.history['loss'][-1] == history.history['val_loss'][-1]
                and history.history['loss'][-1] == self.previous_loss_number
                and self.i_same_loss_number > 100):
            self.i_radious = self.i_radious + 1

        lr = round(self.model.optimizer.lr.numpy(), self.i_radious)

        # Ask the scheduler for this epoch's rate and write it back into the
        # optimizer before the epoch starts.
        scheduled_lr = self.scheduler(epoch, lr)
        tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)
        print("\nEpoch %05d: Learning rate is %6.7f." % (epoch, scheduled_lr))


# The scheduler is user-supplied; a simple exponential decay as an example.
custom_callback = CustomCallback(scheduler=lambda epoch, lr: lr * 0.96)
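For completeness, here is a hedged sketch of wiring the callback into training together with Keras-level class weights; `model`, the training arrays, and the weight values are placeholders, not from the answer:

# Hypothetical wiring: the rarer class 1 gets a 3.5x heavier loss penalty.
history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=50,
    class_weight={0: 1.0, 1: 3.5},
    callbacks=[custom_callback],
)

Assigning the result to `history` also provides the global that the callback's plateau check reads on later epochs.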

