Keras AttributeError: 'AccumOptimizer' object has no attribute 'lr'

Asked by w7t8yxp5 on 2023-08-06

I am trying to implement a gradient-accumulation optimizer in Keras. I took the code from this GitHub repo: https://github.com/bojone/accum_optimizer_for_keras

import tensorflow as tf
from tensorflow.keras.optimizers import Optimizer


class AccumOptimizer(Optimizer):
    """Wraps another optimizer and applies its updates only every
    `steps_per_update` steps, accumulating gradients in between."""
    def __init__(self, optimizer, steps_per_update=1, **kwargs):
        self.name = kwargs['name']
        super(AccumOptimizer, self).__init__(**kwargs)
        self.optimizer = optimizer
        print(self.optimizer.lr)
        # setattr(self, 'lr', self.lr)
        with tf.name_scope(self.__class__.__name__):
            self.steps_per_update = steps_per_update
            self.iterations = tf.Variable(0, dtype='int64', name='iterations')
            # True on the steps where the wrapped optimizer should really update
            self.cond = tf.equal(self.iterations % self.steps_per_update, 0)
            self.lr = self.optimizer.lr
            # zero out the inner learning rate on accumulation-only steps
            self.optimizer.lr = tf.cond(self.cond,
                                        lambda: self.optimizer.lr.value(),
                                        lambda: 0.)
            for attr in ['momentum', 'rho', 'beta_1', 'beta_2']:
                if hasattr(self.optimizer, attr):
                    value = getattr(self.optimizer, attr)
                    setattr(self, attr, value)
                    # freeze the moving averages on accumulation-only steps
                    setattr(self.optimizer, attr,
                            tf.cond(self.cond, lambda: value.value(), lambda: 1 - 1e-7))
            # mirror the remaining hyperparameters of the wrapped optimizer
            for attr in self.optimizer.get_config():
                if not hasattr(self, attr):
                    value = getattr(self.optimizer, attr)
                    setattr(self, attr, value)
            self._create_slots = self.optimizer._create_slots
            self._resource_apply_dense = self.optimizer._resource_apply_dense

            # the wrapped optimizer sees the averaged accumulated gradients
            def get_gradients(loss, params):
                return [ag / self.steps_per_update for ag in self.accum_grads]
            self.optimizer.get_gradients = get_gradients

    def get_updates(self, loss, params):
        self.iterations = tf.add(self.iterations, 1)
        # only advance the wrapped optimizer's counter on real update steps
        self.optimizer.iterations = tf.add(self.optimizer.iterations,
                                           tf.cast(self.cond, 'int64'))
        self.updates = [
            self.iterations,
            self.optimizer.iterations
        ]
        # gradient accumulation buffers, one per parameter
        self.accum_grads = [tf.Variable(tf.zeros(p.shape, dtype=p.dtype)) for p in params]
        grads = self.get_gradients(loss, params)

        for g, ag in zip(grads, self.accum_grads):
            # reset the accumulator on a real update step, otherwise keep summing
            self.updates.append(ag.assign(tf.cond(self.cond, lambda: g, lambda: ag + g)))

        # inherit the update ops of the original optimizer
        self.updates.extend(self.optimizer.get_updates(loss, params)[1:])
        self.weights.extend(self.optimizer.weights)
        return self.updates

    def get_config(self):
        config = self.optimizer.get_config()
        return config

When I run this code in a cell in a Jupyter Notebook, I do not get any errors.
When I compile the model:

model.compile(optimizer=AccumOptimizer(tf.keras.optimizers.RMSprop(learning_rate=0.0001), 16, name='RMSProp_Accum'),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])


the output of the print(self.optimizer.lr) statement inside the class is:
<tf.Variable 'learning_rate:0' shape=() dtype=float32, numpy=1e-04>
However, print(self.lr) placed right after the line self.lr = self.optimizer.lr raises an error again.
When I run model.fit() with a LearningRateScheduler callback, I get this error:
AttributeError: 'AccumOptimizer' object has no attribute 'lr'
Why am I still getting this error even though I set self.lr = self.optimizer.lr?
Also, after compiling the model, running model.optimizer.lr in a separate cell gives the same error.
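
For reference, here is a rough sketch of what I think is the same symptom, reproduced outside my class (assuming TF 2.x where tf.keras.optimizers.Optimizer is the OptimizerV2 base; the Dummy class and the 1e-4 value are only for illustration):

import tensorflow as tf

class Dummy(tf.keras.optimizers.Optimizer):
    def __init__(self, **kwargs):
        super().__init__(name='Dummy', **kwargs)
        # plain attribute assignment, like self.lr = self.optimizer.lr above
        self.lr = tf.Variable(1e-4)

opt = Dummy()
print(hasattr(opt, 'lr'))   # False on my setup, mirroring the AttributeError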

EDIT

I use tf.keras.callbacks.LearningRateScheduler during training, and this callback contains the following code:

def on_epoch_begin(self, epoch, logs=None):
    if not hasattr(self.model.optimizer, 'lr'):
      raise ValueError('Optimizer must have a "lr" attribute.')


This is the source of the problem.
If I do not use this callback, I do not get the AttributeError.
Now, to use this callback together with fit(), the AttributeError needs to be resolved (a sketch of how I pass the callback to fit() is below).
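
For completeness, this is roughly how I attach the callback to fit(); the schedule function here is only an illustrative placeholder:

import tensorflow as tf

def schedule(epoch, lr):
    # illustrative schedule: halve the learning rate every 10 epochs
    return lr * 0.5 if epoch > 0 and epoch % 10 == 0 else lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(schedule, verbose=1)

# model.fit(train_ds, epochs=30, callbacks=[lr_callback])  # train_ds is my dataset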


Answer 1 (pu82cl6c):

You can try setting both lr and learning_rate with the following code:

self._set_hyper('lr', optimizer.lr)
self._set_hyper('learning_rate', optimizer.learning_rate)
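
As a rough sketch of where those calls could go, assuming AccumOptimizer subclasses the TF 2.x OptimizerV2 base (tf.keras.optimizers.Optimizer), whose _set_hyper method registers a value in the optimizer's hyperparameter table:

class AccumOptimizer(Optimizer):
    def __init__(self, optimizer, steps_per_update=1, **kwargs):
        self.name = kwargs['name']
        super(AccumOptimizer, self).__init__(**kwargs)
        self.optimizer = optimizer
        # register the wrapped optimizer's learning rate on the wrapper, so that
        # hasattr(model.optimizer, 'lr') becomes True for LearningRateScheduler
        self._set_hyper('lr', optimizer.lr)
        self._set_hyper('learning_rate', optimizer.learning_rate)
        # ... rest of __init__ unchanged

As far as I can tell, the base class resolves the lr attribute as an alias for the learning_rate hyperparameter, so registering 'learning_rate' is what makes both model.optimizer.lr and model.optimizer.learning_rate resolve, and the hasattr check in LearningRateScheduler.on_epoch_begin then passes.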

