Optimization Algorithms in Deep Learning: Adadelta


I previously introduced the deep learning optimization algorithm AdaGrad at https://blog.csdn.net/fengbingchun/article/details/124766283. Here I introduce another optimization algorithm, Adadelta. The paper is titled "ADADELTA: AN ADAPTIVE LEARNING RATE METHOD" and is available at https://arxiv.org/pdf/1212.5701.pdf

**Adadelta is an adaptive learning rate method and an extension of AdaGrad; it builds on AdaGrad and aims to reduce its aggressive, monotonically decreasing learning rate.** Instead of accumulating all past squared gradients, Adadelta restricts the window of accumulated past gradients to a fixed size. This is summarized in the figure below, taken from https://arxiv.org/pdf/1609.04747.pdf

[Figure: the Adadelta update rules, screenshot from https://arxiv.org/pdf/1609.04747.pdf]
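
For reference, the update rules shown in that figure (the standard Adadelta formulation from the papers above) can be written as follows, where ρ is the decay rate, ε a small smoothing constant, g_t the gradient at step t, and θ the parameters; ρ and ε roughly correspond to mu_ and eps_ in the code further below:

$$
\begin{aligned}
E[g^2]_t &= \rho\, E[g^2]_{t-1} + (1-\rho)\, g_t^2 \\
\Delta\theta_t &= -\frac{\sqrt{E[\Delta\theta^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}}\, g_t \\
E[\Delta\theta^2]_t &= \rho\, E[\Delta\theta^2]_{t-1} + (1-\rho)\, \Delta\theta_t^2 \\
\theta_{t+1} &= \theta_t + \Delta\theta_t
\end{aligned}
$$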

With Adadelta, we do not even need to set a default learning rate hyperparameter, because it has been eliminated from the update rule: the method uses the rate of change of the parameters themselves to adapt the step size.

**Adadelta can be seen as a further extension of gradient descent. It builds on AdaGrad and RMSProp and changes the calculation of the custom step size, so that an initial learning rate hyperparameter is no longer needed.**

Adadelta aims to accelerate the optimization process, e.g. reduce the number of iterations required to reach the optimum, or to improve the capability of the optimization algorithm, e.g. reach a better final result.

Adadelta is best understood as an extension of the AdaGrad and RMSProp algorithms. It is a further extension of RMSProp, designed to improve the convergence of the algorithm and to remove the need to manually specify an initial learning rate.

As with RMSProp, Adadelta computes a decaying moving average of the squared partial derivative for each parameter. The key difference lies in how the step size for a parameter is calculated: it uses a decaying average of the delta, i.e. the change in that parameter, in place of a fixed learning rate.
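
To make this difference concrete, the sketch below contrasts the two per-parameter updates on a one-dimensional quadratic. It is a minimal illustration, not code from the repository linked below; the function names, the state structs, and the hyperparameter values (rho, eps, alpha) are chosen here purely for demonstration:

```cpp
#include <cmath>
#include <cstdio>

struct RMSPropState  { float eg2 = 0.f; };            // decaying average of g^2
struct AdadeltaState { float eg2 = 0.f, ed2 = 0.f; }; // also tracks average of delta^2

// RMSProp: the step size still scales with a global learning rate alpha.
void rmsprop_update(float& w, float g, RMSPropState& s, float rho, float eps, float alpha)
{
    s.eg2 = rho * s.eg2 + (1.f - rho) * g * g;
    w -= alpha * g / (std::sqrt(s.eg2) + eps);
}

// Adadelta: the RMS of past deltas replaces alpha, so no learning rate is required.
void adadelta_update(float& w, float g, AdadeltaState& s, float rho, float eps)
{
    s.eg2 = rho * s.eg2 + (1.f - rho) * g * g;
    float delta = -(std::sqrt(s.ed2 + eps) / std::sqrt(s.eg2 + eps)) * g;
    s.ed2 = rho * s.ed2 + (1.f - rho) * delta * delta;
    w += delta;
}

int main()
{
    // Minimize f(w) = (w - 3)^2; the gradient is 2 * (w - 3).
    float w1 = 0.f, w2 = 0.f;
    RMSPropState s1;
    AdadeltaState s2;
    for (int t = 0; t < 2000; ++t) {
        rmsprop_update(w1, 2.f * (w1 - 3.f), s1, 0.9f, 1e-8f, 0.01f);
        adadelta_update(w2, 2.f * (w2 - 3.f), s2, 0.9f, 1e-3f);
    }
    std::printf("RMSProp: w = %.4f, Adadelta: w = %.4f\n", w1, w2);
    return 0;
}
```

Both updates drive w toward 3; the point of the comparison is that adadelta_update takes no learning rate argument at all, while rmsprop_update cannot work without one.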

The content above is mainly based on: https://machinelearningmastery.com

The code snippets below show what differs from AdaGrad:

1. Add Adadelta to the existing enumeration class Optimization:

```cpp
enum class Optimization {
    BGD, // Batch Gradient Descent
    SGD, // Stochastic Gradient Descent
    MBGD, // Mini-batch Gradient Descent
    SGD_Momentum, // SGD with Momentum
    AdaGrad, // Adaptive Gradient
    RMSProp, // Root Mean Square Propagation
    Adadelta // an adaptive learning rate method
};
```

2. The calculate_gradient_descent function:

```cpp
void LogisticRegression2::calculate_gradient_descent(int start, int end)
{
    switch (optim_) {
    case Optimization::Adadelta: {
        int len = end - start;
        std::vector<float> g(feature_length_, 0.), p(feature_length_, 0.);
        std::vector<float> z(len, 0.), dz(len, 0.);
        for (int i = start, x = 0; i < end; ++i, ++x) {
            z[x] = calculate_z(data_->samples[random_shuffle_[i]]);
            dz[x] = calculate_loss_function_derivative(calculate_activation_function(z[x]), data_->labels[random_shuffle_[i]]);
            for (int j = 0; j < feature_length_; ++j) {
                float dw = data_->samples[random_shuffle_[i]][j] * dz[x];
                g[j] = mu_ * g[j] + (1. - mu_) * (dw * dw); // formula 10
                float alpha = (eps_ + std::sqrt(p[j])) / (eps_ + std::sqrt(g[j]));
                float change = alpha * dw;
                p[j] = mu_ * p[j] + (1. - mu_) * (change * change); // formula 15
                w_[j] = w_[j] - change;
            }
            b_ -= (eps_ * dz[x]);
        }
    }
    break;
    case Optimization::RMSProp: {
        int len = end - start;
        std::vector<float> g(feature_length_, 0.);
        std::vector<float> z(len, 0), dz(len, 0);
        for (int i = start, x = 0; i < end; ++i, ++x) {
            z[x] = calculate_z(data_->samples[random_shuffle_[i]]);
            dz[x] = calculate_loss_function_derivative(calculate_activation_function(z[x]), data_->labels[random_shuffle_[i]]);
            for (int j = 0; j < feature_length_; ++j) {
                float dw = data_->samples[random_shuffle_[i]][j] * dz[x];
                g[j] = mu_ * g[j] + (1. - mu_) * (dw * dw); // formula 18
                w_[j] = w_[j] - alpha_ * dw / (std::sqrt(g[j]) + eps_);
            }
            b_ -= (alpha_ * dz[x]);
        }
    }
    break;
    case Optimization::AdaGrad: {
        int len = end - start;
        std::vector<float> g(feature_length_, 0.);
        std::vector<float> z(len, 0), dz(len, 0);
        for (int i = start, x = 0; i < end; ++i, ++x) {
            z[x] = calculate_z(data_->samples[random_shuffle_[i]]);
            dz[x] = calculate_loss_function_derivative(calculate_activation_function(z[x]), data_->labels[random_shuffle_[i]]);
            for (int j = 0; j < feature_length_; ++j) {
                float dw = data_->samples[random_shuffle_[i]][j] * dz[x];
                g[j] += dw * dw;
                w_[j] = w_[j] - alpha_ * dw / (std::sqrt(g[j]) + eps_);
            }
            b_ -= (alpha_ * dz[x]);
        }
    }
    break;
    case Optimization::SGD_Momentum: {
        int len = end - start;
        std::vector<float> change(feature_length_, 0.);
        std::vector<float> z(len, 0), dz(len, 0);
        for (int i = start, x = 0; i < end; ++i, ++x) {
            z[x] = calculate_z(data_->samples[random_shuffle_[i]]);
            dz[x] = calculate_loss_function_derivative(calculate_activation_function(z[x]), data_->labels[random_shuffle_[i]]);
            for (int j = 0; j < feature_length_; ++j) {
                float new_change = mu_ * change[j] - alpha_ * (data_->samples[random_shuffle_[i]][j] * dz[x]);
                w_[j] += new_change;
                change[j] = new_change;
            }
            b_ -= (alpha_ * dz[x]);
        }
    }
    break;
    case Optimization::SGD:
    case Optimization::MBGD: {
        int len = end - start;
        std::vector<float> z(len, 0), dz(len, 0);
        for (int i = start, x = 0; i < end; ++i, ++x) {
            z[x] = calculate_z(data_->samples[random_shuffle_[i]]);
            dz[x] = calculate_loss_function_derivative(calculate_activation_function(z[x]), data_->labels[random_shuffle_[i]]);
            for (int j = 0; j < feature_length_; ++j) {
                w_[j] = w_[j] - alpha_ * (data_->samples[random_shuffle_[i]][j] * dz[x]);
            }
            b_ -= (alpha_ * dz[x]);
        }
    }
    break;
    case Optimization::BGD:
    default: // BGD
        std::vector<float> z(m_, 0), dz(m_, 0);
        float db = 0.;
        std::vector<float> dw(feature_length_, 0.);
        for (int i = 0; i < m_; ++i) {
            z[i] = calculate_z(data_->samples[i]);
            o_[i] = calculate_activation_function(z[i]);
            dz[i] = calculate_loss_function_derivative(o_[i], data_->labels[i]);
            for (int j = 0; j < feature_length_; ++j) {
                dw[j] += data_->samples[i][j] * dz[i]; // dw(i)+=x(i)(j)*dz(i)
            }
            db += dz[i]; // db+=dz(i)
        }
        for (int j = 0; j < feature_length_; ++j) {
            dw[j] /= m_;
            w_[j] -= alpha_ * dw[j];
        }
        b_ -= alpha_*(db/m_);
    }
}
```
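
As a quick sanity check of the Adadelta branch above, the same per-parameter step can be isolated and run on a toy one-dimensional problem. Note that the listing adds eps_ outside the square roots, a small variant of the paper's RMS (which adds ε inside). The sketch below mirrors the listing, with mu and eps standing in for mu_ and eps_; the objective and iteration count are made up for illustration:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Toy objective: f(w) = (w - 2)^2 in one dimension; the gradient is 2 * (w - 2).
    const float mu = 0.9f, eps = 1e-3f; // decay rate and smoothing term, as in the listing
    float w = 0.f; // parameter
    float g = 0.f; // decaying average of squared gradients (formula 10 in the listing)
    float p = 0.f; // decaying average of squared updates   (formula 15 in the listing)

    for (int iter = 0; iter < 1000; ++iter) {
        float dw = 2.f * (w - 2.f);
        g = mu * g + (1.f - mu) * dw * dw;
        float alpha = (eps + std::sqrt(p)) / (eps + std::sqrt(g)); // adaptive step, no learning rate
        float change = alpha * dw;
        p = mu * p + (1.f - mu) * change * change;
        w -= change;
    }
    std::printf("w after 1000 Adadelta steps: %.4f\n", w);
    return 0;
}
```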

The execution results are as follows. The test function is test_logistic_regression2_gradient_descent; each configuration was run several times and the final results are always the same. The image set is MNIST: 10,000 training images in total, 5,000 each for 0 and 1, all from the training set; and 1,800 prediction images in total, 900 each for 0 and 1, all from the test set. With the learning rate set to 0.01 and eps to 1e-8 in AdaGrad, and the other configuration parameters identical, AdaGrad takes 17 seconds; with eps set to 1e-3 in Adadelta, Adadelta takes 26 seconds. Both achieve a recognition rate of 100%.

    GitHub: https://github.com/fengbingchun/NN_Test
