Adaptive Gradient Methods with Dynamic Bound of Learning Rate

This ICLR paper, written by undergraduates from Peking University and Zhejiang University during their internship at DiDi AI Labs, proposes an optimization algorithm that the media hyped as "punching Adam and kicking SGD". The paper introduces two optimizers, AdaBound and AMSBound, which are variants of Adam and AMSGrad respectively. Before summarizing the paper, let us first review Adam and AMSGrad.


Contents

Review: Adam

Review: AMSGrad

Paper Content

Abstract

Introduction

Notations and Preliminaries

Notations

Online convex programming

A generic overview of optimization methods

The Non-Convergence Caused by Extreme Learning Rate

Theorem 1

Theorem 2

Theorem 3

Adaptive Moment Estimation with Dynamic Bound

Theorem 4 & Corollary 4.1

Experiments

Feedforward Neural Network

Convolutional Neural Network

Recurrent Neural Network


 

Review: Adam

Adam combines the strengths of two other optimizers:

  • AdaGrad (adaptive gradient algorithm), which keeps a per-parameter learning rate and thus improves performance on problems with sparse gradients (e.g. natural language processing and computer vision tasks)
  • RMSProp (root mean square propagation), which adapts the per-parameter learning rate based on a recent moving average of squared gradients, giving good performance on non-stationary and online problems

The Adam update first computes exponential moving averages of the gradient and the squared gradient:

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2$$

where $m_t$ is the first-moment (mean) estimate of the gradient and $v_t$ is the second-moment estimate of the gradient (the adaptive, AdaGrad/RMSProp-style part).

$\hat{m}_t = m_t/(1-\beta_1^t)$ and $\hat{v}_t = v_t/(1-\beta_2^t)$ are the bias-corrected versions of $m_t$ and $v_t$.

Each iteration then updates the parameters as $\theta_{t+1} = \theta_t - \alpha\, \hat{m}_t/(\sqrt{\hat{v}_t} + \epsilon)$, where $\epsilon$ prevents division by zero.

The default hyperparameter settings are listed below (a short code sketch of one update step follows the list):

  • $\alpha = 0.001$
  • $\beta_1 = 0.9$
  • $\beta_2 = 0.999$
  • $\epsilon = 10^{-8}$
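
As a minimal illustration of the update above, here is a NumPy sketch of a single Adam step (the function and variable names are mine, not from the paper):

```python
import numpy as np

def adam_step(theta, g, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameters theta given gradient g at step t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * g          # first-moment estimate m_t
    v = beta2 * v + (1 - beta2) * g ** 2     # second-moment estimate v_t
    m_hat = m / (1 - beta1 ** t)             # bias-corrected m_t
    v_hat = v / (1 - beta2 ** t)             # bias-corrected v_t
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

With `m` and `v` initialized to zero arrays, calling `adam_step` in a loop over minibatch gradients reproduces the update rule above.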

 

Review: AMSGrad

Compared with Adam, the main changes in AMSGrad are:

  • for simplicity, Adam's bias-correction step is dropped
  • the second-moment estimate is replaced by a running maximum, $\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$, i.e. $v_t$ only takes effect when $v_t > \hat{v}_{t-1}$; this helps avoid converging to a suboptimal solution

The update rule is:

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t$$

$$v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2$$

$$\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$$

$$\theta_{t+1} = \theta_t - \alpha\, m_t/(\sqrt{\hat{v}_t} + \epsilon)$$
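
For comparison, here is a NumPy sketch of the AMSGrad step described above (again my own naming; the only changes relative to the Adam sketch are the running maximum and the dropped bias correction):

```python
import numpy as np

def amsgrad_step(theta, g, m, v, v_hat, alpha, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad update: Adam-style moving averages, but the denominator uses
    the element-wise running maximum of v_t, and bias correction is omitted."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    v_hat = np.maximum(v_hat, v)             # v_hat_t = max(v_hat_{t-1}, v_t)
    theta = theta - alpha * m / (np.sqrt(v_hat) + eps)
    return theta, m, v, v_hat
```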

 

Paper Content

Abstract

Adaptive optimization methods such as AdaGrad, RMSprop and Adam are very popular, yet compared with plain SGD they can even fail to converge. The recently proposed AMSGrad tries to fix this issue, but brings little improvement over existing methods. This paper proposes two optimizers:

  • AdaBound — a variant of Adam
  • AMSBound — a variant of AMSGrad

Both apply dynamic bounds on the learning rate.

 

Introduction

  • SGD (Robbins & Monro, 1951)
    • Pros: despite its simplicity, it performs well in many applications
    • Cons: it scales the gradient uniformly in all directions, which leads to poor performance when the training data are sparse, and it can be slow
  • To address this, a variety of adaptive methods have been proposed; their common idea is to rescale the gradient with some form of the average of past squared gradients. This family includes Adam (Kingma & Lei Ba, 2015), AdaGrad (Duchi et al., 2011) and RMSprop (Tieleman & Hinton, 2012). Thanks to its fast training, Adam has become the default optimizer in many deep learning frameworks.
  • Despite their popularity, the generalization and out-of-sample performance of these adaptive methods is often worse than that of their non-adaptive counterparts: they are typically fast at the beginning of training, but their performance plateaus on unseen data (test or validation sets) (Wilson et al., 2017). Recent state-of-the-art work in natural language processing and computer vision also tends to choose SGD (with momentum) as the optimizer.
  • Reddi et al. (2018) proposed a variant of Adam called AMSGrad, but its generalization on unseen data appears to be about the same as Adam's. Overly large learning rates lead to poor generalization, while overly small ones make the model hard to converge.
  • The idea behind the two proposed optimizers is to apply dynamic bounds on the learning rates of Adam and AMSGrad, where the lower and upper bounds are initialized to 0 and infinity respectively and smoothly converge to a constant final step size. The new variants can be seen as adaptive methods at the beginning of training that smoothly transform into SGD (with momentum) as the time step grows. This keeps the fast initial training speed while also achieving good generalization at the end of training.

 

Notations and Preliminaries

Notations

  • $x_i$: the $i$-th coordinate of a vector $x \in \mathbb{R}^d$
  • $x^k$: the element-wise $k$-th power of $x$
  • $\|x\|$: the $\ell_2$-norm of $x$
  • $x_{t,i}$: the $i$-th coordinate of $x_t$ at iteration $t$
  • $\langle x, y\rangle$: the inner product of vectors $x$ and $y$
  • $x \odot y$: the element-wise product of $x$ and $y$
  • $x / y$: the element-wise division of $x$ and $y$
  • $\max(x, y)$ ($\min(x, y)$): the element-wise maximum (minimum) of $x$ and $y$
  • $\mathcal{S}^d_+$: the set of all positive definite $d \times d$ matrices
  • for a vector $a \in \mathbb{R}^d$ and a positive definite matrix $M \in \mathbb{R}^{d\times d}$, $a/M$ denotes $M^{-1}a$ and $\sqrt{M}$ denotes $M^{1/2}$
  • the projection operation $\Pi_{\mathcal{F},M}(y)$ onto a convex set $\mathcal{F}$, for $M \in \mathcal{S}^d_+$, is defined as $\Pi_{\mathcal{F},M}(y) = \arg\min_{x\in\mathcal{F}} \|M^{1/2}(x - y)\|$

Online convex programming

A flexible framework for analyzing iterative optimization methods is the online optimization problem.

It can be formulated as a repeated game between the algorithm and the loss functions:

  • at step $t$, the algorithm picks a decision $x_t \in \mathcal{F}$
  • a convex loss function $f_t$ is then revealed, and the algorithm incurs loss $f_t(x_t)$
  • the gap between the total loss $\sum_{t=1}^T f_t(x_t)$ and the smallest total loss of any fixed decision is the regret $R_T = \sum_{t=1}^T f_t(x_t) - \min_{x\in\mathcal{F}}\sum_{t=1}^T f_t(x)$ (a toy numerical example follows this list)
  • the feasible set $\mathcal{F}$ is assumed to be bounded, with $D_\infty$ a bound for all $x, y \in \mathcal{F}$
  • such that $\|x - y\|_\infty \le D_\infty$ (the paper additionally assumes $\|\nabla f_t(x)\|_\infty \le G_\infty$ for all $t \in [T]$ and $x \in \mathcal{F}$)
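
To make the regret definition concrete, here is a toy sketch (entirely my own example, not from the paper): quadratic losses $f_t(x) = (x - y_t)^2$ on $\mathcal{F} = [-1, 1]$, optimized with projected online gradient descent, comparing the incurred losses against the best fixed decision in hindsight.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
y = rng.uniform(-1.0, 1.0, size=T)        # defines f_t(x) = (x - y_t)^2 on F = [-1, 1]

x, incurred = 0.0, []
for t in range(1, T + 1):
    incurred.append((x - y[t - 1]) ** 2)  # loss f_t(x_t) suffered by the algorithm
    grad = 2.0 * (x - y[t - 1])
    x = float(np.clip(x - grad / np.sqrt(t), -1.0, 1.0))  # projected gradient step

best_fixed = float(np.clip(y.mean(), -1.0, 1.0))  # best single decision in hindsight
regret = sum(incurred) - np.sum((best_fixed - y) ** 2)
print(f"R_T = {regret:.2f}, R_T / T = {regret / T:.4f}")  # average regret tends to 0
```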

A generic overview of optimization methods

  • Following Reddi et al. (2018), the paper provides a generic framework (Algorithm 1) that encapsulates many popular adaptive and non-adaptive methods: $g_t = \nabla f_t(x_t)$, $m_t = \phi_t(g_1,\dots,g_t)$, $V_t = \psi_t(g_1,\dots,g_t)$, $x_{t+1} = \Pi_{\mathcal{F},\sqrt{V_t}}\big(x_t - \alpha_t\, m_t/\sqrt{V_t}\big)$
    • this is useful for understanding the properties of individual optimizers, but Algorithm 1 is still abstract, since the averaging functions $\phi_t$ and $\psi_t$ are not specified
  • Within this framework, the popular optimizers are summarized (Table 1 of the paper); a sketch of how $\phi_t$ and $\psi_t$ instantiate different methods follows this list
    • Adam and RMSprop can be viewed as variants of AdaGrad: the former two use an exponential moving average as $\psi_t$, instead of the simple average used in AdaGrad
    • RMSprop is essentially a simplified version of Adam with $\beta_1 = 0$
    • AMSGrad is not listed in the table because its $\psi_t$ is not a simple expression: it is defined by $V_t = \mathrm{diag}(\hat{v}_t)$, where $\hat{v}_t$ is obtained from the recursion $v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2$, $\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$
    • its $\phi_t$ is the same as Adam's
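
A rough NumPy sketch (my own simplification; the projection onto $\mathcal{F}$ and bias corrections are omitted) of how choosing $\phi_t$ and $\psi_t$ instantiates different optimizers within the generic framework:

```python
import numpy as np

EPS = 1e-8

def generic_step(x, grads, t, alpha, phi, psi):
    """x_{t+1} = x_t - alpha_t * m_t / sqrt(V_t), with alpha_t = alpha / sqrt(t),
    m_t = phi(g_1..g_t) and V_t = psi(g_1..g_t); projection onto F is omitted."""
    m_t = phi(grads)
    V_t = psi(grads)
    return x - (alpha / np.sqrt(t)) * m_t / (np.sqrt(V_t) + EPS)

# grads is the list [g_1, ..., g_t] of past gradients (NumPy arrays of equal shape).
sgd_phi     = lambda grads: grads[-1]                          # m_t = g_t
sgd_psi     = lambda grads: np.ones_like(grads[-1])            # V_t = I
adagrad_phi = lambda grads: grads[-1]                          # m_t = g_t
adagrad_psi = lambda grads: np.mean(np.square(grads), axis=0)  # V_t = sum(g_i^2) / t
```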

 

The Non-Convergence Caused by Extreme Learning Rate

  • The authors conjecture that both extremely large and extremely small learning rates in Adam are responsible for its poor generalization.
  • To verify this, they train ResNet-34 on CIFAR-10 with Adam and sample the learning rates of several weights and biases (nine randomly chosen $3\times3$ convolutional kernels from different layers plus the bias of the last linear layer):
    (Figure: learning rates of the sampled weights and biases, shown on a log scale. As the model approaches convergence, the learning rates are either tiny (< 0.01) or huge (> 1000).)
  • AMSGrad reduces the impact of overly large learning rates, but ignores the effect of overly small ones.

Theorem 1

There is an online convex optimization problem where, for any initial step size $\alpha$, Adam has non-zero average regret, i.e. $R_T/T \nrightarrow 0$ as $T \to \infty$.

Theorem 2

For any constants $\beta_1, \beta_2 \in [0, 1)$ with $\beta_1 < \sqrt{\beta_2}$, there is an online convex optimization problem where, for any initial step size $\alpha$, Adam has non-zero average regret, i.e. $R_T/T \nrightarrow 0$ as $T \to \infty$.

Theorem 3

For any constants $\beta_1, \beta_2 \in [0, 1)$ with $\beta_1 < \sqrt{\beta_2}$, there is a stochastic convex optimization problem where, for any initial step size $\alpha$, Adam does not converge to the optimal solution.

 

Adaptive Moment Estimation with Dynamic Bound

The idea behind the proposed algorithms is to combine the fast early-training speed of adaptive methods with the good final generalization of SGD: the algorithm should behave like an adaptive method early in training and like SGD towards the end of training.

This is achieved by clipping the element-wise learning rate of Adam/AMSGrad:

$$\hat{\eta}_t = \mathrm{Clip}\!\left(\alpha/\sqrt{V_t},\; \eta_l(t),\; \eta_u(t)\right), \qquad \eta_t = \hat{\eta}_t/\sqrt{t}, \qquad x_{t+1} = \Pi_{\mathcal{F},\,\mathrm{diag}(\eta_t^{-1})}\big(x_t - \eta_t \odot m_t\big)$$

where $\eta_l(t)$ is a non-decreasing function that starts from $0$ at $t = 0$ and converges to $\alpha^*$, and $\eta_u(t)$ is a non-increasing function that starts from $\infty$ at $t = 0$ and converges to $\alpha^*$. In this paper the two bound functions are computed as:

$$\eta_l(t) = 0.1 - \frac{0.1}{(1-\beta_2)\,t + 1}$$

$$\eta_u(t) = 0.1 + \frac{0.1}{(1-\beta_2)\,t}$$
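
A simplified NumPy sketch of the clipping idea (my own reduction of the update above: bias correction, the $1/\sqrt{t}$ factor and the projection are left out; `final_lr` plays the role of $\alpha^*$ and defaults to the paper's 0.1):

```python
import numpy as np

def adabound_step(theta, g, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999,
                  final_lr=0.1, eps=1e-8):
    """One simplified AdaBound-style update: compute the Adam-like element-wise
    step size, clip it into [eta_l(t), eta_u(t)], then apply it to m_t."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    eta_l = final_lr - final_lr / ((1 - beta2) * t + 1)   # lower bound: 0 -> final_lr
    eta_u = final_lr + final_lr / ((1 - beta2) * t)       # upper bound: inf -> final_lr
    eta = np.clip(alpha / (np.sqrt(v) + eps), eta_l, eta_u)
    theta = theta - eta * m
    return theta, m, v
```

Early in training the bounds are very loose, so the step behaves like Adam; as $t$ grows both bounds squeeze towards `final_lr`, so the step approaches SGD with a constant learning rate.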

Theorem 4 & Corollary 4.1

Theorem 4 and Corollary 4.1 state that AdaBound's regret is upper bounded by $O(\sqrt{T})$, so its average regret satisfies $R_T/T = O(1/\sqrt{T}) \to 0$.

 

Experiments

The experiments focus on three tasks: MNIST classification, CIFAR-10 classification, and language modeling on Penn Treebank.

Feedforward Neural Network

Multi-class classification performance on the MNIST dataset:

(Figure: training and test accuracy curves on MNIST.)

On the training set, all algorithms reach roughly 100% accuracy; on the test set, SGD performs slightly better than Adam and AMSGrad, while AdaBound and AMSBound are slightly better than their prototypes.

Convolutional Neural Network

On CIFAR-10, DenseNet-121 (Huang et al., 2017) and ResNet-34 (He et al., 2016) are used. Models are trained for 200 epochs, and the learning rate is divided by 10 at epoch 150.
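
A hedged PyTorch sketch of that training schedule (the concrete model, SGDM hyperparameters and training loop here are placeholders, not the paper's exact configuration):

```python
import torch
import torchvision

# Placeholder: ResNet-34 adapted to CIFAR-10's 10 classes, trained with SGDM.
model = torchvision.models.resnet34(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# 200 epochs in total; divide the learning rate by 10 at epoch 150.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150], gamma=0.1)

for epoch in range(200):
    # ... one training epoch over the CIFAR-10 loader goes here ...
    scheduler.step()
```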

DenseNet

(Figure: training and test accuracy curves of DenseNet-121 on CIFAR-10.)

At the start of training, the adaptive methods are faster and perform better, but once the learning rate is decayed at epoch 150, SGDM overtakes them. AdaBound and AMSBound train as fast as the adaptive methods while reaching higher test accuracy than SGDM; they also improve test accuracy over their prototypes by about 2%.

ResNet

(Figure: training and test accuracy curves of ResNet-34 on CIFAR-10.)

The conclusions are largely the same as for DenseNet; on the test set, AdaBound and AMSBound are even about 1% more accurate than SGDM.

Recurrent Neural Network

(Figure: perplexity curves on the Penn Treebank language modeling task.)

 

CONTACT INFORMATION

E-mail: [email protected]

QQ: 46611253