"Adaptive Gradient Methods With Dynamic Bound Of Learning Rate"
Paper: https://openreview.net/pdf?id=Bkg3g2R9FX
Reviews: https://openreview.net/forum?id=Bkg3g2R9FX
GitHub: https://github.com/Luolc/AdaBound
Highlights
1. AdaBound makes rapid initial progress during training.
2. AdaBound is not very sensitive to hyperparameters, saving a great deal of tuning time.
3. It is well suited to CV and NLP tasks, and can be used to build deep learning models for a wide range of popular applications.
We investigate existing adaptive algorithms and find that extremely large or small learning rates can result in the poor convergence behavior. A rigorous proof of non-convergence for ADAM is provided to demonstrate the above problem.
Motivated by the strong generalization ability of SGD, we design a strategy to constrain the learning rates of ADAM and AMSGRAD to avoid a violent oscillation. Our proposed algorithms, ADABOUND and AMSBOUND, which employ dynamic bounds on their learning rates, achieve a smooth transition to SGD. They show the great efficacy on several standard benchmarks while maintaining advantageous properties of adaptive methods such as rapid initial progress and hyperparameter insensitivity.
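The dynamic bounds described above can be sketched numerically. This is a minimal sketch of the bound schedule, assuming the form used in the authors' released code, with `final_lr=0.1` and `gamma=1e-3` taken as illustrative defaults:

```python
def lr_bounds(step, final_lr=0.1, gamma=1e-3):
    """Dynamic learning-rate bounds (sketch of the released AdaBound
    schedule): the lower bound rises from 0 and the upper bound falls
    from (effectively) infinity; both converge to final_lr."""
    lower = final_lr * (1 - 1 / (gamma * step + 1))
    upper = final_lr * (1 + 1 / (gamma * step))
    return lower, upper

# Early in training the band is wide, so the clipped adaptive step
# behaves like Adam...
early_lo, early_hi = lr_bounds(1)
# ...and late in training the band has collapsed onto final_lr, so the
# update behaves like SGD with that learning rate.
late_lo, late_hi = lr_bounds(1_000_000)
```

Because the band tightens monotonically, the optimizer never jumps abruptly between the adaptive and SGD regimes; the transition is continuous in the step count.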
Paper Overview
Adaptive optimization methods such as ADAGRAD, RMSPROP, and ADAM have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates. Though prevailing, they generalize worse than SGD, or even fail to converge, due to unstable and extreme learning rates. Recent work has proposed algorithms such as AMSGRAD to tackle this issue, without achieving much improvement over existing methods. In our paper, we demonstrate that extreme learning rates can lead to poor performance. We provide new variants of ADAM and AMSGRAD, called ADABOUND and AMSBOUND respectively, which employ dynamic bounds on learning rates to achieve a gradual and smooth transition from adaptive methods to SGD, and we give a theoretical proof of convergence. We further conduct experiments on various popular tasks and models, which is often insufficient in previous work. Experimental results show that the new variants can eliminate the generalization gap between adaptive methods and SGD while maintaining a higher learning speed early in training. Moreover, they can bring significant improvements over their prototypes, especially on complex deep networks. The implementation of the algorithm can be found at https://github.com/Luolc/AdaBound.
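The transition from adaptive updates to SGD can be made concrete with a toy, single-parameter sketch. Everything here (`adabound_minimize`, the quadratic objective, the hyperparameter values) is illustrative and not the authors' code; it follows the bias-corrected form of the released implementation rather than the paper's pseudocode:

```python
import math

def adabound_minimize(grad_fn, x0, steps=2000, lr=0.01, final_lr=0.1,
                      beta1=0.9, beta2=0.999, gamma=1e-3, eps=1e-8):
    """One-parameter AdaBound sketch: an Adam-style update whose
    per-step learning rate is clipped into a band that tightens
    toward final_lr, so updates morph from Adam-like into SGD-like."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g        # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        # Bias-corrected Adam step size for this step.
        step_size = lr * math.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
        eta = step_size / (math.sqrt(v) + eps)
        # Dynamic bounds: the band collapses onto final_lr as t grows.
        lower = final_lr * (1 - 1 / (gamma * t + 1))
        upper = final_lr * (1 + 1 / (gamma * t))
        eta = min(max(eta, lower), upper)
        x = x - eta * m
    return x

# Toy usage: minimize f(x) = x^2 (gradient 2x) starting from x = 5.
x_star = adabound_minimize(lambda x: 2 * x, 5.0)
```

Note how the lower bound matters as much as the upper one: early on it rescues coordinates whose adaptive learning rate has become vanishingly small, which is one of the failure modes the paper identifies.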
Experimental Results
In this section, we turn to an empirical study of different models to compare new variants with popular optimization methods including SGD(M), ADAGRAD, ADAM, and AMSGRAD. We focus on three tasks: the MNIST image classification task (Lecun et al., 1998), the CIFAR-10 image classification task (Krizhevsky & Hinton, 2009), and the language modeling task on Penn Treebank (Marcus et al., 1993). We choose them due to their broad importance and availability of their architectures for reproducibility. The setup for each task is detailed in Table 2. We run each experiment three times with the specified initialization method from random starting points. A fixed budget on the number of epochs is assigned for training and the decay strategy is introduced in following parts. We choose the settings that achieve the lowest training loss at the end.
1. FEEDFORWARD NEURAL NETWORK
We train a simple fully connected neural network with one hidden layer for the multiclass classification problem on MNIST dataset. We run 100 epochs and omit the decay scheme for this experiment.
Figure 2 shows the learning curve for each optimization method on both the training and test set. We find that for training, all algorithms can achieve the accuracy approaching 100%. For the test part, SGD performs slightly better than adaptive methods ADAM and AMSGRAD. Our two proposed methods, ADABOUND and AMSBOUND, display slight improvement, but compared with their prototypes there are still visible increases in test accuracy.