RNA is an ideal meta-learning algorithm for deep CNNs because, contrary to many acceleration methods (Lin et al., 2015; Güler, 1992), optimization can be performed off-line and does not involve any potentially expensive extra learning process. This means one can focus on acceleration a posteriori. RNA-related numerical computations are inexpensive: they amount to forming and solving a small linear system built from a well-chosen linear combination of several optimization steps. This system is usually very small relative to the number of parameters, so the cost of acceleration grows only linearly with that number.
Here, we study applications of RNA to several recent architectures, like ResNet (He et al., 2016), applied to classical challenging datasets, like CIFAR10 or ImageNet. Our contributions are twofold: first, we demonstrate that it is often possible to achieve an accuracy similar to that of the final epoch in half the time; second, we show that RNA slightly improves the test classification performance, at no additional numerical cost. We provide an implementation that can be incorporated using only a few lines of code around many standard Python deep learning frameworks, like PyTorch.¹

¹Code can be found here:
2 Accelerating with Regularized Nonlinear Acceleration
This section briefly describes the RNA procedure; we refer the reader to Scieur et al. (2016) for more extensive explanations and theoretical guarantees. For the sake of simplicity, we consider a sequence of iterates $x_0, \dots, x_N \in \mathbb{R}^d$ produced by the successive steps of an iterative optimization algorithm. For example, each $x_i$ could correspond to the parameters of a neural network at epoch $i$, trained via a gradient descent algorithm on an objective $f$, i.e.

$$x_{i+1} = x_i - \eta \nabla f(x_i), \tag{1}$$
with $\eta$ the step size (or learning rate) of the algorithm. Local minimization of $f$ is naturally achieved in the limit by a point $x^*$ where $\nabla f(x^*) = 0$. RNA aims to linearly combine the parameters $x_0, \dots, x_N$ into an estimate

$$y = \sum_{i} c_i x_i, \qquad \sum_{i} c_i = 1,$$
so that $f(y)$ becomes smaller. In other terms, RNA outputs $y$ whose coefficients $c$ approximately solve

$$\min_{c \in \mathbb{R}^{N+1}} f\Big(\sum_{i} c_i x_i\Big) \quad \text{s.t.} \quad \sum_{i} c_i = 1. \tag{2}$$
In the next subsection, we describe the algorithm and intuitively explain the acceleration mechanism when using the optimization method (1), because this restrictive setting makes the analysis simpler.
2.1 Regularized Nonlinear Acceleration Algorithm
In practice, solving (2) is a difficult task. Instead, we assume that the function $f$ is approximately quadratic in the neighbourhood of the optimum $x^*$. This approximation is common in optimization for the design of (quasi-)second-order methods, such as Newton's method or BFGS. Thus, the gradient $\nabla f$ can be considered almost as an affine function, which means:

$$\nabla f(x) \approx \nabla^2 f(x^*)\,(x - x^*). \tag{3}$$
From a finite difference scheme, one can easily recover the gradients from the iterates in (1), because for any $i$ we have $x_i - x_{i+1} = \eta \nabla f(x_i)$. As linearized iterates of a flow tend to be aligned, minimizing the $\ell_2$-norm of the combined residuals in (3) requires incorporating some regularization to avoid ill-conditioning:

$$\min_{c} \Big\|\sum_{i} c_i \,(x_{i+1} - x_i)\Big\|_2^2 + \lambda \|c\|_2^2 \quad \text{s.t.} \quad \sum_{i} c_i = 1, \tag{4}$$
where $\lambda > 0$. This exactly corresponds to the combination of steps 2 and 3 of Algorithm 1. Similar ideas hold in the stochastic case (Scieur et al., 2017), under mild assumptions on the signal-to-noise ratio.
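The regularized problem above can be sketched in a few lines of numpy. This is a minimal illustration of one RNA step, not the authors' implementation; the function name and the choice of combining the later iterates are our own conventions:

```python
import numpy as np

def rna(xs, lam=1e-8):
    """One step of Regularized Nonlinear Acceleration (sketch).

    xs: list of N+1 iterates x_0, ..., x_N (1-D arrays of size d).
    Returns the extrapolated point y = sum_i c_i x_i with sum(c) = 1.
    """
    X = np.stack(xs)                    # (N+1, d) iterate matrix
    R = X[1:] - X[:-1]                  # residuals r_i = x_{i+1} - x_i, shape (N, d)
    G = R @ R.T                         # small N x N Gram matrix
    G /= np.linalg.norm(G, 2)           # normalize so lam is scale-free
    N = G.shape[0]
    # Solve (G + lam I) z = 1, then normalize so the coefficients sum to one.
    z = np.linalg.solve(G + lam * np.eye(N), np.ones(N))
    c = z / z.sum()
    return c @ X[1:]                    # linear combination of the iterates
```

On a quadratic, gradient descent iterates lie close to a low-dimensional subspace, so the extrapolated point typically reaches a much lower objective value than the last iterate.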
2.2 Practical Usage
We produced a software package based on PyTorch that requires only minimal modifications of existing standard procedures. As claimed, the RNA procedure does not require any access to the data; it simply stores some model parameters in a buffer at regular intervals. On-the-fly acceleration on CPU is achievable, since one step of RNA is roughly equivalent to squaring an $N \times d$ matrix of parameter residuals to form an $N \times N$ matrix and solving the corresponding linear system; $N$ is typically 10 in the experiments that follow.
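The kind of wrapper involved can be sketched as follows. This is a hypothetical interface illustrating the buffering idea and its cost, not the actual package API; the class and method names are our own:

```python
import numpy as np

class RNABuffer:
    """Hypothetical sketch of an RNA helper: store the last N+1 parameter
    snapshots and extrapolate on demand (names do not match the real package)."""

    def __init__(self, window=10, lam=1e-8):
        self.window, self.lam, self.snaps = window, lam, []

    def store(self, params):
        # Called at regular intervals (e.g. once per epoch) with the
        # flattened parameter vector; keeps only the last window+1 snapshots.
        self.snaps.append(np.asarray(params, dtype=float).copy())
        self.snaps = self.snaps[-(self.window + 1):]

    def extrapolate(self):
        # Forming the Gram matrix costs O(N^2 d); solving it costs O(N^3).
        X = np.stack(self.snaps)
        R = X[1:] - X[:-1]                        # N x d residual matrix
        G = R @ R.T                               # "squaring": N x N matrix
        G /= np.linalg.norm(G, 2)
        z = np.linalg.solve(G + self.lam * np.eye(len(G)), np.ones(len(G)))
        return (z / z.sum()) @ X[1:]              # extrapolated parameters
```

Since $N$ is small (around 10) and independent of the model size, the dominant cost is the single $N \times d$ matrix product, which is cheap relative to a training epoch.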
3 Applications to CNNs
We now describe the performance of our method on classical CNN training problems.
3.1 Classification pipelines
Because the RNA algorithm is generic, it can be easily applied to many different existing CNN training codes. We used our method with various CNNs on CIFAR10 and ImageNet; the first dataset consists of RGB images of size $32 \times 32$, whereas the latter is more challenging, with images of size $224 \times 224$. Data augmentation via random translation is applied. In both cases, we trained our CNNs via SGD with momentum 0.9 and weight decay, until convergence. The initial learning rate is 0.1 (0.01 for VGG and AlexNet) and is decreased by a factor of 10 at fixed epochs for CIFAR and ImageNet respectively. For ImageNet, we used AlexNet (Krizhevsky et al., 2012) and ResNet (He et al., 2016) because they are standard architectures in computer vision. For the CIFAR dataset, we used the standard VGG, ResNet, and DenseNet (Huang et al., 2017). AlexNet is trained with dropout (Srivastava et al., 2014) on its fully connected layers, whereas the other CNNs are trained with batch normalization (Ioffe & Szegedy, 2015).
In these experiments, each $x_i$ corresponds to the parameters resulting from one pass over the data. At each epoch, we successively apply RNA off-line to the last iterates and report the accuracy obtained by the extrapolated CNN on the validation set. Here, we fix $N = 10$; the regularization $\lambda$ is held constant as well.
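This per-epoch, off-line protocol can be mimicked on a toy problem, with gradient descent on an ill-conditioned quadratic standing in for CNN training. The point of the sketch is that extrapolation only touches stored parameters, never the data or gradients:

```python
import numpy as np

# Toy stand-in for "training": gradient descent on an ill-conditioned quadratic.
A = np.diag([1.0, 0.1, 0.01])
f = lambda v: 0.5 * v @ A @ v

def rna_step(snapshots, lam=1e-8):
    # One RNA extrapolation from a window of parameter snapshots.
    X = np.stack(snapshots)
    R = X[1:] - X[:-1]
    G = R @ R.T
    G /= np.linalg.norm(G, 2)
    z = np.linalg.solve(G + lam * np.eye(len(G)), np.ones(len(G)))
    return (z / z.sum()) @ X[1:]

x = np.ones(3)
snaps = [x.copy()]
for epoch in range(30):
    x = x - 1.0 * (A @ x)          # one "epoch" of training
    snaps.append(x.copy())         # snapshot the parameters
    snaps = snaps[-11:]            # sliding buffer: N = 10 residuals
    if len(snaps) >= 3:
        y = rna_step(snaps)        # off-line: uses stored parameters only
```

At every epoch the vanilla loss f(x) can be compared with the extrapolated loss f(y); on this quadratic the extrapolated point ends well below the vanilla iterate.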
3.2 Numerical results
Figure 1 reports the performance on the validation set of the vanilla and the RNA-extrapolated CNNs at each epoch. Observe that the accuracy of RNA converges more smoothly than that of the original CNNs, which indicates an effective variance reduction. In addition, we observe the impact of acceleration: the accelerated networks quickly reach good generalization performance, even competitive with the best one. Note that several iterations after a learning rate drop are necessary to obtain acceleration, because such a drop corresponds to an abrupt change in the optimization. Furthermore, selecting the hyperparameter $\lambda$ can be tricky: for example, a larger $\lambda$ removes the outlier validation performance at epoch 40 of Figure 1, for ResNet-18 on ImageNet. Here, we have deliberately chosen to use generic parameters to make the comparison as fair as possible, but more sophisticated adaptive strategies have been discussed by Scieur et al. (2017).
Table 1 reports the lowest validation error of the vanilla architectures compared to their extrapolated counterparts. Off-line optimization by RNA only slightly improves the final accuracy, but these improvements are obtained after the training procedure, in an off-line fashion, without extra learning.
We acknowledge support from the European Union’s Seventh Framework Programme (FP7-PEOPLE-2013-ITN) under grant agreement n.607290 SpaRTaN and from the European Research Council (grant SEQUOIA 724063). Alexandre d’Aspremont was partially supported by the data science joint research initiative with the fonds AXA pour la recherche and Kamet Ventures. Edouard Oyallon was partially supported by a postdoctoral grant from DPEI of Inria (AAR 2017POD057) for the collaboration with CWI.
- Defazio et al. (2014) Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in neural information processing systems, pp. 1646–1654, 2014.
- Güler (1992) Osman Güler. New proximal point algorithms for convex minimization. SIAM Journal on Optimization, 2(4):649–664, 1992.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
- Huang et al. (2017) Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pp. 3, 2017.
- Ioffe & Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
- Johnson & Zhang (2013) Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in neural information processing systems, pp. 315–323, 2013.
- Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
- Lin et al. (2015) Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization. In Advances in Neural Information Processing Systems, pp. 3384–3392, 2015.
- Scieur et al. (2016) Damien Scieur, Alexandre d’Aspremont, and Francis Bach. Regularized nonlinear acceleration. In Advances In Neural Information Processing Systems, pp. 712–720, 2016.
- Scieur et al. (2017) Damien Scieur, Francis Bach, and Alexandre d’Aspremont. Nonlinear acceleration of stochastic algorithms. In Advances in Neural Information Processing Systems, pp. 3985–3994, 2017.
- Srivastava et al. (2014) Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
- Wilson et al. (2017) Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. In Advances in Neural Information Processing Systems, pp. 4151–4161, 2017.