Nonlinear Acceleration of Deep Neural Networks

  • TITLE: Nonlinear Acceleration of Deep Neural Networks.

  • AUTHORS: D. Scieur, E. Oyallon, A. d'Aspremont and F. Bach

  • ABSTRACT: Regularized nonlinear acceleration (RNA) is a generic extrapolation scheme for optimization methods with marginal computational overhead. It aims to improve convergence using only the iterates of simple iterative algorithms. So far, however, its theoretical guarantees were limited to gradient descent and other single-step algorithms. Here, we adapt RNA to a much broader setting that includes stochastic gradient descent with momentum and Nesterov's fast gradient method. We use it to train deep neural networks and empirically observe that the extrapolated networks are more accurate, especially in the early iterations. A straightforward application of our algorithm when training ResNet-152 on ImageNet yields a top-1 test error of 20.88%, improving on the reference classification pipeline by 0.8%. Moreover, RNA runs offline in this setting, so it never degrades the performance of the underlying training procedure.
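The abstract describes RNA as an extrapolation scheme that combines past iterates into a (hopefully better) point. As a rough illustration of the general idea, the sketch below implements the classical RNA recipe on the iterates of plain gradient descent for a toy quadratic: form the residuals between consecutive iterates, solve a regularized least-squares problem for mixing coefficients that sum to one, and return the corresponding linear combination. This is a minimal sketch of the generic scheme, not the paper's adapted algorithm; the function name `rna` and the regularization constant `lam` are illustrative choices.

```python
import numpy as np

def rna(iterates, lam=1e-8):
    """Regularized nonlinear acceleration (generic sketch).

    iterates: list of k+1 parameter vectors x_0, ..., x_k.
    Returns sum_i c_i x_{i+1}, where the coefficients c minimize
    ||R c||^2 + lam ||c||^2 subject to sum(c) = 1, with R the
    matrix of residuals r_i = x_{i+1} - x_i.
    """
    X = np.stack(iterates, axis=1)      # shape (d, k+1)
    R = X[:, 1:] - X[:, :-1]            # residuals, shape (d, k)
    RtR = R.T @ R
    RtR /= np.linalg.norm(RtR)          # normalize for conditioning
    k = RtR.shape[0]
    z = np.linalg.solve(RtR + lam * np.eye(k), np.ones(k))
    c = z / z.sum()                     # coefficients sum to one
    return X[:, 1:] @ c                 # extrapolated point

# Toy usage: accelerate gradient descent on f(x) = 0.5 x^T A x,
# whose minimizer is x* = 0.
A = np.diag([1.0, 10.0])
x = np.array([1.0, 1.0])
iterates = [x.copy()]
for _ in range(5):
    x = x - 0.05 * (A @ x)              # plain gradient step
    iterates.append(x.copy())
x_acc = rna(iterates)                   # much closer to 0 than x
```

Because gradient descent on a quadratic is a linear fixed-point iteration, a handful of iterates suffices here for the extrapolated point to land near the optimum; for neural networks the combination is only a heuristic improvement, which is consistent with the offline, no-regret usage mentioned in the abstract.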

  • STATUS: Submitted

  • ARXIV PREPRINT: 1805.09639