Fast asynchronous parallel optimization

Context

Machine learning tasks often involve minimizing finite sums of loss functions. To handle the massive amounts of data current problems generate, we need fast and efficient optimization algorithms that can leverage the parallel architecture of modern computers. To use these parallel resources effectively, we also need robust theoretical analysis of such algorithms, giving insight into how to deploy them depending on the dataset and the associated model.
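
Concretely, these problems take the standard finite-sum form (the notation below is ours, matching common usage rather than text quoted from the paper):

    \min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)

where each f_i is the loss incurred on the i-th of n data points. Incremental methods such as Saga exploit this structure by using the gradient of a single randomly sampled f_i at each step.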
Our code is available on GitHub.

People

Rémi Leblond, Fabian Pedregosa and Simon Lacoste-Julien.

Abstract

We describe Asaga, an asynchronous parallel version of the incremental gradient algorithm Saga that enjoys fast linear convergence rates. We highlight a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently proposed “perturbed iterate” framework that resolves it. We thereby prove that Asaga can obtain a theoretical linear speed-up on multicore systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speed-up as well as the hardware overhead.
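
To make the algorithm concrete, below is a minimal, illustrative Python sketch of an Asaga-style lock-free SAGA iteration on a least-squares objective. This is our own pedagogical reconstruction, not the paper's implementation (see the GitHub repository for that): the function names, the objective and all hyperparameters are assumptions, and Python's global interpreter lock serializes the numpy work, so the sketch demonstrates the update logic rather than actual multicore speed-up.

    # Illustrative Asaga-style sketch (our reconstruction, NOT the paper's code).
    # Least-squares objective, run_asaga/worker names and all hyperparameters
    # are assumptions made for this example.
    import threading
    import numpy as np

    def run_asaga(A, b, step_size=1e-2, n_threads=4, iters_per_thread=5000):
        n, d = A.shape
        x = np.zeros(d)            # shared iterate, read and updated without locks
        alpha = np.zeros((n, d))   # memory of past gradients, one per sample
        alpha_avg = np.zeros(d)    # running average of the alpha_i

        def worker(seed):
            rng = np.random.default_rng(seed)
            for _ in range(iters_per_thread):
                i = rng.integers(n)
                # Inconsistent read of the shared iterate: the "perturbed
                # iterate" x-hat of the analysis.
                x_hat = x.copy()
                # Gradient of the i-th least-squares term at x_hat.
                grad_i = (A[i] @ x_hat - b[i]) * A[i]
                old_alpha_i = alpha[i].copy()
                # SAGA direction: an unbiased, variance-reduced gradient.
                update = grad_i - old_alpha_i + alpha_avg
                # Lock-free writes to shared memory (coordinate-wise atomic
                # operations on real hardware; plain numpy ops here).
                x -= step_size * update
                alpha_avg += (grad_i - old_alpha_i) / n
                alpha[i] = grad_i

        threads = [threading.Thread(target=worker, args=(s,)) for s in range(n_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((200, 10))
        x_true = rng.standard_normal(10)
        b = A @ x_true
        x_est = run_asaga(A, b)
        print("distance to solution:", np.linalg.norm(x_est - x_true))

In the real implementation, each core reads the shared iterate without locking and writes its updates coordinate-wise with atomic operations; the inconsistent read x_hat is exactly the perturbed iterate of the analysis, which is how the linear speed-up is proven even without sparsity assumptions.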

Paper

Asaga: Asynchronous parallel Saga.
Rémi Leblond, Fabian Pedregosa and Simon Lacoste-Julien.
Accepted at AISTATS 2017.
Presented at OPT2016 (9th NIPS Workshop on Optimization for Machine Learning).


BibTeX

@article{leblond2016Asaga,
    title   = "Asaga: Asynchronous Parallel Saga",
    journal = "arXiv:1606.04809",
    author  = "Leblond, Rémi and Pedregosa, Fabian and Lacoste-Julien, Simon",
    year    = "2016"
}

Acknowledgements

We thank Xinghao Pan for sharing his implementation of Kromagnon, the competing algorithm against which we compare Asaga in our paper. This work was partially supported by a Google Research Award.