Optimisation et Apprentissage Statistique

Francis Bach

INRIA - Ecole Normale Supérieure

Master M2 Mathématiques de l'aléatoire

(Université Paris-Sud), 2nd semester, 2018/2019
 


Registration

The class will be taught in French or English, depending on attendance (all slides and class notes are in English).



Summary

Many estimation problems in machine learning are formulated as optimization problems. These formulations have allowed a separation between the analysis of the performance of the estimator and the development of algorithms to solve the problem. With large volumes of data, such a separation is no longer efficient, and the analysis must combine statistics and optimization. This course will present, in a unified way, classical statistical results on M-estimation and convex optimization, highlighting the links between statistics and optimization. Emphasis will be put on the non-asymptotic analysis of stochastic approximation and stochastic gradient methods.

Note that the goal of this course is to understand how the algorithms are analyzed; it will therefore focus heavily on proofs.



References

Slides (preliminary version, will be updated before every class)

S. Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 8(3-4):231–357, 2015.

Y. Nesterov. Introductory lectures on convex optimization: a basic course. Kluwer Academic Publishers, 2004.



Dates and course contents


Classes take place at the Mathematics Department of Université Paris-Sud (Orsay, new mathematics building, room 0A2), on Thursdays from 2pm to 5pm.

Attendance at 4 out of 6 classes is mandatory. This year there will be a written exam.

Dates and topics (the linked lecture notes are from the 2015/2016 edition of the class):

Thursday 24/01, 2pm-5pm: Introduction to the generic learning problem; function classes (convex, smooth, strongly convex); statistical analysis via Rademacher complexity (lecture1.pdf)
Thursday 31/01, 2pm-5pm: Analysis of deterministic optimization algorithms (gradient descent, Newton, etc.) (lecture2.pdf)
Thursday 14/02, 2pm-5pm: Stochastic approximation for non-smooth convex problems (lecture4.pdf)
Thursday 28/02, 2pm-5pm: Classical stochastic approximation (asymptotic analysis) (lecture3.pdf)
Thursday 14/03, 2pm-5pm: Stochastic approximation for smooth convex problems
Thursday 21/03, 2pm-5pm: Minimization of finite sums
Thursday 04/04, 2pm-5pm: Written exam




Assignments

The goal of these assignments is to implement the main algorithms seen in class, with any language of your choice (Python, Matlab or R are advised). Every assignment will be a few lines of code run on small-scale data. The only required outputs for each homework are a few (readable) curves and comments, to be sent as a single pdf file by email to 
fbachorsay2017@gmail.com with the subject [HWn], with n being the homework number (no acknowledgements will be sent back).

In case of any problems with loading the data, please contact the lecturer.

Homework 1 (Batch algorithms), due Thursday 14/02
The file data_orsay_2017.mat (Matlab format, readable from Matlab and Python) contains training and testing data for a binary classification problem with d = 100 features, 10,000 training observations, and 100,000 testing observations. For the two loss functions (square, logistic) and the two numbers of training observations (10,000 and 1,000), plot the training and testing errors (on the same plot) as a function of the number of iterations of gradient descent with a constant step size.

This makes a total of 4 plots (2 loss functions x 2 numbers of observations), to be put in a single pdf file, with at most two sentences of comments for each plot.
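As an illustration, here is a minimal Python sketch of this experiment for the logistic loss on the full training set; it is a sketch under assumptions, not a prescribed solution: the keys Xtrain, ytrain, Xtest, ytest used to read the .mat file are guesses (check the actual keys returned by scipy.io.loadmat), labels are assumed to be in {-1, +1}, and the constant step size 1/L is one safe choice among others.

import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from scipy.special import expit

# Assumed keys in the .mat file; inspect loadmat(...).keys() and adapt if they differ.
data = loadmat("data_orsay_2017.mat")
Xtrain, ytrain = data["Xtrain"], data["ytrain"].ravel()   # labels assumed in {-1, +1}
Xtest, ytest = data["Xtest"], data["ytest"].ravel()

def logistic_loss(w, X, y):
    # average logistic loss: mean of log(1 + exp(-y_i x_i^T w))
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

def logistic_grad(w, X, y):
    # gradient of the average logistic loss
    return X.T @ (-y * expit(-y * (X @ w))) / X.shape[0]

n, d = Xtrain.shape
L = np.linalg.norm(Xtrain, 2) ** 2 / (4 * n)   # smoothness constant of the logistic objective
step = 1.0 / L                                 # constant step size (one safe choice)

w = np.zeros(d)
train_losses, test_losses = [], []
for t in range(1000):
    train_losses.append(logistic_loss(w, Xtrain, ytrain))
    test_losses.append(logistic_loss(w, Xtest, ytest))
    w = w - step * logistic_grad(w, Xtrain, ytrain)

plt.plot(train_losses, label="training loss")
plt.plot(test_losses, label="testing loss")
plt.xlabel("gradient descent iteration")
plt.ylabel("average logistic loss")
plt.legend()
plt.savefig("hw1_logistic_full_data.pdf")

The square-loss variant only changes the loss and gradient functions (and the corresponding smoothness constant); the 1,000-observation plots are obtained by subsampling the training set.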
Homework 2 (Nonsmooth stochastic algorithms), due Thursday 14/03
With the same data and the full training set (10,000 observations), we consider logistic regression and three experiments with stochastic gradient descent (SGD), all with 20 passes over the data. For all four plots, compare averaged testing logistic losses as a function of the number of iterations (a code sketch for part (a) is given after the list below). Advice: only compute testing losses every 100 iterations.
(a) No averaging vs. averaging: compare averaged SGD and plain SGD with step-sizes decreasing as 1/sqrt(n). Single plot + one sentence comment.
(b) Strongly-convex vs. convex (averaged SGD): consider adding mu/2 times the squared norm to the objective (do not forget to add it to the gradient and the testing objective as well). Compare the step size in 1/(mu n) and the step size in 1/sqrt(n), for mu = 1e-1 and 1e-5. Two plots overall + two sentences of comments.
(c) Cycling vs. sampling with replacement: run SGD with multiple passes over the training data, comparing cycling through the data with sampling with replacement. Single plot + one sentence of comments.
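As an illustration of part (a), here is a minimal Python sketch of plain vs. averaged SGD with step sizes gamma_t = C/sqrt(t); the constant C, the .mat keys, and the label convention are assumptions to be adapted. Parts (b) and (c) follow the same loop, with the regularized gradient and step size 1/(mu t), or with a cyclic index instead of random sampling.

import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from scipy.special import expit

data = loadmat("data_orsay_2017.mat")                    # assumed keys, as in the Homework 1 sketch
Xtrain, ytrain = data["Xtrain"], data["ytrain"].ravel()  # labels assumed in {-1, +1}
Xtest, ytest = data["Xtest"], data["ytest"].ravel()
n, d = Xtrain.shape

def test_loss(w):
    # average logistic loss on the test set
    return np.mean(np.logaddexp(0.0, -ytest * (Xtest @ w)))

rng = np.random.default_rng(0)
n_passes, C = 20, 1.0            # C is an illustrative constant for the step size C / sqrt(t)
w = np.zeros(d)
w_avg = np.zeros(d)
iters, losses_plain, losses_avg = [], [], []

for t in range(1, n_passes * n + 1):
    i = rng.integers(n)                                  # sampling with replacement
    xi, yi = Xtrain[i], ytrain[i]
    grad = -yi * expit(-yi * (xi @ w)) * xi              # stochastic gradient of the logistic loss
    w -= C / np.sqrt(t) * grad
    w_avg += (w - w_avg) / t                             # running Polyak-Ruppert average of the iterates
    if t % 100 == 0:                                     # compute testing losses only every 100 iterations
        iters.append(t)
        losses_plain.append(test_loss(w))
        losses_avg.append(test_loss(w_avg))

plt.plot(iters, losses_plain, label="plain SGD")
plt.plot(iters, losses_avg, label="averaged SGD")
plt.xlabel("iteration")
plt.ylabel("testing logistic loss")
plt.legend()
plt.savefig("hw2a_averaging.pdf")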
Homework 3 (Smooth stochastic algorithms+finite data sets), due Thursday 04/04
We consider the same data, for logistic regression and least-squares regression with regularization by mu times the squared Euclidean norm (with mu = 1e-1 and mu = 1e-5). Compare constant-step-size stochastic gradient descent (sampling with replacement, 50 passes) and SAG.
This makes overall: 2 (two losses) x 2 (two values of mu) x 2 (two algorithms) = 8 plots.
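Below is a minimal Python sketch of the SAG update on the mu-regularized logistic loss; the .mat keys are assumptions, and the step size 1/(16 L_max) is one common choice for SAG rather than a prescribed one. The constant-step-size SGD baseline can reuse the loop of the Homework 2 sketch with the regularized gradient, and testing losses can be monitored every 100 iterations as before.

import numpy as np
from scipy.io import loadmat
from scipy.special import expit

data = loadmat("data_orsay_2017.mat")                    # assumed keys, as in the previous sketches
Xtrain, ytrain = data["Xtrain"], data["ytrain"].ravel()  # labels assumed in {-1, +1}
n, d = Xtrain.shape
mu = 1e-1                                                # also run with mu = 1e-5

# per-example smoothness of the regularized logistic loss: ||x_i||^2 / 4 + mu
L_max = np.max(np.sum(Xtrain ** 2, axis=1)) / 4.0 + mu
step = 1.0 / (16 * L_max)                                # one common step-size choice for SAG

w = np.zeros(d)
grad_table = np.zeros((n, d))    # last stored gradient of each training example
grad_sum = np.zeros(d)           # running sum of the stored gradients
rng = np.random.default_rng(0)

for t in range(50 * n):          # 50 passes, sampling with replacement
    i = rng.integers(n)
    xi, yi = Xtrain[i], ytrain[i]
    g_new = -yi * expit(-yi * (xi @ w)) * xi + mu * w    # gradient of example i at the current iterate
    grad_sum += g_new - grad_table[i]                    # replace the old stored gradient in the sum
    grad_table[i] = g_new
    w -= step * grad_sum / n                             # SAG step: average of the stored gradients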








Grading


The course will be graded based on (1) the reading of a research article with a three-page written report, (2) very simple implementations in Matlab/R/Python, and (3) a written exam.

Attendance at 4 out of 6 classes is mandatory.

For the study of research articles, it is required to present (a) the main ideas of the paper, (b) its relationship to the class, (c) potentially the main elements of the proofs, and (d) if applicable, a simple simulation showing the benefits of the proposed method.

The reports may be prepared in French or English. Note that in some cases it may be worth selecting a relevant subset of results to present.

The three-page report is to be sent to fbachorsay2017@gmail.com by April 25.

Here is a list of potential papers (if you have any suggestions, please let me know). Papers will be allocated on a first-come, first-served basis by sending an email to Francis Bach (three students may take the same paper).

A. Dieuleveut, F. Bach. Non-parametric Stochastic Approximation with Large Step sizes. [pdf]

Hongzhou Lin, Julien Mairal, Zaid Harchaoui. A Universal Catalyst for First-Order Optimization. [pdf]

M. Schmidt, N. Le Roux, F. Bach. Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization. [pdf]


Shai Shalev-Shwartz, Nathan Srebro, Karthik Sridharan. Fast Rates for Regularized Objectives. [pdf]

F. Orabona. Simultaneous Model Selection and Optimization through Parameter-free Stochastic Learning. [pdf]

 
Cameron Musco, Christopher Musco. Randomized Block Krylov Methods for Stronger and Faster Approximate Singular Value Decomposition. [pdf]

A. Defossez and F. Bach. Averaged least-mean-squares: bias-variance trade-offs and optimal sampling distributions. [pdf]

G. Lan. An optimal method for stochastic composite optimization. [pdf]

A. Dieuleveut, N. Flammarion, and F. Bach. Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression. [pdf]


A. Defazio, F. Bach, S. Lacoste-Julien. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives. [pdf]

Anatoli Juditsky, Arkadi Nemirovski. First Order Methods for Nonsmooth Convex Large-Scale Optimization, I: General Purpose Methods. [pdf]

Anatoli Juditsky, Arkadi Nemirovski. First Order Methods for Nonsmooth Convex Large-Scale Optimization, II: Utilizing Problem's Structure. [pdf]

Y. Nesterov. Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems. [pdf]

S. Lacoste-Julien and M. Jaggi. On the Global Linear Convergence of Frank-Wolfe Optimization Variants. [pdf]

Elad Hazan and Satyen Kale. Beyond the Regret Minimization Barrier: Optimal Algorithms for Stochastic Strongly-Convex Optimization. [pdf]

Hamed Karimi, Julie Nutini, Mark Schmidt. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition. [pdf]

D. Scieur, V. Roulet, F. Bach, A. d'Aspremont. Integration Methods and Accelerated Optimization Algorithms. [pdf]


N. Flammarion, F. Bach. Stochastic Composite Least-Squares Regression with convergence rate O(1/n). [pdf]