Ulysse Marteau-Ferey

Briefly
Since April 2018, I have been a member of the SIERRA Team, which is part of the Computer Science Department of École Normale Supérieure (Ulm) and is also a joint team between CNRS and INRIA. I started my Ph.D. in September 2019, after completing my master's thesis in August 2018 together with a one-year internship. I work under the supervision of Francis Bach and Alessandro Rudi on stochastic approximation and optimization for high-dimensional learning problems.
I graduated in 2019 from the mathematics department (DMA) of École Normale Supérieure de Paris (ENS Ulm), and hold a master's degree in mathematics, vision and machine learning (MVA) from École Normale Supérieure Paris-Saclay.
Contact

Research interests
My main research interests are convex optimization, statistics and PDEs.
Publications
U. Marteau-Ferey, F. Bach, A. Rudi. Globally Convergent Newton Methods for Ill-conditioned Generalized Self-concordant Losses. [arXiv, hal, pdf, poster], NeurIPS, 2019.
Abstract: In this paper, we study large-scale convex optimization algorithms based on the Newton method applied to regularized generalized self-concordant losses, which include logistic regression and softmax regression. We first prove that our new simple scheme, based on a sequence of problems with decreasing regularization parameters, is provably globally convergent, and that this convergence is linear with a constant factor which scales only logarithmically with the condition number. In the parametric setting, we obtain an algorithm with the same scaling as regular first-order methods but with an improved behavior, in particular in ill-conditioned problems. Second, in the non-parametric machine learning setting, we provide an explicit algorithm combining the previous scheme with Nyström projection techniques, and prove that it achieves optimal generalization bounds with a time complexity of order O(n df_λ), a memory complexity of order O(df_λ²) and no dependence on the condition number, generalizing the results known for least-squares regression. Here n is the number of observations and df_λ is the associated degrees of freedom. In particular, this is the first large-scale algorithm to solve logistic and softmax regressions in the non-parametric setting with large condition numbers and theoretical guarantees.
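The decreasing-regularization idea in the abstract above can be sketched in a few lines. This is a minimal illustration on plain ℓ2-regularized logistic regression, not the paper's algorithm (which adds Nyström projections and operates in the non-parametric setting); the schedule, step counts and tolerances are illustrative choices of my own.

```python
# Sketch: solve l2-regularized logistic regression by Newton steps while
# geometrically decreasing the regularization parameter lam, warm-starting
# each subproblem at the previous solution.
import numpy as np

def newton_path(X, y, lam_start=1.0, lam_target=1e-4, decay=0.5, inner=5):
    """X: (n, d) features; y: labels in {-1, +1}. Returns the final weights."""
    n, d = X.shape
    w = np.zeros(d)
    lam = lam_start
    while lam >= lam_target:
        for _ in range(inner):  # a few Newton steps per regularization level
            z = X @ w
            p = 1.0 / (1.0 + np.exp(-y * z))       # sigmoid(y * <x, w>)
            grad = X.T @ ((p - 1.0) * y) / n + lam * w
            s = p * (1.0 - p)                      # logistic curvature weights
            H = (X.T * s) @ X / n + lam * np.eye(d)
            w -= np.linalg.solve(H, grad)          # exact Newton step
        lam *= decay                               # shrink regularization
    return w
```

Warm-starting across the regularization path is what keeps each Newton subproblem in its local region of fast convergence, which is the mechanism the abstract's global-convergence guarantee formalizes.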
U. Marteau-Ferey, D. Ostrovskii, F. Bach, A. Rudi. Beyond Least-Squares: Fast Rates for Regularized Empirical Risk Minimization through Self-Concordance. [arXiv, hal, pdf, poster, slides, video], COLT, 2019.
Abstract: We consider learning methods based on the regularization of a convex empirical risk by a squared Hilbertian norm, a setting that includes linear predictors and non-linear predictors through positive-definite kernels. In order to go beyond the generic analysis leading to convergence rates of the excess risk as O(1/√n) from n observations, we assume that the individual losses are self-concordant, that is, their third-order derivatives are bounded by their second-order derivatives. This setting includes least-squares, as well as all generalized linear models such as logistic and softmax regression. For this class of losses, we provide a bias-variance decomposition and show that the assumptions commonly made in least-squares regression, such as the source and capacity conditions, can be adapted to obtain fast non-asymptotic rates of convergence by improving the bias terms, the variance terms, or both.
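The self-concordance property assumed above can be verified numerically for the logistic loss: writing φ(t) = log(1 + exp(−t)) and σ for the sigmoid, one has φ''(t) = σ(t)(1 − σ(t)) and φ'''(t) = φ''(t)(1 − 2σ(t)), so |φ'''| ≤ φ'' everywhere. A small check of this inequality (function name and tolerance are my own, not from the paper):

```python
# Check that the logistic loss phi(t) = log(1 + exp(-t)) satisfies the
# generalized self-concordance bound |phi'''(t)| <= phi''(t).
import numpy as np

def self_concordance_gap(t):
    """Return phi''(t) - |phi'''(t)|; nonnegative iff the bound holds at t."""
    s = 1.0 / (1.0 + np.exp(-t))   # sigmoid(t)
    d2 = s * (1.0 - s)             # phi''(t)
    d3 = d2 * (1.0 - 2.0 * s)      # phi'''(t)
    return d2 - np.abs(d3)

# The gap stays nonnegative over a wide range of inputs.
t = np.linspace(-30.0, 30.0, 10001)
assert np.all(self_concordance_gap(t) >= -1e-12)
```

Because the factor (1 − 2σ(t)) always lies in [−1, 1], the bound holds with constant 1; this is exactly the kind of third-versus-second derivative control the analysis exploits in place of the quadratic structure of least-squares.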
