Resources

Bachelor's and master's theses

  • Master's Thesis.
    [pdf], under the supervision of Francis Bach and Alessandro Rudi, 2018.

    Abstract: Reproducing Kernel Hilbert Spaces (RKHS) provide a rigorous functional-analysis framework for non-parametric learning. Kernel methods, i.e., optimization methods in these spaces, enjoy very nice statistical properties. However, until recently, these methods scaled very badly in the number of data points, in both time and memory requirements, hence their limited applicability. The FALKON algorithm proposed by Rudi et al. made a huge step in reducing these requirements in the case of classical least-squares regression, namely bringing the complexity down to O(n√n) in time and O(n) in memory, while keeping optimal guarantees. Our aim in this thesis is to explore possible extensions of the ideas behind FALKON to more complex loss functions, such as the logistic loss. These methods rely on two main ideas: 1) reduction of the feature space using random projections, and 2) the use of iterative solvers, combined with good preconditioning.
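
    Below is a minimal Python sketch of those two ideas applied to kernel ridge regression: Nyström subsampling of the feature space and an iterative conjugate-gradient solve. The data, kernel, and parameters are illustrative assumptions of mine, not the thesis code, and FALKON's actual preconditioner is omitted for brevity.

        # Minimal sketch of the two ideas behind FALKON-style solvers
        # (illustrative only: synthetic data, Gaussian kernel, plain CG
        # without FALKON's preconditioner).
        import numpy as np
        from scipy.sparse.linalg import cg

        def gaussian_kernel(A, B, sigma=1.0):
            # Pairwise squared distances, then the Gaussian kernel matrix.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        rng = np.random.default_rng(0)
        n, m, lam = 2000, 50, 1e-3            # m ~ sqrt(n) Nystrom centers
        X = rng.normal(size=(n, 3))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

        # Idea 1: shrink the feature space by random subsampling (Nystrom).
        centers = X[rng.choice(n, size=m, replace=False)]
        Knm = gaussian_kernel(X, centers)
        Kmm = gaussian_kernel(centers, centers)

        # Regularized normal equations of the Nystrom estimator:
        # (Knm^T Knm + lam * n * Kmm) alpha = Knm^T y
        A = Knm.T @ Knm + lam * n * Kmm
        b = Knm.T @ y

        # Idea 2: solve iteratively (conjugate gradient) rather than by a
        # direct factorization; FALKON adds a preconditioner at this step.
        alpha, info = cg(A, b)
        print("train MSE:", np.mean((Knm @ alpha - y) ** 2))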

  • Master's thesis (M.Sc.1).
    [pdf (French and English)], under the supervision of Nathanaël Berestycki, 2017.

    Abstract: In this paper, we summarize the mathematical content of our four-month internship under the direction of Nathanaël Berestycki. This work is centered around random planar maps in the critical-FK model, and more specifically around a theorem due to Sheffield which describes the infinite critical-FK random map through a scaling limit to a two-dimensional Brownian motion. Starting from this theorem, we investigated two problems. The first was to formulate a conjecture for the convergence of the critical-FK random map in the critical case (see section 1 for the definition of the critical case). The aim was to have a self-consistent model which would allow us to guess the appropriate rescaling function. This remains a conjecture, as we considered the proof out of reach for us. The second was to find certain loop exponents using Sheffield's results. These exponents had already been computed using partition and characteristic function methods, but we tried a different method using the properties of Brownian motion. These ideas originally came from a paper by Berestycki, Laslier and Ray.
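
    As a toy illustration of the scaling-limit mechanism the abstract refers to, the Python sketch below rescales a simple two-dimensional random walk diffusively, which converges to a two-dimensional Brownian motion (Donsker's theorem). This is only the classical mechanism, chosen by me for illustration; it is not the random planar map model itself.

        # Toy illustration of a diffusive scaling limit (Donsker's theorem):
        # a simple 2D random walk rescaled by 1/sqrt(n) approximates a
        # two-dimensional Brownian motion. Not the random-map model itself.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        steps = rng.choice([-1, 1], size=(n, 2))   # independent +-1 steps
        walk = np.cumsum(steps, axis=0)
        brownian_approx = walk / np.sqrt(n)        # diffusive rescaling

        # The rescaled endpoint is approximately N(0, I_2).
        print("endpoint:", brownian_approx[-1])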

  • Autour du Transport Optimal et de ses Applications en Apprentissage Supervisé (On Optimal Transport and its Applications in Supervised Learning) (B.Sc.3).
    [pdf (French)], under the supervision of Bertrand Maury, jointly with Jean Alaux, 2016.

    Abstract: Optimal transport is a constrained optimization problem first raised by Monge in 1781. The theory saw many developments over the course of the 20th century. Today, new applications for it are being discovered in computer science. We study in particular its contributions to image processing and supervised learning.
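
    As a minimal, self-contained example of the problem (my own toy setup, not taken from the report): in one dimension with quadratic cost, the optimal Monge map between two empirical distributions of equal size simply matches sorted samples to sorted samples.

        # Minimal sketch: 1D optimal transport with quadratic cost reduces
        # to sorting. Synthetic data, illustration only.
        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.normal(0.0, 1.0, size=1000)   # source samples
        y = rng.normal(2.0, 0.5, size=1000)   # target samples

        # Squared 2-Wasserstein distance between the empirical measures:
        # the optimal coupling pairs the i-th smallest x with the i-th
        # smallest y.
        w2_squared = np.mean((np.sort(x) - np.sort(y)) ** 2)
        print("W2^2 estimate:", w2_squared)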

Team Seminar Slides

  • Linear systems and inverse problems.
    [slides], Sierra seminar, April 2019.

    Abstract: The aim of this presentation is to show the connections between linear systems, inverse problems and machine learning, in particular how the study of the statistical properties of the ERM estimator in the case of least squares can be related to the resolution of an ill-posed inverse problem. Using this formalism, we decompose the statistical error into two terms: one linked to the approximation of the objective function, and one to the approximation of an operator in the case of kernel methods, analyzing the bias-variance tradeoff in terms of linear systems.
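
    The sketch below illustrates this view on synthetic data (my own toy setup, not the slides' example): an ill-conditioned least-squares system solved via Tikhonov (ridge) regularization, where the regularization parameter trades bias against variance.

        # Toy example of least squares as a regularized ill-posed linear
        # system. Synthetic data, illustration only.
        import numpy as np

        rng = np.random.default_rng(3)
        n, d = 200, 50
        # Columns with a fast-decaying spectrum make the system ill-posed.
        X = rng.normal(size=(n, d)) / np.arange(1, d + 1)
        w_star = rng.normal(size=d)
        y = X @ w_star + 0.5 * rng.normal(size=n)

        for lam in [1e-8, 1e-3, 1e0]:
            # Tikhonov-regularized normal equations:
            # (X^T X / n + lam * I) w = X^T y / n
            w = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
            print(f"lam={lam:.0e}  error ||w - w*|| = {np.linalg.norm(w - w_star):.3f}")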