Researcher at INRIA - Sierra team
DI - École Normale Supérieure
Open-source library in Python + PyTorch + CUDA, implementing the Falkon algorithm for multicore and multi-GPU architectures. Falkon makes it possible to perform efficient and accurate supervised learning on large-scale datasets. The algorithm is introduced in this paper and further improved in this paper.
pdf code slides video@inproceedings{rudi2017falkon, title={FALKON: An optimal large scale kernel method}, author={Rudi, Alessandro and Carratino, Luigi and Rosasco, Lorenzo}, booktitle={Advances in Neural Information Processing Systems}, pages={3891--3901}, year={2017} }
Open-source library in MATLAB + CUDA, implementing the Falkon algorithm for multicore and single-GPU architectures. Falkon makes it possible to perform efficient and accurate supervised learning on large-scale datasets. The algorithm is introduced in this paper.
pdf code slides video@inproceedings{rudi2017falkon, title={FALKON: An optimal large scale kernel method}, author={Rudi, Alessandro and Carratino, Luigi and Rosasco, Lorenzo}, booktitle={Advances in Neural Information Processing Systems}, pages={3891--3901}, year={2017} }
Open-source library in MATLAB, implementing the NYTRO algorithm for multicore architectures. NYTRO combines Nyström approximation with early stopping, providing fast solutions to nonparametric learning problems. The algorithm is introduced in this paper.
pdf code slides video@inproceedings{camoriano2016nytro, title={NYTRO: When subsampling meets early stopping}, author={Camoriano, Raffaello and Angles, Tom{\'a}s and Rudi, Alessandro and Rosasco, Lorenzo}, booktitle={Artificial Intelligence and Statistics}, pages={1403--1411}, year={2016} }
Open-source library in MATLAB, implementing the incremental algorithm introduced in this paper to solve empirical risk minimization with the squared loss and Nyström approximation.
pdf code slides video@incollection{rudi2015less, title = {Less is More: Nystr\"{o}m Computational Regularization}, author = {Rudi, Alessandro and Camoriano, Raffaello and Rosasco, Lorenzo}, booktitle = {Advances in Neural Information Processing Systems 28}, editor = {C. Cortes and N. D. Lawrence and D. D. Lee and M. Sugiyama and R. Garnett}, pages = {1657--1665}, year = {2015}, publisher = {Curran Associates, Inc.}, url = {http://papers.nips.cc/paper/5936-less-is-more-nystrom-computational-regularization.pdf} }
We consider learning methods based on the regularization of a convex empirical risk by a squared Hilbertian norm, a setting that includes linear predictors and non-linear predictors through positive-definite kernels. In order to go beyond the generic analysis leading to convergence rates of the excess risk as \(O(1/\sqrt{n})\) from \(n\) observations, we assume that the individual losses are self-concordant, that is, their third-order derivatives are bounded by their second-order derivatives. This setting includes least-squares, as well as all generalized linear models such as logistic and softmax regression. For this class of losses, we provide a bias-variance decomposition and show that the assumptions commonly made in least-squares regression, such as the source and capacity conditions, can be adapted to obtain fast non-asymptotic rates of convergence by improving the bias terms, the variance terms or both.
pdf code slides video@article{marteau2019beyond, title={Beyond Least-Squares: Fast Rates for Regularized Empirical Risk Minimization through Self-Concordance}, author={Marteau-Ferey, Ulysse and Ostrovskii, Dmitrii and Bach, Francis and Rudi, Alessandro}, booktitle={Arxiv preprint arXiv:1902.03046}, year={2019} }
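The self-concordance assumption above can be checked directly for the logistic loss \(\varphi(t) = \log(1 + e^{-t})\): its third derivative equals the second derivative times \(1 - 2\sigma(t)\), whose magnitude never exceeds one. A minimal numerical illustration (not from the paper):

```python
import numpy as np

# Logistic loss phi(t) = log(1 + exp(-t)), with sigma the sigmoid.
# Closed forms:  phi''(t) = sigma(t) (1 - sigma(t))
#                phi'''(t) = sigma(t) (1 - sigma(t)) (1 - 2 sigma(t))
# Since |1 - 2 sigma(t)| <= 1, the third derivative is bounded by the
# second derivative: the self-concordance-type condition of the paper.
t = np.linspace(-10.0, 10.0, 1001)
sigma = 1.0 / (1.0 + np.exp(-t))
phi2 = sigma * (1.0 - sigma)
phi3 = phi2 * (1.0 - 2.0 * sigma)
assert np.all(np.abs(phi3) <= phi2 + 1e-15)
```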
In this work we provide an estimator for the covariance matrix of a heavy-tailed random vector. We prove that the proposed estimator admits "affine-invariant" bounds of the form $$ (1-\epsilon) {\bf S} \preceq \widehat{\bf S} \preceq (1+\epsilon) {\bf S},$$ in high probability, where \(\bf S\) is the unknown covariance matrix, and \(\preceq\) is the positive semidefinite order on symmetric matrices. The result only requires the existence of fourth-order moments, and allows for \(\epsilon = O(\sqrt{\kappa^4 d/n})\), where \(\kappa\) is some measure of kurtosis of the distribution, \(d\) is the dimensionality of the space, and \(n\) is the sample size. More generally, we can allow for regularization with level \(\lambda\), in which case \(\epsilon\) depends on the effective degrees of freedom, which are generally smaller than \(d\). The computational cost of the proposed estimator is essentially \(O(d^2n + d^3)\), comparable to that of the sample covariance matrix in the statistically interesting regime \(n \gg d\). Its applications to eigenvalue estimation with relative error and to ridge regression with heavy-tailed random design are discussed.
pdf code slides video@article{ostrovskii2019affine, title={Affine Invariant Covariance Estimation for Heavy-Tailed Distributions}, author={Ostrovskii, Dmitrii and Rudi, Alessandro}, booktitle={Arxiv preprint arXiv:1902.03086}, year={2019} }
In this work we provide a theoretical framework for structured prediction that generalizes the existing theory of surrogate methods for binary and multiclass classification based on estimating conditional probabilities with smooth convex surrogates (e.g., logistic regression). The theory relies on a natural characterization of structural properties of the task loss and makes it possible to derive statistical guarantees for many widely used methods in the context of multilabeling, ranking, ordinal regression and graph matching. In particular, we characterize the smooth convex surrogates compatible with a given task loss in terms of a suitable Bregman divergence composed with a link function. This allows us to derive tight bounds for the calibration function and to obtain novel results on existing surrogate frameworks for structured prediction, such as conditional random fields and quadratic surrogates.
pdf code slides video@article{nowak2019general, title={A General Theory for Structured Prediction with Smooth Convex Surrogates}, author={Nowak-Vila, Alex and Bach, Francis and Rudi, Alessandro}, booktitle={Arxiv preprint arXiv:1902.01958}, year={2019} }
We are interested in a framework of online learning with kernels for low-dimensional but large-scale and potentially adversarial datasets. Considering the Gaussian kernel, we study the computational and theoretical performance of online variants of kernel ridge regression. The resulting algorithm is based on approximations of the Gaussian kernel through Taylor expansions. For \(d\)-dimensional inputs, it achieves a (close to) optimal regret of order \(O((\log n)^{d+1})\) with per-round time and space complexity \(O((\log n)^{2d})\). This makes the algorithm a suitable choice as soon as \(n \gg e^d\), which is likely to happen for low-dimensional, large-scale datasets.
pdf code slides video@article{jezequel2019efficient, title={Efficient online learning with kernels for adversarial large scale problems}, author={Jézéquel, Rémi and Gaillard, Pierre and Rudi, Alessandro}, booktitle={Arxiv preprint arXiv:1902.09917}, year={2019} }
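The computational core, a truncated Taylor expansion of the Gaussian kernel into an explicit finite-dimensional feature map, can be sketched in one dimension (an illustrative reconstruction, not the paper's code; the paper's algorithm handles \(d\) dimensions via multi-indices):

```python
import numpy as np
from math import factorial

def taylor_features(x, degree):
    """Truncated Taylor feature map for the 1-D Gaussian kernel
    k(x, y) = exp(-(x - y)^2 / 2), via the factorization
    k(x, y) = exp(-x^2/2) exp(-y^2/2) sum_k (x y)^k / k!."""
    ks = np.arange(degree + 1)
    fact = np.array([factorial(k) for k in ks], dtype=float)
    # phi_k(x) = exp(-x^2/2) x^k / sqrt(k!)
    return np.exp(-x[:, None] ** 2 / 2) * x[:, None] ** ks / np.sqrt(fact)

x = np.linspace(-1.0, 1.0, 5)
Phi = taylor_features(x, 10)                     # 11 explicit features
K_exact = np.exp(-(x[:, None] - x[None, :]) ** 2 / 2)
# Phi @ Phi.T matches K_exact up to the truncation error of exp(x*y)
```

Inner products of these features reproduce the kernel, so kernel ridge regression can be run in the explicit feature space, which is what enables the per-round costs quoted in the abstract.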
The problem of devising learning strategies for discrete losses (e.g., multilabeling, ranking) is currently addressed with ad hoc methods and theoretical analyses for each loss. In this paper we study a least-squares framework to systematically design learning algorithms for discrete losses, with quantitative characterizations in terms of statistical and computational complexity. In particular, we improve existing results by providing explicit dependence on the number of labels for a wide class of losses, and faster learning rates under low-noise conditions. Theoretical results are complemented with experiments on real datasets, showing the effectiveness of the proposed general approach.
pdf code slides video@inproceedings{nowak2019sharp, title={Sharp analysis of learning with discrete losses}, author={Nowak-Vila, Alex and Bach, Francis and Rudi, Alessandro}, booktitle={Artificial Intelligence and Statistics}, pages={to appear}, year={2019} }
The Sinkhorn distance, a variant of the Wasserstein distance with entropic regularization, is an increasingly popular tool in machine learning and statistical inference. We give a simple, practical, parallelizable algorithm NYS-SINK, based on Nyström approximation, for computing Sinkhorn distances on a massive scale. As we show in numerical experiments, our algorithm easily computes Sinkhorn distances on data sets hundreds of times larger than can be handled by state-of-the-art approaches. We also give provable guarantees establishing that the running time and memory requirements of our algorithm adapt to the intrinsic dimension of the underlying data.
pdf code slides video@article{altschuler2018massively, title={Massively scalable Sinkhorn distances via the Nystr\"om method}, author={Altschuler, Jason and Bach, Francis and Rudi, Alessandro and Weed, Jonathan}, journal={Arxiv preprint arXiv:1812.05189}, year={2018} }
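Sinkhorn scaling alternately normalizes the rows and columns of a Gibbs kernel; the paper's NYS-SINK speedup comes from replacing that kernel matrix with a Nyström low-rank factorization. A minimal dense sketch of the base iteration (illustrative, without the Nyström acceleration):

```python
import numpy as np

def sinkhorn(a, b, C, reg, n_iter=1000):
    """Entropy-regularized optimal transport between histograms a and b
    with cost matrix C. Returns the plan P and the transport cost <P, C>."""
    K = np.exp(-C / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # scale columns toward marginal b
        u = a / (K @ v)                  # scale rows toward marginal a
    P = u[:, None] * K * v[None, :]      # approximate optimal plan
    return P, np.sum(P * C)

a = np.array([0.5, 0.5])
b = np.array([0.3, 0.7])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P, cost = sinkhorn(a, b, C, reg=0.1)     # P's marginals approach (a, b)
```

Each iteration costs two matrix-vector products with `K`; the low-rank replacement makes those products scale with the rank rather than with the number of points.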
Computing the quadratic transportation metric (also called the 2-Wasserstein distance or root mean square distance) between two point clouds, or, more generally, two discrete distributions, is a fundamental problem in machine learning, statistics, computer graphics, and theoretical computer science. A long line of work has culminated in a sophisticated geometric algorithm due to [1], which runs in time \(O(n^{3/2})\), where \(n\) is the number of points. However, obtaining faster algorithms has proven difficult since the 2-Wasserstein distance is known to have poor sketching and embedding properties, which limits the effectiveness of geometric approaches. In this paper, we give an extremely simple deterministic algorithm with \(\tilde{O}(n)\) runtime by using a completely different approach based on entropic regularization, approximate Sinkhorn scaling, and low-rank approximations of Gaussian kernel matrices. We give explicit dependence of our algorithm on the dimension and precision of the approximation.
pdf code slides video@article{altschuler2018approximating, title={Approximating the quadratic transportation metric in near-linear time}, author={Altschuler, Jason and Bach, Francis and Rudi, Alessandro and Weed, Jonathan}, journal={Arxiv preprint arXiv:1810.10046}, year={2018} }
Sketching and stochastic gradient methods are arguably the most common techniques to derive efficient large-scale learning algorithms. In this paper, we investigate their application in the context of nonparametric statistical learning. More precisely, we study the estimator defined by stochastic gradient descent with mini-batches and random features. The latter can be seen as a form of nonlinear sketching and can be used to define approximate kernel methods. The considered estimator is not explicitly penalized/constrained and regularization is implicit. Indeed, our study highlights how different parameters, such as the number of features, the number of iterations, the step size and the mini-batch size, control the learning properties of the solutions. We do this by deriving optimal finite-sample bounds under standard assumptions. The obtained results are corroborated and illustrated by numerical experiments.
pdf code slides video@inproceedings{carratino2018learning, title={Learning with sgd and random features}, author={Carratino, Luigi and Rudi, Alessandro and Rosasco, Lorenzo}, booktitle={Advances in Neural Information Processing Systems}, pages={10212--10223}, year={2018} }
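A toy instance of the studied estimator, mini-batch SGD on random Fourier features with no explicit penalty, can be sketched as follows (illustrative only; the bandwidth, step size, feature count and averaging scheme are arbitrary choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 512, 1, 100                      # samples, input dim, features
X = rng.uniform(-3, 3, (n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# random Fourier features approximating the Gaussian kernel
W = rng.standard_normal((d, m))
phase = rng.uniform(0, 2 * np.pi, m)
Z = np.sqrt(2.0 / m) * np.cos(X @ W + phase)

# mini-batch SGD on the unregularized least-squares objective:
# regularization is implicit in the number of features, iterations,
# step size and mini-batch size
theta = np.zeros(m)
avg = np.zeros(m)
step, batch, T = 0.5, 16, 4000
for t in range(T):
    idx = rng.integers(0, n, batch)
    grad = Z[idx].T @ (Z[idx] @ theta - y[idx]) / batch
    theta -= step * grad
    if t >= T // 2:                        # tail-averaged iterate
        avg += theta / (T - T // 2)

mse = np.mean((Z @ avg - y) ** 2)          # training error of the fit
```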
Structured prediction provides a general framework to deal with supervised problems where the outputs have semantically rich structure. While classical approaches consider finite, albeit potentially huge, output spaces, in this paper we discuss how structured prediction can be extended to a continuous scenario. Specifically, we study a structured prediction approach to manifold-valued regression. We characterize a class of problems for which the considered approach is statistically consistent and study how geometric optimization can be used to compute the corresponding estimator. Promising experimental results on both simulated and real data complete our study.
pdf code slides video@inproceedings{rudi2018manifold, title={Manifold Structured Prediction}, author={Rudi, Alessandro and Ciliberto, Carlo and Marconi, GianMaria and Rosasco, Lorenzo}, booktitle={Advances in Neural Information Processing Systems}, pages={5611--5622}, year={2018} }
Applications of optimal transport have recently gained remarkable attention thanks to the computational advantages of entropic regularization. However, in most situations the Sinkhorn approximation of the Wasserstein distance is replaced by a regularized version that is less accurate but easy to differentiate. In this work we characterize the differential properties of the original Sinkhorn distance, proving that it enjoys the same smoothness as its regularized version, and we explicitly provide an efficient algorithm to compute its gradient. We show that this result benefits both theory and applications: on the one hand, high-order smoothness confers statistical guarantees to learning with Wasserstein approximations; on the other hand, the gradient formula allows us to efficiently solve learning and optimization problems in practice. Promising preliminary experiments complement our analysis.
pdf code slides video@inproceedings{luise2018wasserstein, title = {Differential Properties of Sinkhorn Approximation for Learning with Wasserstein Distance}, author = {Luise, Giulia and Rudi, Alessandro and Pontil, Massimiliano and Ciliberto, Carlo}, booktitle = {Advances in Neural Information Processing Systems 31}, pages = {5864--5874}, year = {2018}, }
Leverage score sampling provides an appealing way to perform approximate computations for large matrices. Indeed, it allows one to derive faithful approximations with a complexity adapted to the problem at hand. Yet, performing leverage score sampling is a challenge in its own right, and further approximations are typically needed. In this paper, we study the problem of leverage score sampling for positive definite matrices defined by a kernel. Our contribution is twofold. First, we provide a novel algorithm for leverage score sampling, with theoretical guarantees as well as empirical results showing that the proposed algorithm is currently the fastest and most accurate solution to this problem. Second, we analyze the properties of the proposed method in a downstream supervised learning task. Combining several algorithmic ideas, we derive the fastest solver for kernel ridge regression and Gaussian process regression currently available. Also in this case, theoretical findings are corroborated by experimental results.
pdf code slides video@inproceedings{rudi2018fast, title={On fast leverage score sampling and optimal learning}, author={Rudi, Alessandro and Calandriello, Daniele and Carratino, Luigi and Rosasco, Lorenzo}, booktitle={Advances in Neural Information Processing Systems}, pages={5673--5683}, year={2018} }
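For reference, the quantity being sampled from is the ridge leverage score \(\ell_i(\lambda) = \big(K(K + \lambda n I)^{-1}\big)_{ii}\), whose sum is the effective dimension. Below is the exact \(O(n^3)\) computation that the paper's fast sampler avoids (an illustrative baseline, not the proposed algorithm):

```python
import numpy as np

def ridge_leverage_scores(K, lam):
    """Exact ridge leverage scores of a PSD kernel matrix K:
    l_i(lam) = [K (K + lam * n * I)^{-1}]_{ii}.
    Each score lies in [0, 1]; their sum is the effective dimension."""
    n = K.shape[0]
    # K and (K + lam*n*I)^{-1} commute, so this order gives the same diagonal
    M = np.linalg.solve(K + lam * n * np.eye(n), K)
    return np.diag(M).copy()

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
K = X @ X.T                                  # rank-3 PSD kernel matrix
scores = ridge_leverage_scores(K, lam=0.1)
```

Sampling Nyström centers proportionally to these scores concentrates the subsample on the statistically informative points, which is what yields the downstream guarantees mentioned in the abstract.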
We consider stochastic gradient descent (SGD) for least-squares regression with potentially several passes over the data. While several passes have been widely reported to perform practically better in terms of predictive performance on unseen data, the existing theoretical analysis of SGD suggests that a single pass is statistically optimal. While this is true for low-dimensional easy problems, we show that for hard problems, multiple passes lead to statistically optimal predictions while a single pass does not; we also show that for these hard problems, the optimal number of passes over the data increases with the sample size. In order to define the notion of hardness and show that our predictive performance is optimal, we consider potentially infinite-dimensional models and notions typically associated with kernel methods, namely, the decay of eigenvalues of the covariance matrix of the features and the complexity of the optimal predictor as measured through the covariance matrix. We illustrate our results on synthetic experiments with non-linear kernel methods and on a classical benchmark with a linear model.
pdf code slides video@inproceedings{pillaud2018statistical, title = {Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes}, author = {Pillaud-Vivien, Loucas and Rudi, Alessandro and Bach, Francis}, booktitle = {Advances in Neural Information Processing Systems 31}, pages = {8125--8135}, year = {2018}, }
Key to structured prediction is exploiting the problem structure to simplify the learning process. A major challenge arises when data exhibit a local structure (e.g., are made by "parts") that can be leveraged to better approximate the relation between (parts of) the input and (parts of) the output. Recent literature on signal processing, and in particular computer vision, has shown that capturing these aspects is indeed essential to achieve state-of-the-art performance. While such algorithms are typically derived on a case-by-case basis, in this work we propose the first theoretical framework to deal with part-based data from a general perspective. We derive a novel approach to deal with these problems and study its generalization properties within the setting of statistical learning theory. Our analysis is novel in that it explicitly quantifies the benefits of leveraging the part-based structure of the problem with respect to the learning rates of the proposed estimator.
pdf code slides video@article{ciliberto2018localized, title={Localized Structured Prediction}, author={Ciliberto, Carlo and Bach, Francis and Rudi, Alessandro}, journal={arXiv preprint arXiv:1806.02402}, year={2018} }
Simulating the time-evolution of quantum mechanical systems is BQP-hard and expected to be one of the foremost applications of quantum computers. We consider the approximation of Hamiltonian dynamics using subsampling methods from randomized numerical linear algebra. We propose conditions for the efficient approximation of state vectors evolving under a given Hamiltonian. As an immediate application, we show that sample based quantum simulation, a type of evolution where the Hamiltonian is a density matrix, can be efficiently classically simulated under specific structural conditions. Our main technical contribution is a randomized algorithm for approximating Hermitian matrix exponentials. The proof leverages the Nyström method to obtain low-rank approximations of the Hamiltonian. We envisage that techniques from randomized linear algebra will bring further insights into the power of quantum computation.
pdf code slides video@article{rudi2018approximating, title={Approximating Hamiltonian dynamics with the Nystr\"om method}, author={Rudi, Alessandro and Wossnig, Leonard and Ciliberto, Carlo and Rocchetto, Andrea and Pontil, Massimiliano and Severini, Simone}, journal={arXiv preprint arXiv:1804.02484}, year={2018} }
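The main primitive can be sketched as follows: build a Nyström factorization \(A \approx BB^{H}\) from sampled columns, diagonalize the low-rank factor, and exponentiate only the retained eigenvalues (an illustrative reconstruction under simplifying assumptions, not the paper's exact algorithm or sampling scheme):

```python
import numpy as np

def nystrom_expm_apply(A, v, t, cols, jitter=1e-10):
    """Approximate exp(-1j * t * A) @ v for Hermitian PSD A via the
    Nystrom approximation A ~= C W^+ C^H built from columns `cols`."""
    C = A[:, cols]
    W = A[np.ix_(cols, cols)]
    s, V = np.linalg.eigh(W)
    keep = s > jitter                               # drop null directions
    B = C @ (V[:, keep] / np.sqrt(s[keep]))         # A ~= B B^H
    U, sig, _ = np.linalg.svd(B, full_matrices=False)
    lam = sig ** 2                                  # approx eigenvalues of A
    # exp(-1j t A~) = I + U (diag(exp(-1j t lam)) - I) U^H
    return v + U @ ((np.exp(-1j * t * lam) - 1.0) * (U.conj().T @ v))

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))
A = X @ X.T                                         # exactly rank 3
v = rng.standard_normal(10)
out = nystrom_expm_apply(A, v, t=0.7, cols=np.arange(4))
# the approximate evolution is unitary, so the norm of v is preserved
```

On an exactly low-rank Hamiltonian whose range is spanned by the sampled columns, the Nyström approximation is exact, so the sketch reproduces the full matrix exponential; in general its accuracy depends on the spectral decay conditions studied in the paper.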
Kriging is one of the most widely used emulation methods in simulation. However, memory and time requirements potentially hinder its application to datasets generated by high-dimensional simulators. We borrow from the machine learning literature to propose a new algorithmic implementation of kriging that, while preserving prediction accuracy, notably reduces time and memory requirements. The theoretical and computational foundations of the algorithm are provided. The work then reports results of extensive numerical experiments to compare the performance of the proposed algorithm against current kriging implementations, on simulators of increasing dimensionality. Findings show notable savings in time and memory requirements that allow one to handle inputs in more than \(10,000\) dimensions.
pdf code slides video
In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space. We investigate a class of spectral-regularized algorithms, including ridge regression, principal component analysis, and gradient methods. We prove optimal, high-probability convergence results in terms of variants of norms for the studied algorithms, considering a capacity assumption on the hypothesis space and a general source condition on the target function. Consequently, we obtain almost sure convergence results with optimal rates. Our results improve and generalize previous results, filling a theoretical gap for the non-attainable cases.
pdf code slides video@article{lin2018optimal, title={Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces}, author={Lin, Junhong and Rudi, Alessandro and Rosasco, Lorenzo and Cevher, Volkan}, journal={Applied and Computational Harmonic Analysis}, year={2018}, publisher={Elsevier} }
We consider binary classification problems with positive definite kernels and square loss, and study the convergence rates of stochastic gradient methods. We show that while the excess testing loss (squared loss) converges slowly to zero as the number of observations (and thus iterations) goes to infinity, the testing error (classification error) converges exponentially fast if low-noise conditions are assumed.
pdf code slides video@inproceedings{pillaud2017exponential, author = {Pillaud-Vivien, Loucas and Rudi, Alessandro and Bach, Francis}, title = {Exponential Convergence of Testing Error for Stochastic Gradient Methods}, booktitle = {Conference On Learning Theory, {COLT} 2018, Stockholm, Sweden, 6-9 July 2018.}, pages = {250--296}, year = {2018} }
We study the generalization properties of ridge regression with random features in the statistical learning framework. We show for the first time that \(O(1/\sqrt{n})\) learning bounds can be achieved with only \(O(\sqrt{n})\) random features rather than \(O(n)\) as suggested by previous results. Further, we prove faster learning rates and show that they might require more random features, unless they are sampled according to a possibly problem-dependent distribution. Our results shed light on the statistical-computational trade-offs in large-scale kernelized learning, showing the potential effectiveness of random features in reducing the computational complexity while keeping optimal generalization properties.
pdf code slides video@inproceedings{rudi2017generalization, title={Generalization properties of learning with random features}, author={Rudi, Alessandro and Rosasco, Lorenzo}, booktitle={Advances in Neural Information Processing Systems}, pages={3218--3228}, year={2017} }
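The regime discussed above, ridge regression on \(O(\sqrt{n})\) random features from \(n\) samples, is easy to instantiate (a toy illustration with arbitrary bandwidth and regularization, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 20                        # m = sqrt(n) random features
X = rng.uniform(-3, 3, (n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# random Fourier features for the Gaussian kernel:
# z(x) = sqrt(2/m) cos(W^T x + b), with W ~ N(0, I), b ~ U[0, 2*pi]
W = rng.standard_normal((1, m))
b = rng.uniform(0, 2 * np.pi, m)
Z = np.sqrt(2.0 / m) * np.cos(X @ W + b)

# ridge regression in the m-dimensional feature space: O(n m^2 + m^3) time
lam = 1e-3
theta = np.linalg.solve(Z.T @ Z + lam * n * np.eye(m), Z.T @ y)
mse = np.mean((Z @ theta - y) ** 2)   # training error of the fit
```

The point of the paper is that, under standard assumptions, this \(m = O(\sqrt{n})\) estimator already attains the \(O(1/\sqrt{n})\) excess-risk rate of full kernel ridge regression at a fraction of its cost.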
Kernel methods provide a principled way to perform nonlinear, nonparametric learning. They rely on solid functional analytic foundations and enjoy optimal statistical properties. However, at least in their basic form, they have limited applicability in large-scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that can efficiently process millions of points. FALKON is derived by combining several algorithmic principles, namely stochastic projections, iterative solvers and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved with essentially \(O(n)\) memory and \(O(n \sqrt{n})\) time. Extensive experiments show that state-of-the-art results on available large-scale datasets can be achieved even on a single machine.
pdf code slides video@inproceedings{rudi2017falkon, title={FALKON: An optimal large scale kernel method}, author={Rudi, Alessandro and Carratino, Luigi and Rosasco, Lorenzo}, booktitle={Advances in Neural Information Processing Systems}, pages={3891--3901}, year={2017} }
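Two of the three ingredients, a Nyström projection onto \(M\) random centers and an iterative solver that never forms the full \(n \times n\) kernel matrix, can be sketched in a few lines; FALKON's actual speed additionally relies on a carefully chosen preconditioner and blockwise out-of-core kernel computations, both omitted in this illustrative NumPy/SciPy sketch:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gaussian(A, B):
    """Gaussian kernel matrix between row-wise point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / 2)

rng = np.random.default_rng(0)
n, M = 500, 50                             # n points, M Nystrom centers
X = rng.uniform(-3, 3, (n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

centers = X[rng.choice(n, M, replace=False)]
Knm = gaussian(X, centers)                 # n x M
Kmm = gaussian(centers, centers)           # M x M
lam = 1e-4

# Nystrom kernel ridge regression, solved iteratively; each matvec costs
# O(n M):  (Knm^T Knm + lam * n * Kmm) alpha = Knm^T y
H = LinearOperator((M, M),
                   matvec=lambda a: Knm.T @ (Knm @ a) + lam * n * (Kmm @ a))
alpha, _ = cg(H, Knm.T @ y, maxiter=200)
pred = Knm @ alpha                         # predictions on the training set
```

Without preconditioning, the number of conjugate-gradient iterations grows with the condition number of the system; FALKON's preconditioner is what brings the total cost down to the \(O(n\sqrt{n})\) quoted above.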
In the framework of non-parametric support estimation, we study the statistical properties of an estimator defined by means of Kernel Principal Component Analysis (KPCA). In the context of anomaly/novelty detection, the algorithm was first introduced by Hoffmann in 2007. We also extend the above analysis to a larger class of set estimators defined in terms of a filter function.
pdf code slides video@article{rudi2017regularized, author={Rudi, Alessandro and De Vito, Ernesto and Verri, Alessandro and Odone, Francesca}, title={Regularized Kernel Algorithms for Support Estimation}, journal={Frontiers in Applied Mathematics and Statistics}, volume={3}, pages={23}, year={2017}, doi={10.3389/fams.2017.00023} }
Key to multitask learning is exploiting relationships between different tasks to improve prediction performance. If the relations are linear, regularization approaches can be used successfully. However, in practice assuming the tasks to be linearly related might be restrictive, and allowing for nonlinear structures is a challenge. In this paper, we tackle this issue by casting the problem within the framework of structured prediction. Our main contribution is a novel algorithm for learning multiple tasks which are related by a system of nonlinear equations that their joint outputs need to satisfy. We show that the algorithm is consistent and can be efficiently implemented. Experimental results show the potential of the proposed method.
pdf code slides video@inproceedings{ciliberto2017consistent, title={Consistent Multitask Learning with Nonlinear Output Relations}, author={Ciliberto, Carlo and Rudi, Alessandro and Rosasco, Lorenzo and Pontil, Massimiliano}, booktitle={Advances in Neural Information Processing Systems}, pages={1983--1993}, year={2017} }
We propose and analyze a regularization approach for structured prediction problems. We characterize a large class of loss functions that allows structured outputs to be naturally embedded in a linear space. We exploit this fact to design learning algorithms using a surrogate loss approach and regularization techniques. We prove universal consistency and finite sample bounds characterizing the generalization properties of the proposed method. Experimental results are provided to demonstrate the practical usefulness of the proposed approach.
pdf code slides video@inproceedings{ciliberto2016consistent, title={A Consistent Regularization Approach for Structured Prediction}, author={Ciliberto, Carlo and Rosasco, Lorenzo and Rudi, Alessandro}, booktitle={Advances in Neural Information Processing Systems}, pages={4412--4420}, year={2016} }
Early stopping is a well-known approach to reducing the time complexity of training and model selection for large-scale learning machines. On the other hand, memory/space (rather than time) complexity is the main constraint in many applications, and randomized subsampling techniques have been proposed to tackle this issue. In this paper we ask whether early stopping and subsampling ideas can be combined in a fruitful way. We consider the question in a least squares regression setting and propose a form of randomized iterative regularization based on early stopping and subsampling. In this context, we analyze the statistical and computational properties of the proposed method. Theoretical results are complemented and validated by a thorough experimental analysis.
pdf code slides video@inproceedings{camoriano2016nytro, title={NYTRO: When subsampling meets early stopping}, author={Camoriano, Raffaello and Angles, Tom{\'a}s and Rudi, Alessandro and Rosasco, Lorenzo}, booktitle={Artificial Intelligence and Statistics}, pages={1403--1411}, year={2016} }
We study Nyström-type subsampling approaches to large-scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high-probability estimates are considered. In particular, we prove that these approaches can achieve optimal learning bounds, provided the subsampling level is suitably chosen. These results suggest a simple incremental variant of Nyström Kernel Regularized Least Squares, where the subsampling level implements a form of computational regularization, in the sense that it controls regularization and computations at the same time. Extensive experimental analysis shows that the considered approach achieves state-of-the-art performance on benchmark large-scale datasets.
pdf code slides video@incollection{rudi2015less, title = {Less is More: Nystr\"{o}m Computational Regularization}, author = {Rudi, Alessandro and Camoriano, Raffaello and Rosasco, Lorenzo}, booktitle = {Advances in Neural Information Processing Systems 28}, editor = {C. Cortes and N. D. Lawrence and D. D. Lee and M. Sugiyama and R. Garnett}, pages = {1657--1665}, year = {2015}, publisher = {Curran Associates, Inc.}, url = {http://papers.nips.cc/paper/5936-less-is-more-nystrom-computational-regularization.pdf} }
We consider here the classic problem of support estimation, or learning a set from random samples, and propose a natural but novel approach to address it. We do this by investigating its connection with a seemingly distinct problem, namely subspace learning.
pdf code slides video@incollection{rudi2014learning, title={Learning Sets and Subspaces}, author={Rudi, Alessandro and Canas, G. D. and De Vito, Ernesto and Rosasco, Lorenzo}, booktitle={Regularization, Optimization, Kernels, and Support Vector Machines}, pages={337--357}, year={2014}, publisher={CRC Press} }
In this paper we discuss the Spectral Support Estimation algorithm [1] by analyzing its geometrical and computational properties. The estimator is non-parametric and the model selection depends on three parameters whose role is clarified by simulations on a two-dimensional space. The performance of the algorithm for novelty detection is tested and compared with its main competitors on a collection of real benchmark datasets of different sizes and types.
pdf code slides video@article{rudi2014geometrical, title={Geometrical and computational aspects of Spectral Support Estimation for novelty detection}, author={Rudi, Alessandro and Odone, Francesca and De Vito, Ernesto}, journal={Pattern Recognition Letters}, volume={36}, pages={107--116}, year={2014}, publisher={Elsevier} }
A large number of algorithms in machine learning, from principal component analysis (PCA), and its non-linear (kernel) extensions, to more recent spectral embedding and support estimation methods, rely on estimating a linear subspace from samples. In this paper we introduce a general formulation of this problem and derive novel learning error estimates. Our results rely on natural assumptions on the spectral properties of the covariance operator associated to the data distribution, and hold for a wide class of metrics between subspaces. As special cases, we discuss sharp error estimates for the reconstruction properties of PCA and spectral support estimation. Key to our analysis is an operator theoretic approach that has broad applicability to spectral learning methods.
pdf code slides video@inproceedings{rudi2013sample, title={On the sample complexity of subspace learning}, author={Rudi, Alessandro and Canas, Guillermo D and Rosasco, Lorenzo}, booktitle={Advances in Neural Information Processing Systems}, pages={2067--2075}, year={2013} }
The process of model selection and assessment aims at finding a subset of parameters that minimizes the expected test error for a model related to a learning algorithm. Given a subset of tuning parameters, an exhaustive grid search is typically performed. In this paper an automatic algorithm for model selection and assessment is proposed. It adaptively learns the error function in the parameter space, making use of scale-space theory and statistical learning theory in order to estimate a reduced number of models and, at the same time, to make them more reliable. Extensive experiments are performed on the MNIST dataset.
pdf code slides video@inproceedings{rudi2012adaptive, title={Adaptive Optimization for Cross Validation.}, author={Rudi, Alessandro and Chiusano, Gabriele and Verri, Alessandro}, booktitle={ESANN}, year={2012} }
A novel approach to 3D gaze estimation for wearable multi-camera devices is proposed, and its effectiveness is demonstrated both theoretically and empirically. The proposed approach, firmly grounded on the geometry of the multiple views, introduces a calibration procedure that is efficient, accurate and highly innovative, but also practical and easy: it can run online with little intervention from the user. The overall gaze estimation model is general, as no complex model of the human eye is assumed in this work. This is made possible by a novel approach that can be sketched as follows: each eye is imaged by a camera; two conics are fitted to the imaged pupils; and a calibration sequence, consisting of the subject gazing at a known 3D point while moving his/her head, provides the information to 1) estimate the optical axis in the 3D world; 2) compute the geometry of the multi-camera system; 3) estimate the point of regard in the 3D world. The resulting model is used effectively to study visual attention by means of gaze estimation experiments involving people performing natural tasks in wide-field, unstructured scenarios.
pdf code slides video@inproceedings{pirri2011general, title={A general method for the point of regard estimation in 3D space}, author={Pirri, Fiora and Pizzoli, Matia and Rudi, Alessandro}, booktitle={Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on}, pages={921--928}, year={2011}, organization={IEEE} }
The Viewing Graph [1] represents several views linked by the corresponding fundamental matrices, estimated pairwise. Given a Viewing Graph, the tuples of consistent camera matrices form a family that we call the Solution Set. This paper provides a theoretical framework that formalizes different properties of the topology, linear solvability and number of solutions of multi-camera systems. We systematically characterize the topology of the Viewing Graph in terms of its solution set by means of the associated algebraic bilinear system. Based on this characterization, we provide conditions on the linearity and the number of solutions, and define an inductively constructible set of topologies which admit a unique linear solution. Camera matrices can thus be retrieved efficiently, and large viewing graphs can be handled in a recursive fashion. The results apply to problems such as projective reconstruction from multiple views and the calibration of camera networks.
pdf code slides video@inproceedings{rudi2010linear, title={Linear solvability in the viewing graph}, author={Rudi, Alessandro and Pizzoli, Matia and Pirri, Fiora}, booktitle={Asian Conference on Computer Vision}, pages={369--381}, year={2010}, organization={Springer} }