TITLE: Smooth Optimization with Approximate Gradient.
AUTHORS: Alexandre d'Aspremont
ABSTRACT: We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is only computed up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this often means computing only a few dominant eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using maximum eigenvalue packages such as ARPACK.
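The computational shortcut described in the abstract can be illustrated with a small sketch. Assuming the smoothed objective is the usual soft-max approximation of the maximum eigenvalue, f_mu(X) = mu log tr exp(X/mu), its gradient is exp(X/mu)/tr exp(X/mu); since this matrix is dominated by the top of the spectrum when mu is small, a few dominant eigenpairs (computed by ARPACK, exposed in SciPy as `eigsh`) approximate it well. The function names and the choice of k below are illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import eigsh


def smooth_grad_exact(X, mu):
    """Gradient of f_mu(X) = mu * log tr exp(X/mu): needs a full matrix exponential."""
    E = expm(X / mu)
    return E / np.trace(E)


def smooth_grad_approx(X, mu, k):
    """Approximate the same gradient from only the k largest eigenpairs (ARPACK).

    exp(X/mu) is dominated by the largest eigenvalues when mu is small,
    so truncating the spectral sum incurs only a uniformly small error.
    """
    w, V = eigsh(X, k=k, which='LA')       # k largest algebraic eigenvalues
    e = np.exp((w - w.max()) / mu)         # shift by w.max() for numerical stability
    return (V * (e / e.sum())) @ V.T       # V diag(weights) V^T
```

For sparse problems, `eigsh` accepts a `scipy.sparse` matrix directly, so the approximate gradient never forms a dense matrix exponential; this is the cost reduction the abstract refers to.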
STATUS: SIAM Journal on Optimization, 19(3), pp. 1171-1183, December 2008.
ARXIV PREPRINT: math.OC/0512344
PAPER: Local pdf file [PDF/SmoothSparseSDP.pdf]