Abstract:
The Lenstra--Lenstra--Lov\'asz lattice basis reduction algorithm
(called LLL or ${\rm L}^3$) is a fundamental tool in computational number
theory and theoretical computer science, which can be viewed
as an efficient algorithmic version of Hermite's inequality on Hermite's constant.
Given an integer $d$-dimensional lattice basis with vectors of Euclidean norm less
than~$B$ in an $n$-dimensional space, the ${\rm L}^3$~algorithm outputs a
reduced basis in~$O(d^3 n \log B \cdot \mathcal{M}(d \log B))$ bit operations,
where~$\mathcal{M}(k)$ denotes the time required to multiply $k$-bit
integers. This worst-case complexity is problematic for
applications where $d$ and/or $\log B$ are often large. As a
result, the original ${\rm L}^3$~algorithm is almost never used in
practice, except in tiny dimensions. Instead, one applies
floating-point variants where the long-integer arithmetic required
by Gram--Schmidt orthogonalization is replaced by floating-point
arithmetic. Unfortunately, this is known to be unstable in the
worst case: the usual floating-point ${\rm L}^3$~algorithm is not even
guaranteed to terminate, and the output basis may not be
${\rm L}^3$-reduced at all. In this article, we introduce the
${\rm L}^2$~algorithm, a new and natural floating-point variant of the
${\rm L}^3$~algorithm which provably outputs ${\rm L}^3$-reduced bases in
polynomial time $O(d^2 n (d + \log B) \log B \cdot \mathcal{M}(d))$. This is the first
${\rm L}^3$~algorithm whose running time (without fast integer arithmetic)
provably grows only quadratically with respect to $\log B$, like
Euclid's gcd algorithm and Lagrange's two-dimensional algorithm.
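To make the cost discussion concrete, here is a minimal textbook sketch of the classical ${\rm L}^3$ reduction with exact rational arithmetic and the usual parameter $\delta = 3/4$; it recomputes the Gram--Schmidt orthogonalization from scratch after every update, which illustrates the long-integer arithmetic that the floating-point variants (and the paper's ${\rm L}^2$ algorithm) aim to avoid. This is an illustrative sketch, not the paper's algorithm.

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(b):
    """Exact rational Gram--Schmidt orthogonalization of the rows of b."""
    n = len(b)
    bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = list(b[i])
        for j in range(i):
            mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [v[k] - mu[i][j] * bstar[j][k] for k in range(len(v))]
        bstar.append(v)
    return bstar, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction over the integers, using exact Fractions."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)
    k = 1
    while k < n:
        bstar, mu = gram_schmidt(b)
        # Size-reduce b[k] against b[k-1], ..., b[0] (descending order).
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])  # nearest integer to the GS coefficient
            if q:
                b[k] = [b[k][i] - q * b[j][i] for i in range(len(b[k]))]
                bstar, mu = gram_schmidt(b)
        # Lovász condition: keep going if satisfied, else swap and back up.
        if dot(bstar[k], bstar[k]) >= \
                (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]
```

For example, `lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])` returns a basis of the same lattice whose first vector is the shortest vector $(0, 1, 0)$. Every arithmetic operation here is on exact rationals whose bit sizes grow with $\log B$, which is precisely why this exact version is so slow compared to floating-point variants.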
Bibtex:
@article{NgSt09,
  title   = {An {LLL} Algorithm with Quadratic Complexity},
  author  = {P. Q. Nguyen and D. Stehl\'e},
  journal = {SIAM Journal on Computing},
  volume  = {39},
  number  = {3},
  pages   = {874--903},
  year    = {2009}
}