Jacques Stern
Designing Identification Schemes with Keys of Short Size.
To appear in Advances in Cryptology CRYPTO'94
In the last few years, there have been several attempts to
build identification
protocols that do not rely on arithmetical operations with large
numbers but only use simple operations.
One was presented at the CRYPTO 89 rump session and
depends on the so-called Permuted Kernel problem (PKP).
Another appeared in the CRYPTO 93 proceedings and is based
on the syndrome decoding problem (SD) from the theory of
error-correcting codes. In this paper, we
introduce a new scheme of the same family with the distinctive character
that both the secret key and the public identification key can
be taken to be of short length. By short, we basically mean
the usual key size of conventional symmetric cryptosystems. As
is well known, the possibility of using short keys has been a
challenge in public-key cryptography and has practical applications.
Our scheme relies on a combinatorial problem which we
call Constrained Linear Equations (CLE for short) and
which consists of solving a set of linear equations modulo
some small prime q, the unknowns being constrained to lie in
a specific subset of the integers mod q. Thus, we enlarge
the set of tools that can be used in cryptography.
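To make the shape of a CLE instance concrete, here is a minimal sketch in Python. The modulus, matrix, allowed subset and solution below are purely illustrative toy values, not parameters from the paper; at real parameter sizes the brute-force search used here is of course infeasible, which is what the scheme's security rests on.

```python
# Toy instance of Constrained Linear Equations (CLE): solve A.x = b (mod q)
# with every unknown restricted to a subset S of the integers mod q.
# All values below are illustrative, not parameters from the paper.
from itertools import product

q = 11                      # a small prime modulus
S = [1, 3, 4, 5, 9]         # allowed values for each unknown (hypothetical)
A = [[2, 7, 1], [5, 3, 8]]  # coefficient matrix of the linear system
b = [10, 0]                 # right-hand side

def is_solution(x):
    """Check that A.x is congruent to b (mod q), componentwise."""
    return all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) % q == r
               for row, r in zip(A, b))

# Exhaustive search over S^n works only at toy sizes.
solutions = [x for x in product(S, repeat=3) if is_solution(x)]
print(solutions)
```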
Lattice Reduction: a Toolbox for the Cryptanalyst.
(Joint work with Antoine Joux).
In recent years, methods based on lattice reduction have been used
repeatedly in cryptanalytic attacks on various systems. Even though
they do not rest on highly sophisticated theories, these methods may
look a bit intricate to practically oriented cryptographers, both from
the mathematical and the algorithmic point of view. The aim of the
present paper is to explain what can be achieved by lattice reduction
algorithms, even without an understanding of the actual mechanisms
involved. Two examples are given, one of them being the attack devised
by the second named author against Knuth's truncated linear congruential
generator, which was announced a few years ago and appears here for
the first time in journal form.
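For illustration, the following sketch shows the object of that attack: a linear congruential generator that leaks only the leading bits of each state. The constants and truncation below are illustrative, not the parameters from Knuth's proposal or from the attack; the point is only that an attacker sees the short `leaked` values, and lattice reduction lets the hidden states be recovered from them.

```python
# Sketch of a truncated linear congruential generator (the attack's target).
# Parameters a, c, m and the truncation width are illustrative only.
def truncated_lcg(seed, a=1103515245, c=12345, m=2**31, out_bits=8, n=10):
    """Return the top out_bits bits of each successive LCG state."""
    x = seed
    outputs = []
    for _ in range(n):
        x = (a * x + c) % m                         # full hidden state
        outputs.append(x >> (m.bit_length() - 1 - out_bits))  # leaked bits
    return outputs

leaked = truncated_lcg(seed=123456789)
print(leaked)  # only these truncated values are visible to the attacker
```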
The Hardness of Approximate Optima in Lattices, Codes, and Systems of Linear Equations.
(Joint work with Sanjeev Arora, László Babai and Z. Sweedyk).
Published in proceedings of FOCS'93
We prove the following about
the Nearest Lattice Vector Problem (in any l_p norm),
the Nearest Codeword Problem for binary codes,
the problem of learning a halfspace in the presence of errors, and some other problems.
- Approximating the optimum within any constant factor is NP-hard.
- If for some ε > 0 there exists a
polynomial time algorithm that approximates the optimum within a factor of
2^(log^(1-ε) n), then NP is in quasi-polynomial
deterministic time: NP ⊆ DTIME(n^(poly(log n))).
Moreover, we show that the second result also holds for the
Shortest Lattice Vector Problem in the l_∞ norm.
Improving the factor 2^(log^(1-ε) n) to √(dim) for either
of the lattice problems would imply the hardness of the Shortest
Vector Problem in the l_2 norm, an old open problem.
Our proofs use reductions from few-prover, one-round interactive
proof systems,
either directly, or through a set-cover problem.
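To fix ideas, here is a minimal sketch of one of the problems whose approximation the result concerns, the Nearest Lattice Vector Problem, solved exactly by enumeration on a toy two-dimensional lattice. The basis, target and search bound are illustrative values chosen for this example; enumeration is exponential in the dimension, and the theorem says even approximate answers are hard to compute in general.

```python
# Toy Nearest Lattice Vector Problem: enumerate small integer combinations
# of a 2-dimensional basis and return the lattice point closest to the
# target t in the Euclidean norm. Basis and target are illustrative.
from itertools import product

B = [(3, 1), (1, 2)]      # lattice basis vectors
t = (4.6, 2.2)            # target point (not in the lattice)

def closest(bound=5):
    """Exhaustively search coefficients in [-bound, bound]^2."""
    best = None
    for c1, c2 in product(range(-bound, bound + 1), repeat=2):
        v = (c1 * B[0][0] + c2 * B[1][0], c1 * B[0][1] + c2 * B[1][1])
        dist = (v[0] - t[0]) ** 2 + (v[1] - t[1]) ** 2
        if best is None or dist < best[0]:
            best = (dist, v)
    return best[1]

print(closest())
```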
Expansion properties of directed random Schreier graphs
of small degree and an application to cryptography
(Joint work with Antoine Joux and Jean-Pierre Tillich).
Submitted to FOCS'94
We show that random directed Schreier graphs on S(n)/S(n-k) (where
S(n) is the group of permutations of n elements)
are, for fixed k, almost always good expanders. These graphs generalize the common probabilistic model of
random regular graphs, for which such results were already known.
Nevertheless, here we are particularly interested in estimating
the expansion constant of directed random graphs, whereas all the known
results were obtained for undirected graphs. Moreover, our result
covers the case of graphs with very low degree, which was
not really considered by the previous work of A. Broder, J. Friedman, and
E. Shamir.
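The construction itself can be sketched in a few lines: vertices are the cosets S(n)/S(n-k), i.e. ordered k-tuples of distinct points, and each of a few random permutations contributes one outgoing edge per vertex. The values of n, k and the degree below are toy choices for illustration only.

```python
# Toy directed Schreier graph on S(n)/S(n-k): vertices are ordered k-tuples
# of distinct points of {0,...,n-1}; each of d random permutations of
# {0,...,n-1} gives one outgoing edge per vertex. n, k, d are toy values.
import random
from itertools import permutations

random.seed(0)
n, k, d = 5, 2, 2
vertices = list(permutations(range(n), k))             # n!/(n-k)! cosets
gens = [random.sample(range(n), n) for _ in range(d)]  # d random permutations

# Apply each generator coordinatewise to get the out-neighbours of a vertex.
edges = {v: [tuple(g[x] for x in v) for g in gens] for v in vertices}
print(len(vertices), sum(len(outs) for outs in edges.values()))
```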
What motivated our study was a cryptographic issue: it was
thought that some low-cost cryptographic devices could be constructed
by using permutation automata. We prove here that the proposed scheme is in
general not secure, provided that some random directed Schreier graphs
with low degree have good expansion properties. The
aforementioned results imply that one can break such schemes
by a polynomial time algorithm.
Approximating the number of error locations is NP-complete.
Proceedings of the AAECC-10 Conference
Using recent results from complexity theory, we show that,
under the assumption P ≠ NP, no
polynomial time algorithm can compute an
upper bound for the number of error locations of a word y
with respect to a code C, which is guaranteed to be
within a constant ratio of the true (Hamming) distance of y
to C.
Thus the barrier which prevents the design of very general
decoding algorithms that would apply to unstructured codes
is even more solid than was thought before.
We also give an analogous result for integer lattices.
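For a sense of what is being approximated, the sketch below computes the exact Hamming distance from a word y to a small binary linear code by exhaustive search over all messages. The generator matrix and word are toy values chosen for this example; the search is exponential in the code dimension, and the result above says that even approximating this distance within a constant ratio is intractable for general codes.

```python
# Exact distance from a word y to a toy [6,3] binary linear code C,
# by brute force over all 2^3 messages. G and y are illustrative values.
from itertools import product

G = [[1, 0, 0, 1, 1, 0],   # generator matrix of the toy code
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1]]

def encode(msg):
    """Codeword m.G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def distance_to_code(y):
    """Minimum Hamming distance from y to any codeword of C."""
    return min(sum(a != b for a, b in zip(y, encode(msg)))
               for msg in product([0, 1], repeat=len(G)))

print(distance_to_code([1, 1, 0, 0, 0, 1]))
```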
A new identification scheme based on syndrome decoding.
Proceedings of CRYPTO'93
Zero-knowledge proofs were introduced in 1985, in a paper by
Goldwasser, Micali and Rackoff. Their practical
significance was soon demonstrated in the work of Fiat and
Shamir, who turned zero-knowledge proofs of quadratic
residuosity into efficient means of establishing user identities.
Still, as is almost always the case in public-key cryptography,
the Fiat-Shamir scheme relied on arithmetic operations on large
numbers. In 1989, there were two attempts to build identification
protocols that only use simple operations.
One appeared in the
EUROCRYPT proceedings and relies on the intractability of some coding
problems, the other was presented at the CRYPTO rump session and
depends on the so-called Permuted Kernel problem (PKP). Unfortunately,
the first of the schemes was not really practical. In the present paper,
we propose a new identification scheme, based on error-correcting codes,
which is zero-knowledge and is of practical value. Furthermore, we
describe several variants, including one which has an identity based
character. The security of our scheme depends on the hardness of decoding
a word of given syndrome w.r.t. some binary linear error-correcting code.
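The underlying hard problem can be stated in a few lines of code. In the kind of scheme described, a low-weight word e serves as the secret key and its syndrome s = H·e (mod 2) as the public key; recovering e from s is the syndrome decoding problem. The matrix and secret below are tiny illustrative values, not parameters of the actual scheme.

```python
# Sketch of syndrome computation over GF(2): given a parity-check matrix H
# and a low-weight secret word e, the public syndrome is s = H.e (mod 2).
# H and e are toy values for illustration only.
H = [[1, 0, 1, 1, 0, 1],
     [0, 1, 1, 0, 1, 1],
     [1, 1, 0, 1, 1, 0]]

def syndrome(e):
    """Matrix-vector product H.e over GF(2)."""
    return [sum(h * x for h, x in zip(row, e)) % 2 for row in H]

secret_e = [0, 0, 1, 0, 1, 0]   # secret word of low Hamming weight
public_s = syndrome(secret_e)   # published; inverting this is hard at scale
print(public_s)
```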
Attacks on the Birational Permutation Signature
(Joint work with Don Coppersmith and Serge Vaudenay)
In Proceedings of CRYPTO'93.
Shamir presents a family of
cryptographic signature schemes based on
birational permutations of the integers modulo a large
integer N of unknown factorization.
These schemes are attractive
because of the low computational requirements,
both for signature generation and signature verification.
However, the two schemes presented in Shamir's paper are weak.
We show here how to break the first scheme, by first reducing it
algebraically to the earlier Ong-Schnorr-Shamir signature scheme,
and then applying the Pollard solution
to that scheme.
We then show some attacks on the second scheme.
These attacks give ideas which can be applied to schemes
in this general family.
Weaknesses of a public key cryptosystem
based on factorization of finite groups.
(Joint work with Simon Blackburn and Sean Murphy).
Proceedings of EUROCRYPT'93
Recently, Qu and Vanstone have announced the construction
of several new public-key cryptosystems based on group
factorization. One of these was described at the last AUSCRYPT
meeting. We point out a serious weakness of this last system
which makes it insecure. Our method only uses elementary
algebra.
On the length of cryptographic hash-values used in
identification schemes.
(Joint work with M. Girault).
Proceedings of CRYPTO'94
Polynomial-time construction of linear codes with almost equal weights.
(Joint work with G. Lachaud).
in Sequences II, Methods in Communications, Security, and
Computer Science, Springer-Verlag, 1991
Polynomial-time construction of spherical codes.
(Joint work with G. Lachaud).
Proceedings of the AAECC-9 Conference
Polynomial-time construction of codes I: Linear codes with almost equal weights.
(Joint work with G. Lachaud).
in Applicable Algebra in Engineering, Communication and Computing, 1992
Polynomial-time construction of codes II: Spherical codes and the kissing number of spheres.
(Joint work with G. Lachaud).
in IEEE Transactions on Information Theory, 1993