Computers manipulate approximations of real numbers, called floating-point
numbers. The calculations they perform are accurate enough for most applications.
Unfortunately, in some (catastrophic) situations, floating-point
operations lose so much precision that their results quickly become meaningless. In this article,
we review some of the problems one can encounter, focusing on the IEEE 754-1985
standard. We sketch a semantics of its basic operations, then abstract them
(in the sense of abstract interpretation) to extract information about
the possible loss of precision. The expected application is abstract
debugging of software ranging from
simple on-board systems (which increasingly use off-the-shelf microprocessors
with floating-point units) to scientific codes. The abstract analysis
is demonstrated on simple examples and compared with related work.
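
As a minimal illustration of the phenomenon discussed above (our own sketch,
not an example taken from the article), the following C fragment shows how
absorption followed by cancellation in IEEE 754 single precision turns an
exact answer of 1 into 0:

\begin{verbatim}
#include <stdio.h>

int main(void) {
    /* Absorption: 1.0 is smaller than half the rounding step of
       float at 1e8 (the step is 8 there), so adding it is a no-op. */
    float big = 1.0e8f;
    float sum = big + 1.0f;          /* rounds back to 1.0e8f */

    /* Catastrophic cancellation: subtracting nearly equal numbers
       exposes the earlier rounding error as a 100% relative error. */
    float diff = sum - big;          /* exact result would be 1.0 */

    printf("sum  = %.1f\n", sum);    /* prints 100000000.0 */
    printf("diff = %.1f\n", diff);   /* prints 0.0, not 1.0 */
    return 0;
}
\end{verbatim}

An abstract analysis of the kind summarized above aims to flag such
operations statically, rather than after the wrong result is observed.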