IEEE Standard 754 floating point is the most common representation today for real numbers on computers, including Intel-based PCs, Macintoshes, and most Unix platforms. This article gives a brief overview of IEEE floating point and its representation. Discussion of arithmetic implementation may be found in the book mentioned at the bottom of this article.
There are several ways to represent real numbers on computers. Fixed point places a radix point somewhere in the middle of the digits, and is equivalent to using integers that represent portions of some unit. For example, one might represent 1/100ths of a unit; if you have four decimal digits, you could represent 10.82, or 00.01. Another approach is to use rationals, and represent every number as the ratio of two integers.
Floating-point representation – the most common solution – uses scientific notation to encode numbers, with a base number and an exponent. For example, 123.456 could be represented as 1.23456 × 10^{2}. In hexadecimal, the number 123.abc might be represented as 1.23abc × 16^{2}. In binary, the number 10100.110 could be represented as 1.0100110 × 2^{4}.
Floating-point solves a number of representation problems. Fixed-point has a fixed window of representation, which limits it from representing both very large and very small numbers. Also, fixed-point is prone to a loss of precision when two large numbers are divided.
Floating-point, on the other hand, employs a sort of "sliding window" of precision appropriate to the scale of the number. This allows it to represent numbers from 1,000,000,000,000 to 0.0000000000000001 with ease, while maintaining precision (the number of significant digits) at both ends of the scale.
IEEE floating point numbers have three basic components: the sign, the exponent, and the mantissa. The mantissa is composed of the fraction and an implicit leading digit (explained below). The exponent base (2) is implicit and need not be stored.
The following figure shows the layout for single (32-bit) and double (64-bit) precision floating-point values. The number of bits for each field is shown (bit ranges are in square brackets, 00 = least-significant bit):

 | Sign | Exponent | Fraction | Bias |
---|---|---|---|---|
Single Precision | 1 [31] | 8 [30–23] | 23 [22–00] | 127 |
Double Precision | 1 [63] | 11 [62–52] | 52 [51–00] | 1023 |
Laid out as bits, floating point numbers look like this:
Single: SEEEEEEE EMMMMMMM MMMMMMMM MMMMMMMM
Double: SEEEEEEE EEEEMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM
The sign bit is as simple as it gets. 0 denotes a positive number, and 1 denotes a negative number. Flipping the value of this bit flips the sign of the number.
The exponent field needs to represent both positive and negative exponents. To do this, a bias is added to the actual exponent in order to get the stored exponent. For IEEE single-precision floats, this value is 127. Thus, an exponent of zero means that 127 is stored in the exponent field. A stored value of 200 indicates an exponent of (200 − 127), or 73. For reasons discussed later, exponents of −127 (all 0s) and +128 (all 1s) are reserved for special numbers.
For double precision, the exponent field is 11 bits, and has a bias of 1023.
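The field layout above can be inspected directly. Here is a minimal sketch in Python using the standard `struct` module (the helper name `fields32` is ours, not part of any library):

```python
import struct

def fields32(x):
    """Split a single-precision float into its sign, exponent, and fraction fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32-bit pattern
    sign     = bits >> 31            # 1 bit  [31]
    exponent = (bits >> 23) & 0xFF   # 8 bits [30-23], stored with bias 127
    fraction = bits & 0x7FFFFF       # 23 bits [22-00]
    return sign, exponent, fraction

# 1.0 = +1.0 x 2^0, so the stored exponent is 0 + 127 = 127
print(fields32(1.0))   # -> (0, 127, 0)
print(fields32(-2.0))  # -> (1, 128, 0)
```

Note the stored exponent for −2.0 is 128, i.e. a true exponent of 1.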
The mantissa, also known as the significand, represents the precision bits of the number. It is composed of an implicit leading bit (left of the radix point) and the fraction bits (to the right of the radix point).
To find out the value of the implicit leading bit, consider that any number can be expressed in scientific notation in many different ways. For example, the number 50 can be represented as any of these:
.5000 × 10^{2}
0.050 × 10^{3}
5000. × 10^{−2}
In order to maximize the quantity of representable numbers, floating-point numbers are typically stored in normalized form. This basically puts the radix point after the first non-zero digit. In normalized form, five is represented as 5.000 × 10^{0}.
A nice little optimization is available to us in base two, since there's only one possible non-zero digit: 1. Thus, we can just assume a leading digit of 1, and don't need to store it in the floating-point representation. As a result, a 32-bit floating-point value effectively has 24 bits of mantissa: 23 explicit fraction bits plus one implicit leading bit (of 1).
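The implicit leading bit can be made explicit in code. A small Python sketch (the helper name `decode32` is ours) that rebuilds a normalized single-precision value from its raw fields:

```python
import struct

def decode32(x):
    """Reconstruct a normalized single-precision value as (-1)^s * 1.f * 2^(e-127)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    s = bits >> 31
    e = (bits >> 23) & 0xFF
    f = bits & 0x7FFFFF
    significand = 1 + f / 2**23      # implicit leading 1 plus 23 fraction bits
    return (-1)**s * significand * 2.0**(e - 127)

print(decode32(6.25))    # 6.25 = 1.1001 x 2^2, reconstructed exactly
```

This only handles normalized values; zeros, denormals, infinities, and NaNs (all covered below) would need their own cases.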
So, to sum up:

1. The sign bit is 0 for positive, 1 for negative.
2. The exponent's base is two.
3. The exponent field contains 127 plus the true exponent for single precision, or 1023 plus the true exponent for double precision.
4. The first bit of the mantissa is typically assumed to be 1.f, where f is the field of fraction bits.
Let's consider single-precision floats for a second. Note that we're taking essentially a 32-bit number and reinterpreting the fields to cover a much broader range. Something has to give, and it's precision. For example, regular 32-bit integers, with all precision centered around zero, can precisely store integers with 32 bits of resolution. Single-precision floating-point, on the other hand, is unable to match this resolution with its 24 bits. It does, however, approximate this value by effectively truncating from the lower end and rounding up. For example:
  11110000 11001100 10101010 10101111      // 32-bit integer
= +1.1110000 11001100 10101011 × 2^{31}    // Single-precision float
= 11110000 11001100 10101011 00000000      // Corresponding value
This approximates the 32-bit value, but doesn't yield an exact representation. On the other hand, besides the ability to represent fractional components (which integers lack completely), the floating-point value can represent numbers around 2^{127}, compared to 32-bit integers' maximum value around 2^{32}.
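This rounding can be observed directly. A quick Python check of the integer from the example above:

```python
import struct

n = 0b11110000_11001100_10101010_10101111   # the 32-bit integer above
# Round-trip through single precision: only 24 significant bits survive
f = struct.unpack(">f", struct.pack(">f", float(n)))[0]

print(int(f) == n)                                       # False: not exact
print(int(f) == 0b11110000_11001100_10101011_00000000)   # True: rounded up
```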
The range of positive floating point numbers can be split into normalized numbers (which preserve the full precision of the mantissa), and denormalized numbers (discussed later) which use only a portion of the fraction's precision.
 | Denormalized | Normalized | Approximate Decimal |
---|---|---|---|
Single Precision | ±2^{−149} to (1−2^{−23}) × 2^{−126} | ±2^{−126} to (2−2^{−23}) × 2^{127} | ±~10^{−44.85} to ~10^{38.53} |
Double Precision | ±2^{−1074} to (1−2^{−52}) × 2^{−1022} | ±2^{−1022} to (2−2^{−52}) × 2^{1023} | ±~10^{−323.3} to ~10^{308.3} |
Since every floating-point number has a corresponding, negated value (by toggling the sign bit), the ranges above are symmetric around zero.
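These boundary values can be checked from the bit patterns themselves. A Python sketch (the helper name `f32` is ours) that builds the single-precision extremes:

```python
import struct

def f32(bits):
    """Interpret a raw 32-bit pattern as a single-precision float."""
    return struct.unpack(">f", struct.pack(">I", bits))[0]

smallest_denorm = f32(0x00000001)   # 2^-149: only the last fraction bit set
largest_denorm  = f32(0x007FFFFF)   # (1 - 2^-23) x 2^-126
smallest_norm   = f32(0x00800000)   # 2^-126: exponent field 1, fraction 0
largest_norm    = f32(0x7F7FFFFF)   # (2 - 2^-23) x 2^127

print(smallest_denorm == 2.0**-149)             # True
print(largest_norm == (2 - 2**-23) * 2.0**127)  # True
```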
There are five distinct numerical ranges that single-precision floating-point numbers are not able to represent:

1. Negative numbers less than −(2−2^{−23}) × 2^{127} (negative overflow)
2. Negative numbers greater than −2^{−149} (negative underflow)
3. Zero
4. Positive numbers less than 2^{−149} (positive underflow)
5. Positive numbers greater than (2−2^{−23}) × 2^{127} (positive overflow)
Overflow means that values have grown too large for the representation, much in the same way that you can overflow integers. Underflow is a less serious problem because it just denotes a loss of precision, which is guaranteed to be closely approximated by zero.
Here's a table of the total effective range (excluding infinite values) of IEEE floating-point numbers:
Binary | Decimal | |
---|---|---|
Single | ± (2−2^{−23}) × 2^{127} | ≈ ± 10^{38.53} |
Double | ± (2−2^{−52}) × 2^{1023} | ≈ ± 10^{308.25} |
Note that the extreme values occur (regardless of sign) when the exponent is at the maximum value for finite numbers (2^{127} for single-precision, 2^{1023} for double), and the mantissa is filled with 1s (including the normalizing 1 bit).
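For doubles, this extreme can be checked against the value Python reports for the platform's double type (a sketch; it assumes IEEE doubles, which CPython uses on all mainstream platforms):

```python
import math
import sys

# Largest finite double: mantissa all 1s (2 - 2^-52), maximum finite exponent 2^1023
max_double = (2 - 2**-52) * 2.0**1023

print(max_double == sys.float_info.max)   # True
print(round(math.log10(max_double), 2))   # 308.25
```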
IEEE reserves exponent field values of all 0s and all 1s to denote special values in the floating-point scheme.
If the exponent is all 0s but the fraction is non-zero (else it would be interpreted as zero), then the value is a denormalized number, which has an assumed leading 0 before the binary point. Thus, a single-precision denormalized number represents (−1)^{s} × 0.f × 2^{−126}, where s is the sign bit and f is the fraction. For double precision, denormalized numbers are of the form (−1)^{s} × 0.f × 2^{−1022}.
As denormalized numbers get smaller, they gradually lose precision as the left bits of the fraction become zeros. At the smallest denormalized value (only the least-significant fraction bit is one), a 32-bit floating-point number has but a single bit of precision, compared to the standard 24 bits for normalized values.
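This gradual loss can be watched by repeatedly halving the smallest normalized single-precision value (a Python sketch; the helper name `f32` is ours):

```python
import struct

def f32(x):
    """Round a Python float to single precision and back."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

x = f32(2.0**-126)        # smallest normalized single-precision value
for _ in range(23):
    x = f32(x / 2)        # each halving shifts one more fraction bit away

print(x == 2.0**-149)     # True: the smallest denormal, one bit of precision
print(f32(x / 2))         # 0.0: one more halving underflows to zero
```

The final halving lands exactly between 0 and 2^{−149}; round-to-nearest-even picks 0, so the value underflows completely.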
The value NaN (Not a Number) is used to represent a value that is not a real number. NaNs are represented by a bit pattern with an exponent of all 1s and a non-zero fraction. There are two categories of NaN: QNaN (Quiet NaN) and SNaN (Signalling NaN).

A QNaN is a NaN with the most significant fraction bit set. QNaNs propagate freely through most arithmetic operations. These values are generated from an operation when the result is not mathematically defined.

An SNaN is a NaN with the most significant fraction bit clear. It is used to signal an exception when used in operations. SNaNs can be handy to assign to uninitialized variables to trap premature usage.
Semantically, QNaN's denote indeterminate operations, while SNaN's denote invalid operations.
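The special bit patterns can be constructed by hand. A Python sketch (helper name `f32` is ours); note that merely loading an SNaN pattern through the FPU typically quiets it, so only the quiet case is built here:

```python
import math
import struct

def f32(bits):
    """Interpret a raw 32-bit pattern as a single-precision float."""
    return struct.unpack(">f", struct.pack(">I", bits))[0]

inf  = f32(0x7F800000)   # exponent all 1s, fraction zero
qnan = f32(0x7FC00000)   # exponent all 1s, most significant fraction bit set
# 0x7F800001 would be an SNaN (fraction MSB clear, fraction non-zero), but
# hardware tends to quiet it on load, so round-tripping it is not reliable.

print(math.isinf(inf))    # True
print(math.isnan(qnan))   # True
```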
Operations on special numbers are well-defined by IEEE. In the simplest case, any operation with a NaN yields a NaN result. Other operations are as follows:
Operation | Result |
---|---|
n ÷ ±∞ | 0 |
±∞ × ±∞ | ±∞ |
±nonZero ÷ ±0 | ±∞ |
±finite × ±∞ | ±∞ |
∞ + ∞, ∞ − −∞ | +∞ |
−∞ − ∞, −∞ + −∞ | −∞ |
∞ − ∞, −∞ + ∞ | NaN |
±0 ÷ ±0 | NaN |
±∞ ÷ ±∞ | NaN |
±∞ × 0 | NaN |
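Most of these rules can be exercised directly from Python, whose floats are IEEE doubles (a sketch; note that Python raises ZeroDivisionError for division by a float zero rather than returning ±∞, so those rows can't be reproduced with the / operator):

```python
import math

inf = math.inf

print(1.0 / inf)               # 0.0:  n / +-inf -> 0
print(inf + inf)               # inf:  inf + inf -> +inf
print(-inf - inf)              # -inf
print(math.isnan(inf - inf))   # True: inf - inf -> NaN
print(math.isnan(inf * 0.0))   # True: +-inf x 0 -> NaN
```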
To sum up, the following are the corresponding values for a given representation, where b is the exponent bias (127 for single precision, 1023 for double):

Sign | Exponent (e) | Fraction (f) | Value |
---|---|---|---|
0 | 00⋯00 | 00⋯00 | +0 |
0 | 00⋯00 | 00⋯01 to 11⋯11 | Positive Denormalized Real: 0.f × 2^{(−b+1)} |
0 | 00⋯01 to 11⋯10 | XX⋯XX | Positive Normalized Real: 1.f × 2^{(e−b)} |
0 | 11⋯11 | 00⋯00 | +∞ |
0 | 11⋯11 | 00⋯01 to 01⋯11 | SNaN |
0 | 11⋯11 | 1X⋯XX | QNaN |
1 | 00⋯00 | 00⋯00 | −0 |
1 | 00⋯00 | 00⋯01 to 11⋯11 | Negative Denormalized Real: −0.f × 2^{(−b+1)} |
1 | 00⋯01 to 11⋯10 | XX⋯XX | Negative Normalized Real: −1.f × 2^{(e−b)} |
1 | 11⋯11 | 00⋯00 | −∞ |
1 | 11⋯11 | 00⋯01 to 01⋯11 | SNaN |
1 | 11⋯11 | 1X⋯XX | QNaN |
A lot of this stuff was observed from small programs I wrote to go back and forth between hex and floating point (printf-style), and to examine the results of various operations. The bulk of this material, however, was lifted from Stallings' book.