Computer - Float representation and usage

1 - About

Computer representations of floating-point numbers typically use a form of rounding to significant figures, but in binary. The number of correct significant figures is closely related to the notion of relative error, which has the advantage of being a more accurate measure of precision and of being independent of the radix of the number system used.
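
In IEEE 754 double precision that relative precision is about 2^-52, which Javascript exposes as Number.EPSILON. A minimal sketch:

console.log(Number.EPSILON);               // 2.220446049250313e-16, i.e. 2^-52
console.log(1 + Number.EPSILON === 1);     // false: 1 + EPSILON is the next representable double after 1
console.log(1 + Number.EPSILON / 2 === 1); // true: a smaller increment is lost to rounding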

Modern systems usually provide floating-point support that conforms to the IEEE 754 double-precision (64-bit) format.

Floating-point is ubiquitous in computer systems:

  • Almost every language has a floating-point datatype (Javascript, Python, Java, Oracle (SQL), …);
  • Computers from PCs to supercomputers have floating-point accelerators;
  • Most compilers will be called upon to compile floating-point algorithms from time to time;
  • Every operating system must respond to floating-point exceptions such as overflow.

Generally, a real number has more digits than will fit in the finite physical representation of a float (typically 32 or 64 bits). Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation.

3 - Management

3.1 - Usage

Avoid float and double if exact answers are required

If you need precise numbers (e.g. money), see decimals.
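
A minimal sketch of why (the values are illustrative): summing amounts as floats accumulates rounding error, while working in integer cents stays exact.

console.log(0.10 + 0.20);        // 0.30000000000000004, not 0.3
console.log((10 + 20) / 100);    // 0.3 (integer cents, divided only for display)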

Floats are great for geometry (2D, 3D, …).

Floats (doubles) are fast because they are a native machine type and can be used with vector registers (xmm, etc.), whereas decimals can't.

3.2 - Specification

The IEEE standard gives an algorithm for addition, subtraction, multiplication, division and square root, and requires that implementations produce the same result as that algorithm.
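
Correct rounding means every conforming implementation returns the bit-identical double for these operations; for example (assuming IEEE 754 doubles, as in Javascript):

console.log(1 / 3);   // 0.3333333333333333: the nearest double, on every conforming platform
console.log(2 / 3);   // 0.6666666666666666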

3.3 - Visualization

3.4 - List

3.5 - Rounding Error

Floating-point arithmetic can only produce approximate results, rounding the mathematically exact result to a nearby representable number.

Floating-point numbers offer a trade-off between accuracy and performance.

With 52 bits of fraction precision, if you're trying to represent a number whose binary expansion repeats endlessly, the expansion is cut off after 52 bits.

Unfortunately, most software needs to produce output in base 10, and fractions that terminate in base 10 often have repeating expansions in binary.

For example:

  • 1.1 decimal is binary 1.0001100110011 …;
  • .1 = 1/16 + 1/32 + 1/256 plus an infinite number of additional terms.

IEEE 754 has to cut off that infinitely repeating expansion after 52 bits, so the representation is slightly inaccurate.

Sometimes you can see this inaccuracy when the number is printed (here in an older Python shell; newer versions print the shortest decimal that round-trips):

>>> 1.1
1.1000000000000001
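
In Javascript you can make the stored value visible by asking for more digits than the default; a small sketch (the digits shown assume IEEE 754 doubles):

console.log((1.1).toFixed(20));  // 1.10000000000000008882
console.log((0.1).toFixed(20));  // 0.10000000000000000555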

3.5.1 - Guard Digits

Guard Digits are a means of reducing the error when subtracting two nearby numbers.
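
A related effect, catastrophic cancellation, is visible at the source level: subtracting two nearby values cancels most of their significant digits. A rough Javascript sketch (the rewritten form avoids the subtraction):

const x = 1e15;
console.log(Math.sqrt(x + 1) - Math.sqrt(x));        // direct subtraction: only a few significant digits survive
console.log(1 / (Math.sqrt(x + 1) + Math.sqrt(x)));  // algebraically equal, but far more accurate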

3.6 - Associativity Error

Addition of real numbers is associative, but this is not always true of floating-point addition:

console.log(   (0.1 + 0.2) + 0.3   ); // 0.6000000000000001
console.log(    0.1 + (0.2 + 0.3)  ); // 0.6
 
console.log(   ( (0.1 + 0.2) + 0.3 ) == ( 0.1 + (0.2 + 0.3) )  ); // false

3.7 - Inexact representations

Always remember that floating-point representations using float and double are inexact.

For example, consider these Javascript number expressions (Javascript's only Number type is a 64-bit double):

console.log(999199.1231231235 == 999199.1231231236) // true
console.log(1.03 - 0.41) // 0.6200000000000001

In Java, for exactness, you want to use BigDecimal.
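
When exact decimal arithmetic is not needed, a common workaround is to compare with a relative tolerance instead of ==. A sketch (nearlyEqual and its tolerance are illustrative, not a standard API):

console.log(1.03 - 0.41 === 0.62);  // false
function nearlyEqual(a, b, eps = 4 * Number.EPSILON) {
  // treat a and b as equal if they differ by only a few units in the last place
  return Math.abs(a - b) <= eps * Math.max(Math.abs(a), Math.abs(b));
}
console.log(nearlyEqual(1.03 - 0.41, 0.62));  // true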

3.8 - Integers

Doubles can represent integers exactly up to 53 bits of precision.

All of the integers from -9,007,199,254,740,992 (-2^53) to 9,007,199,254,740,992 (2^53) are therefore exactly representable as doubles.
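
Javascript exposes this limit as Number.MAX_SAFE_INTEGER (2^53 - 1); beyond 2^53, adjacent doubles are more than 1 apart and odd integers start to disappear:

console.log(Number.MAX_SAFE_INTEGER);                  // 9007199254740991, i.e. 2^53 - 1
console.log(Math.pow(2, 53) === Math.pow(2, 53) + 1);  // true: 2^53 + 1 rounds back to 2^53
console.log(Number.isSafeInteger(Math.pow(2, 53)));    // false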

4 - Documentation / Reference