JavaScript - Floating-Point Values

Introduction

To define a floating-point value, include a decimal point and at least one number after the decimal point.

Here are some examples:

var floatNum1 = 1.1; 
var floatNum2 = 0.1; 
var floatNum3 = .1;     //valid, but not recommended 

When there is no digit after the decimal point, or when the number being represented is a whole number (such as 1.0), ECMAScript converts the value into an integer, as in this example:

var floatNum1 = 1.;     //missing digit after decimal - interpreted as integer 1 
var floatNum2 = 10.0;   //whole number - interpreted as integer 10 
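
One way to observe this behavior (a small sketch, not part of the original example) is Number.isInteger(), available since ECMAScript 2015, which reports whether a value is a whole number:

var floatNum1 = 1.;
var floatNum2 = 10.0;

console.log(Number.isInteger(floatNum1));    //true - the value is the whole number 1
console.log(Number.isInteger(floatNum2));    //true - the value is the whole number 10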

Floating-point values can be represented using e-notation.

E-notation is used to indicate a number that should be multiplied by 10 raised to a given power.

The format of e-notation in ECMAScript is to have a number followed by an uppercase or lowercase letter E, followed by the power of 10 to multiply by.

var floatNum = 3.125e7;    //equal to 31250000 

In this example, floatNum is equal to 31,250,000. The notation essentially says, "Take 3.125 and multiply it by 10,000,000."

E-notation can also be used to represent very small numbers; for example, 0.00000000000000003 can be written as 3e-17.
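
Written as a literal, it looks like this (the variable name is just for illustration):

var smallNum = 3e-17;    //equal to 0.00000000000000003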

By default, ECMAScript converts any floating-point value with at least six zeros after the decimal point into e-notation.
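
For example, converting such a value to a string shows the e-notation form (the variable name here is only for illustration):

var smallValue = 0.0000003;            //six zeros after the decimal point
console.log(smallValue.toString());    //"3e-7"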

Floating-point values are accurate up to 17 decimal places but are far less accurate in arithmetic computations than whole numbers.

For instance, adding 0.1 and 0.2 yields 0.30000000000000004 instead of 0.3.

var a = 0.1; 
var b = 0.2; 

if (a + b == 0.3){    //avoid! the sum is actually 0.30000000000000004 
    console.log("0.3."); 
} 

Here the sum of 0.1 and 0.2 is tested to see whether it equals 0.3. Because the sum is actually 0.30000000000000004, the comparison is false and the message is never logged.

For this reason, you should never test for specific floating-point values.
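
A common workaround, not covered above, is to compare against a small tolerance instead of testing for exact equality; a minimal sketch using Number.EPSILON (ECMAScript 2015 and later) looks like this:

var a = 0.1;
var b = 0.2;

//treat the values as equal if they differ by less than a tiny tolerance
if (Math.abs((a + b) - 0.3) < Number.EPSILON){
    console.log("Close enough to 0.3.");
}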