
I came across this issue a while back when I was testing some HTML forms. The maximum number of digits in a JavaScript Number with a decimal point is only 16.

I have tried the following

var x = 12345678912345.6789

x is 12345678912345.68 - 16 digits only

var x = 123456789123.6789

x is 123456789123.6789 - 16 digits only

new Number(12345678912345.6789)

12345678912345.68 - 16 digits only

new Number(123456789123.6789)

123456789123.6789 - 16 digits only

If you count the total digits, they are 16. If you increase the number of digits before the decimal point, the digits after the decimal point get rounded.

Similarly

new Number(.12345678912367890)

is 0.1234567891236789 - 16 digits only (notice the trailing 0 is missing)

This makes me deduce that I can only have 16 digits in a number with a decimal point in it. If I try to add more digits, the number starts to get rounded.

I also observed that when I serialize a number with a decimal point to JSON in ASP.NET MVC, it likewise keeps at most 16 digits and rounds the rest.
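
For reference, the same rounding can be reproduced client-side with plain JSON.stringify (shown only as an illustration; it uses the standard Number-to-string conversion, so the behaviour does not look specific to ASP.NET MVC):

// The Number is already rounded before any serialization happens
console.log(JSON.stringify({ x: 12345678912345.6789 })); // {"x":12345678912345.68}
console.log(JSON.stringify({ x: 123456789123.6789 }));   // {"x":123456789123.6789}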

My question: why?

U.P
  • Because the precision of this number type is 16 digits? Note also that the number type you are using is probably not _exact_, meaning that it uses some sort of float implementation. – Tim Biegeleisen Feb 21 '19 at 05:46
  • JS `Number` uses double-precision floating point, which follows IEEE 754 (see [here](https://modernweb.com/what-every-javascript-developer-should-know-about-floating-points/)); its 53-bit significand gives only about 15-17 significant decimal digits. That's why some numbers get rounded when written with more than 16 digits. – Tetsuya Yamamoto Feb 21 '19 at 05:54
  • @TetsuyaYamamoto thanks for your comment. It helped me understand things clearly. Would you mind posting this as an answer so I can mark it as correct? – U.P Feb 21 '19 at 07:50

2 Answers


According to the ECMAScript specification, the Number type uses double-precision floating point, i.e. the 64-bit format (binary64): one sign bit (positive or negative value), 11 exponent bits and 52 fraction bits. Together with the implicit leading bit that gives a 53-bit significand, which corresponds to roughly 15-17 significant decimal digits (53 × log10 2 ≈ 15.95):

> The Number type representing the double-precision 64-bit format IEEE 754-2008 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic.

The largest integer that can be represented exactly in double precision is 9007199254740992, which is Math.pow(2, 53). Between Math.pow(2, 53) and Math.pow(2, 54) the spacing between representable values grows to 2, so only even integers can be stored exactly, because the least-significant bit of an odd value no longer fits in the 53-bit significand.
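
You can check that boundary directly in a console (a minimal sketch using only built-in Number behaviour):

// 2^53 - 1 is the last integer whose neighbours are all exactly representable
console.log(Math.pow(2, 53));                        // 9007199254740992
console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991

// Above 2^53 the spacing between doubles is 2, so odd values collapse
console.log(9007199254740992 + 1);                   // 9007199254740992
console.log(9007199254740993 === 9007199254740992);  // true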

Let's review the large number part:

var x = 12345678912345.6789

var x = new Number(12345678912345.6789)

This value needs more than 53 significant bits to be represented exactly (the fractional part .6789 is not even exactly representable in binary), so it is rounded to the nearest value that the 52 fraction bits can hold.
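
You can watch the rounding happen by asking for more digits than the significand actually stores (a minimal sketch; the extra digits are simply the binary approximation spelled out in decimal):

var x = 12345678912345.6789;
console.log(x);                        // 12345678912345.68 (shortest round-trip form)
console.log(x.toPrecision(18));        // 12345678912345.6797 (the stored double, shown with more digits)
console.log(x === 12345678912345.68);  // true: both literals map to the same double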

Also with this decimal number:

var x = new Number(.12345678912367890)

Here the trailing zero carries no value at all, and when the stored double is converted back to a string, JavaScript prints the shortest form that round-trips to the same value, which is the 16-digit 0.1234567891236789, so the zero disappears.
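
This is easy to verify (a minimal sketch): the literal with and without the trailing zero denotes exactly the same double, and the shortest string that round-trips back to it has 16 digits.

var y = 0.12345678912367890;
console.log(y.toString());              // 0.1234567891236789
console.log(y === 0.1234567891236789);  // true: same double, the zero carried no value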

In practice, values larger than 9007199254740992, or values that need finer resolution than about 1.1102230246251565E-16 (that is, 2^-53), are often stored as strings instead of Number. If you need to compute with very large or very precise numbers, there are external libraries available that perform arbitrary-precision arithmetic.
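
As one concrete option (a sketch only; the answer itself just points at external libraries), modern engines also ship BigInt for exact integer arithmetic beyond 2^53, although decimal fractions still need a dedicated library:

console.log(9007199254740993);   // 9007199254740992 (Number loses the odd value)
console.log(9007199254740993n);  // 9007199254740993n (BigInt keeps it exact)
console.log(2n ** 60n);          // 1152921504606846976n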

Further reading:

ECMAScript: Number Encoding

ECMAScript: Working with large integers

Tetsuya Yamamoto

In JavaScript you can't represent most decimal fractions exactly with binary floating-point types (which is what ECMAScript uses to represent floating-point values).
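
The classic demonstration (a minimal sketch) is that even very short decimal fractions pick up binary rounding error:

console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false
console.log((0.1).toFixed(20));  // 0.10000000000000000555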

This is why JavaScript keeps only about 16 significant digits in total, not 16 digits after the decimal point. For example, try to run:

var x = 3.14159265358979323846;
console.log(x.toFixed(20));

and you will see something like 3.14159265358979311600: the literal has already been rounded to the nearest 64-bit double, and toFixed(20) just spells that binary approximation out in decimal.

If you need to work with more than about 16 significant digits, you will have to use an arbitrary-precision (big-number) library instead of the built-in Number type.

Barr J