
I'm currently marvelling over this:

C++ 11

#include <iostream>
#include <iomanip>
#include <limits>

int main()
{
  double d = 1.305195828773568;
  std::cout << std::setprecision(std::numeric_limits<double>::max_digits10) << d << std::endl;
  // Prints  1.3051958287735681
}

Python

>>> repr(1.305195828773568)
'1.305195828773568'

What's going on, why the extra 1 in C++?

So far I thought that C++ and Python use the same 64 bit IEEE doubles under the hood; both formatting functions are supposed to print the full precision.
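
(A quick way to check that premise, as a sketch in Python 3: the 17-digit string the C++ program printed should parse back to exactly the double that Python stores, since both are the same IEEE 754 value.)

x = 1.305195828773568
print(x == 1.3051958287735681)  # True: the C++ output parses back to the very same double
print(x.hex())                  # the stored bit pattern, 0x1.…p+0 (13 hex mantissa digits)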

nh2
  • Note that *the* property of `repr` is: `eval(repr(x)) == x`, **not** that numbers are printed with all decimal digits. If you want a precision of `k` decimal digits you should use a proper formatting function. – Bakuriu Sep 04 '15 at 05:32
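
A quick illustration of that contract (a small Python 3 sketch):

x = 1.305195828773568
print(eval(repr(x)) == x)   # True – repr only promises a round trip, not "all the digits"
print(format(x, '.17g'))    # 1.3051958287735681 – ask for 17 significant digits explicitly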

3 Answers


You can force Python to print the 1 as well (and many more of the digits that follow):

print('{:.16f}'.format(1.305195828773568))
# -> 1.3051958287735681

From https://docs.python.org/2/tutorial/floatingpoint.html:

>>> 7205759403792794 * 10**30 // 2**56
100000000000000005551115123125L

In versions prior to Python 2.7 and Python 3.1, Python rounded this value to 17 significant digits, giving ‘0.10000000000000001’. In current versions, Python displays a value based on the shortest decimal fraction that rounds correctly back to the true binary value, resulting simply in ‘0.1’.
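
The same effect for the 0.1 example from the docs (a small sketch, Python ≥ 2.7 / 3.1):

>>> repr(0.1)                    # shortest string that round-trips back to the stored double
'0.1'
>>> format(0.1, '.17g')          # the old 17-significant-digit style
'0.10000000000000001'
>>> float('0.10000000000000001') == 0.1
True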

"print the full precision" is hard to do: what is the full precision? the representation of floats is binary; only fractions of powers of 2 can be represented exactly (to full precision); most decimal fractions can not be represented exactly in base 2.

But the float in memory is the same for Python and C++; it is just the string representation that differs.
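
If you want to see what the truly full precision looks like, decimal.Decimal can show the exact value of the stored double (a sketch, Python 3; the result is the same exact value Patricia Shanahan quotes in the comments):

from decimal import Decimal

# Decimal(float) converts the binary value exactly, with no rounding,
# so this is the number both Python and C++ actually store:
print(Decimal(1.305195828773568))
# -> 1.3051958287735681008001620284630917012691497802734375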

hiro protagonist
  • "most decimal fractions can not be represented exactly in base 2" - yes, but reverse is not true. All binary fractions are representable in decimal. So it should be possible to get exact decimal representation. – n0rd Sep 04 '15 at 01:00
  • @n0rd Correct. The only downsides are that it would be a rather long representation and probably not representative of the actual input from which the binary float was obtained. – Eugene Ryabtsev Sep 04 '15 at 04:28
  • The exact value of the closest IEEE 64-bit binary floating point number is 1.3051958287735681008001620284630917012691497802734375 - definitely not representative of the actual input. – Patricia Shanahan Sep 04 '15 at 05:38
  • Indeed, exact decimal representation of a floating-point number would require in the worst case as many digits as the length of the mantissa plus one, so 53 digits in case of doubles. – P-Gn Sep 04 '15 at 08:23
  • thanks for your comments! these make the answer way clearer and to the point. – hiro protagonist Sep 04 '15 at 08:47
  • @user1735003: it can take many more digits than that. Take for example 1e-20. When converted to a double and printed in decimal it is 0.00000000000000000000999999999999999945153271454209571651729503702787392447107715776066783064379706047475337982177734375 (123 significant digits). (Smaller numbers will have many more digits.) – Rick Regan Sep 04 '15 at 14:11
  • @Rick you're right. 53 is the worst case when the exponent is zero. Add one for every negative exponent. So with 1e-20, you get an exponent of -67 in base two, so that should be a max of 67+53 = 120 digits... which is consistent with your example that actually has "only" 100 significant digits (starting from the leading 9). – P-Gn Sep 04 '15 at 15:23
  • @user1735003: Thanks, I (we) miscounted. It's 99 significant digits (and 20 insignificant leading 0s). – Rick Regan Sep 04 '15 at 17:12

When the format ends up using fixed-point notation, precision() specifies the number of fractional digits. Since your example has an additional non-fractional digit, one more digit than can safely be represented gets printed.

When using scientific notation, the total number of digits is counted, and you'll get the same digits as the original (plus an exponent, of course). The C and C++ options for formatting floating-point numbers are actually fairly bad. In particular, there is no option which lets the formatter decide the appropriate number of digits, although the underlying algorithm can actually determine them.
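
For what it's worth, the fixed-versus-scientific distinction is easy to see with Python's format specifiers, which count digits the same way the corresponding C++ manipulators do (a sketch: '.Nf' counts fractional digits like std::fixed, '.Ne' counts mantissa digits like std::scientific, '.Ng' counts significant digits):

x = 1.305195828773568

print(format(x, '.15f'))   # fixed, 15 fractional digits     -> 1.305195828773568
print(format(x, '.16f'))   # one more fractional digit       -> 1.3051958287735681
print(format(x, '.15e'))   # scientific, 15 mantissa digits  -> 1.305195828773568e+00
print(format(x, '.17g'))   # 17 significant digits           -> 1.3051958287735681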

Dietmar Kühl

Taken from an answer to this question:

IEEE 754 floating point is done in binary. There's no exact conversion from a given number of bits to a given number of decimal digits. 3 bits can hold values from 0 to 7, and 4 bits can hold values from 0 to 15. A value from 0 to 9 takes roughly 3.5 bits, but that's not exact either.

An IEEE 754 double precision number occupies 64 bits. Of this, 52 bits are dedicated to the significand (the rest is a sign bit and exponent). Since the significand is (usually) normalized, there's an implied 53rd bit.

Now, given 53 bits and roughly 3.5 bits per digit, simple division gives us 15.1429 digits of precision. But remember, that 3.5 bits per decimal digit is only an approximation, not a perfectly accurate answer.

This weird .1429 on top of the 15 digits is probably the culprit behind the extra 1 you are seeing.
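
That back-of-the-envelope figure can be made a bit more precise (a sketch): 53 bits correspond to 53 · log10(2) ≈ 15.95 decimal digits, which is why std::numeric_limits<double>::digits10 is 15 (digits guaranteed to survive a decimal round trip) while max_digits10 is 17 (digits needed to reproduce the double exactly).

import math

# 53 significand bits correspond to 53 * log10(2) decimal digits
print(53 * math.log10(2))   # ~15.95: 15 digits always safe, 17 needed for an exact round trip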

For what it's worth, Python has this written on their site:

Historically, the Python prompt and built-in repr() function would choose the one with 17 significant digits, 0.10000000000000001. Starting with Python 3.1, Python (on most systems) is now able to choose the shortest of these and simply display 0.1.

erip