300

There have been several questions posted to SO about floating-point representation. For example, the decimal number 0.1 doesn't have an exact binary representation, so it's dangerous to use the == operator to compare it to another floating-point number. I understand the principles behind floating-point representation.

What I don't understand is why, from a mathematical perspective, the numbers to the right of the decimal point are any more "special" than the ones to the left.

For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the decimal one place and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers -- they're just numbers.

By contrast, if I move the decimal one place in the other direction to produce the number 610, I'm still in Exactopia. I can keep going in that direction (6100, 610000000, 610000000000000) and they're still exact, exact, exact. But as soon as the decimal crosses some threshold, the numbers are no longer exact.

What's going on?

Edit: to clarify, I want to stay away from discussion about industry-standard representations, such as IEEE, and stick with what I believe is the mathematically "pure" way. In base 10, the positional values are:

... 1000  100   10    1   1/10  1/100 ...

In binary, they would be:

... 8    4    2    1    1/2  1/4  1/8 ...

There are also no arbitrary limits placed on these numbers. The positions increase indefinitely to the left and to the right.

Barry Brown
  • 19,087
  • 14
  • 65
  • 102
  • 58
    In binary, the number 3 is represented as 2¹+2⁰=2+1. Nice and easy. Now, take a look at 1/3. How would you represent that, using negative powers of 2? Experiment a little and you'll see that 1/3 equals the sum of the infinite series 2^-2 + 2^-4 + 2^-6 + 2^-8 + ..., i.e. not that easy to represent exactly in binary. – Lars Haugseth Jul 06 '09 at 21:33
  • 22
    Jon Skeet answers the question in your body very well. One thing that is missing is that you actually ask two different questions. The title question is "why can't decimal numbers be represented exactly in binary?" The answer is, they can be. Between your title and body you conflate the idea of "binary" and the idea of a "floating point representation." Floating point is a way of expressing decimal numbers in a fixed number of binary digits at the cost of precision. Binary is just a different base for counting and can express any number decimal can, given an infinite number of digits. – Chris Blackwell Jul 06 '09 at 22:22
  • 3
    There are several systems that have exact decimal representation. They work pretty much like you describe. The SQL decimal type is one example. LISP languages have it built in. There are several commercial and open-source libraries for doing exact decimal calculations. It's just that there's no hardware support for this, and most languages and hardware out there implement the IEEE standards for representing an infinite amount of numbers in 32 or 64 bits. – nos Jul 10 '09 at 20:47
  • 2
    You might find this helpful to understand exactly what's going on inside a floating point number: [Anatomy of a floating point number](http://www.johndcook.com/blog/2009/04/06/anatomy-of-a-floating-point-number/). – John D. Cook Jul 06 '09 at 20:23
  • 1
    This question appears to be off-topic because it is about Math (even if it's programming related math) and would be better on [math.se] – Cole Johnson Sep 07 '14 at 17:03

20 Answers

381

Decimal numbers can be represented exactly, if you have enough space - just not by floating binary point numbers. If you use a floating decimal point type (e.g. System.Decimal in .NET) then plenty of values which can't be represented exactly in binary floating point can be exactly represented.

Let's look at it another way - in base 10, which you're likely to be comfortable with, you can't express 1/3 exactly. It's 0.3333333... (recurring). You can't represent 0.1 as a binary floating point number for exactly the same reason. You can represent 3, and 9, and 27 exactly - but not 1/3, 1/9 or 1/27.

The problem is that 3 is a prime number which isn't a factor of 10. That's not an issue when you want to multiply a number by 3: you can always multiply by an integer without running into problems. But when you divide by a number which is prime and isn't a factor of your base, you can run into trouble (and will do so if you try to divide 1 by that number).

Although 0.1 is usually used as the simplest example of an exact decimal number which can't be represented exactly in binary floating point, arguably 0.2 is a simpler example as it's 1/5 - and 5 is the prime that causes problems between decimal and binary.
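
To make this concrete, here's a quick sketch in Python, whose decimal.Decimal plays the same role as System.Decimal here:

from decimal import Decimal

# As a binary floating point value, "0.1" is silently rounded to the
# nearest representable double:
print(f"{0.1:.20f}")    # 0.10000000000000000555
# A floating *decimal* point type stores one tenth exactly:
print(Decimal("0.1"))   # 0.1
# Decimal can also display the exact value the double above really holds:
print(Decimal(0.1))     # 0.1000000000000000055511151231257827021181583404541015625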


Side note to deal with the problem of finite representations:

Some floating decimal point types have a fixed size, like System.Decimal; others, like java.math.BigDecimal, are "arbitrarily large" - but they'll hit a limit at some point, whether it's system memory or the theoretical maximum size of an array. This is an entirely separate point from the main one of this answer, however. Even if you had a genuinely arbitrarily large number of bits to play with, you still couldn't represent decimal 0.1 exactly in a floating binary point representation. Compare that with the other way round: given an arbitrary number of decimal digits, you can exactly represent any number which is exactly representable as a floating binary point.

Jon Skeet
  • 1,261,211
  • 792
  • 8,724
  • 8,929
  • 8
    That's a damn fine example sir! – Tom Ritter Jul 06 '09 at 20:23
  • 5
    ...wish I could upvote this twice. I've been asked about this entirely too many times. It's almost like people can't think outside of base 10. hehe – Justin Niessner Jul 06 '09 at 20:23
  • 38
    Yeah, there are 10 kinds of people in the world - those who understand binary and those who don't. – duffymo Jul 06 '09 at 20:26
  • http://en.wikipedia.org/wiki/IEEE_754-1985 provides a decent explanation of the internal representation of IEEE binary floating point numbers. And its examples may help clarify why precision is affected by representational choice. – LBushkin Jul 06 '09 at 20:31
  • "Decimal numbers _can_ be represented exactly". false. if at best misleading. you still have finite space/time, and most numbers cannot be represented in _any_ form anyway. – nlucaroni Jul 06 '09 at 20:47
  • So it's the relative-primeness of the two bases that causes the problem, correct? By that line of reasoning, would it be correct to say that any number that can be represented exactly in base 10 can also be represented exactly in, say, base 20 because they share the same prime factors? – Barry Brown Jul 06 '09 at 20:48
  • 1
    @Barry: Yes, that's right. And any binary number can be represented exactly in decimal for exactly the same reason. – Jon Skeet Jul 06 '09 at 20:49
  • @nlucaroni: Note that I said "decimal number" not "any number". If it can be represented in decimal, then it's clearly a rational number to start with. I'll edit for the finiteness side of things. – Jon Skeet Jul 06 '09 at 20:50
  • I don't understand what you're saying (if anything) about the difference between base 10 and base 2. Any finite number (rational and/or irrational) can be expressed exactly (defining 'exactly' as meaning 'with vanishingly small imprecision') if you can have a non-finite number of decimal places, no matter whether in base 2 or in base 10. – ChrisW Jul 06 '09 at 21:27
  • 2
    @ChrisW: That's because of the way you're defining "exactly". My point is that however many "3"s you have at the end of 0.333333... you won't get to 1/3... whereas if you give me any exact binary number of the form [10100010101].[101010101] for as big as you want on each side of the binary point, I can give you a decimal value which is absolutely exactly the same value, in a finite number of digits. – Jon Skeet Jul 06 '09 at 21:32
  • 1
    I do, however, take your point that a truly infinite number of 3s (rather than just an *arbitrary but finite* number) would give an infinitely good approximation of 1/3. Will edit. – Jon Skeet Jul 06 '09 at 21:32
  • 83
    @JonSkeet: *Ctrl+Alt+Delete* would look awkward with just two fingers. – Lars Haugseth Jul 06 '09 at 21:39
  • 2
    @Barry: not relative prime, but with one base having a prime factor that is not a factor of the other base (e.g. 10 and 6 are not relatively prime, but 1/10 is an infinite decimal in base 6, and 1/6 is an infinite decimal in base 10). – Jason S Jul 06 '09 at 22:13
  • We just have to wait until evolution gives us either 2 hands with 8 fingers each or 4 hands with 4 fingers each. I just don't like the chop-8-off idea. – Toon Krijthe Jul 07 '09 at 09:31
  • 3
    It's worth noting that .3333333 repeating infinitely, is not an approximation of 1/3 but is in fact equal to 1/3. It's the exact same concept as .999999... is equal to 1. – JSchlather Jul 07 '09 at 22:59
  • 1
    .999999999999999 even with an infinite number of digits is definitely not equal to one. It asymptotically approaches 1, but never reaches it. It's the exact same concept as 1/x != 0 even as x->infinity. – muusbolla Aug 03 '09 at 17:24
  • 2
    @muusbolla: I believe this depends on various precise definitions, particularly "equals". For example, it makes sense to suggest that if there is no non-zero value of d for which x + d = y, then x=y. You can't find any such value of d for the difference between 0.9 recurring and 1... – Jon Skeet Aug 03 '09 at 17:42
  • We can represent 1/3, 1/9, 1/27, or any rational in decimal notation. We do it by adding an extra symbol. For example, a line over the digits that repeat in the decimal expansion of the number. So what we need to do is represent numbers as a sequence of binary numbers, a radix point, and some other symbol to indicate the repeating part of the sequence. – ntownsend Sep 25 '09 at 14:25
  • 20
    @muusbolla: No. The numbers represented by the decimal representation `1` and the decimal representation `0.9...` (infinitely repeating `9`s after the decimal point) are equal. Perhaps the easiest way to see this is the following: Let x = `0.9...`. Note that `10x = 9.9....`. Therefore `9x = 10x - x = 9.9... - 0.9... = 9` so that `9x = 9` and `x = 1`. There are other ways to see this, but I believe that this is the simplest. – jason Nov 04 '09 at 16:36
  • 3
    @Lars Haugseth: think Ctrl+Alt *pedals* :o) – Piskvor left the building Jul 08 '10 at 15:22
  • 1
    @Piskvor: This is why the term nose boot was invented. – Joe D Aug 12 '10 at 12:17
  • 1
    I don't think *"3 is a prime number which isn't a factor of 10"* properly describes it either. It should be more like *"the factors of x are/are not all also factors of 10."* I don't know if there's a word for that. Also @muusbolla see [http://en.wikipedia.org/wiki/0.999...](http://en.wikipedia.org/wiki/0.999...) – BlueRaja - Danny Pflughoeft Feb 27 '11 at 20:25
  • @JonSkeet; `You can't represent 0.1 as a binary floating point number for exactly the same reason`: I do not find any recurring pattern in `1/10`, unlike `1/3`. (Also, `10` (decimal) is not prime.) – haccks Nov 21 '13 at 14:34
  • 1
    @haccks: You do if you try to represent a tenth in binary rather than decimal. 10 isn't prime, but the point is that 5 is coprime to 2, so anything which has a factor of 5 in will not be able to be exactly represented in binary floating point. (You can't represent 0.1 exactly in binary because you can't represent 0.2 exactly in binary...) – Jon Skeet Nov 21 '13 at 14:43
  • What about `3` and `7`, which are also coprime to `2`? – haccks Nov 21 '13 at 15:03
  • [@duffymo](https://stackoverflow.com/questions/1089018/why-cant-decimal-numbers-be-represented-exactly-in-binary#comment902751_1089026) There are 2 kinds of people in the world - those who can deal with incomplete information. – chux - Reinstate Monica Oct 25 '17 at 21:24
  • @muusbolla you're wrong; you can't say that 9.99... - 0.99... = 9, it's actually a limit approaching 9 from below. So x approaches 1 from below; in fact it was 0.9999... Maths is always right. – Raoul Scalise Mar 24 '20 at 09:18
26

For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the decimal one place and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers -- they're just numbers.

Let's step away for a moment from the particulars of bases 10 and 2. Let's ask - in base b, what numbers have terminating representations, and what numbers don't? A moment's thought tells us that a number x has a terminating b-representation if and only if there exists an integer n such that x b^n is an integer.

So, for example, x = 11/500 has a terminating 10-representation, because we can pick n = 3 and then x b^n = 22, an integer. However x = 1/3 does not, because whatever n we pick we will not be able to get rid of the 3.

This second example prompts us to think about factors, and we can see that for any rational x = p/q (assumed to be in lowest terms), we can answer the question by comparing the prime factorisations of b and q. If q has any prime factors not in the prime factorisation of b, we will never be able to find a suitable n to get rid of these factors.

Thus for base 10, any p/q where q has prime factors other than 2 or 5 will not have a terminating representation.

So now, going back to bases 10 and 2, we see that a rational p/q has a terminating 10-representation exactly when q has only 2s and 5s in its prime factorisation, and that the same number has a terminating 2-representation exactly when q has only 2s in its prime factorisation.

But one of these cases is a subset of the other! Whenever

q has only 2s in its prime factorisation

it obviously is also true that

q has only 2s and 5s in its prime factorisation

or, put another way, whenever p/q has a terminating 2-representation, p/q has a terminating 10-representation. The converse, however, does not hold - whenever q has a 5 in its prime factorisation (alongside, at most, 2s), it will have a terminating 10-representation but not a terminating 2-representation. This is the 0.1 example mentioned by other answers.

So there we have the answer to your question - because the prime factors of 2 are a subset of the prime factors of 10, all 2-terminating numbers are 10-terminating numbers, but not vice versa. It's not about 61 versus 6.1 - it's about 10 versus 2.

As a closing note, if by some quirk people used (say) base 17 but our computers used base 5, your intuition would never have been led astray by this - there would be no (non-zero, non-integer) numbers which terminated in both cases!
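
If you want to check the criterion mechanically, here's a sketch in Python: repeatedly stripping from q every prime factor it shares with b is equivalent to comparing the two prime factorisations.

from math import gcd

def terminates(p, q, base):
    """True if p/q has a terminating representation in the given base."""
    q //= gcd(p, q)            # reduce to lowest terms
    g = gcd(q, base)
    while g > 1:               # strip every prime factor q shares with base
        while q % g == 0:
            q //= g
        g = gcd(q, base)
    return q == 1              # leftover prime factors mean a repeating expansion

print(terminates(61, 1, 2))    # True:  61 is an integer, exact in any base
print(terminates(61, 10, 10))  # True:  6.1 terminates in base 10
print(terminates(61, 10, 2))   # False: 6.1 repeats in base 2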

AakashM
  • 59,217
  • 16
  • 147
  • 181
  • So then why does "alert(0.15*0.15)" display "0.0225"? – Michael Geiser Nov 03 '14 at 20:00
  • 5
    @MichaelGeiser short answer: rounding at the point of display. What you think is `0.15` is actually (when stored as an IEEE double) `0.149999999999999994448884876874`. See [jsfiddle](http://jsfiddle.net/j69gLdvr/). – AakashM Nov 03 '14 at 22:29
  • Nice, clear, on-point code example! I wish I could give you an upvote for that! I have to play with a few functions to explore where the round-up cutoff occurs. I'm still just amazed that we actually have to deal with this garbage; since people work in base ten almost 100% of the time, and we use non-integers so much of the time, you'd think the default implementation of floating point math would handle this nonsense. – Michael Geiser Nov 04 '14 at 16:13
  • 1
    @MichaelGeiser the circuits to work with base 2 are smaller, faster, and more power efficient than the ones to work with base 10. Today we might be able to justify the overhead but in the 1970s when the standards were being set, it was a big deal. Trying to do it without the direct support of processor circuitry is even worse, expect orders of magnitude differences in speed. – Mark Ransom Jan 27 '16 at 22:59
  • This answer explains better than Jon Skeet himself! – goelakash Mar 13 '16 at 21:53
  • This answer explains how to check whether a number `x` has a terminating `b`-representation rigorously. – Jingguo Yao Jan 13 '17 at 06:01
16

The root (mathematical) reason is that when you are dealing with integers, they are countably infinite.

Which means that, even though there are an infinite number of them, we could "count out" all of the items in the sequence without skipping any. That means that if we want to get the item at the 610000000000000th position in the list, we can figure it out via a formula.

However, real numbers are uncountably infinite. You can't say "give me the real number at position 610000000000000" and get back an answer. The reason is that, even between 0 and 1, there is an uncountable infinity of values when you are considering real values. The same holds true for the interval between any two real numbers.

More info:

http://en.wikipedia.org/wiki/Countable_set

http://en.wikipedia.org/wiki/Uncountable_set

Update: My apologies, I appear to have misinterpreted the question. My response is about why we cannot represent every real value; I hadn't realized that every exactly-representable floating point value is automatically rational.

TM.
  • 94,986
  • 30
  • 119
  • 125
  • 6
    Actually, rational numbers *are* countably infinite. But not every *real* number is a rational number. I can certainly produce a sequence of exact decimal numbers which will reach any exact decimal number you want to give me eventually. It's if you need to deal with *irrational* numbers as well that you get into uncountably infinite sets. – Jon Skeet Jul 06 '09 at 20:25
  • True, I should be saying "real", not "floating-point". Will clarify. – TM. Jul 06 '09 at 20:27
  • 1
    At which point the logic becomes less applicable, IMO - because not only can we not deal with all *real* numbers using binary floating point, but we can't even deal with all *rational* numbers (such as 0.1). In other words, I don't think it's really to do with countability at all :) – Jon Skeet Jul 06 '09 at 20:31
  • @jonskeet I know that disagreeing with Jon Skeet would break a fundamental law of nature, so of course I won't do it :) However, I do think that it is okay to think of the internal representation of the numbers as indices to a set of the values that you want to represent externally. With this line of thinking, you can see that no matter how big your list of indices is (even if you had say, infinite bits of precision), you *still* wouldn't be able to represent all the real numbers. – TM. Jul 06 '09 at 20:39
  • 3
    @TM: But the OP isn't trying to represent all the real numbers. He's trying to represent all exact *decimal* numbers, which is a subset of the *rational* numbers, and therefore only countably infinite. If he were using an infinite set of bits *as a decimal floating point type* then he'd be fine. It's using those bits as a *binary* floating point type that causes problems with decimal numbers. – Jon Skeet Jul 06 '09 at 20:41
  • @molf that's a really good point. I guess I was misinterpreting the question as "why can't we represent any fractional value when we can represent any integer value". – TM. Jul 06 '09 at 20:42
11

To repeat what I said in my comment to Mr. Skeet: we can represent 1/3, 1/9, 1/27, or any rational in decimal notation. We do it by adding an extra symbol. For example, a line over the digits that repeat in the decimal expansion of the number. To represent decimal numbers as a sequence of binary digits, then, we need 1) a sequence of binary digits, 2) a radix point, and 3) some other symbol to indicate the repeating part of the sequence.

Hehner's quote notation is a way of doing this. He uses a quote symbol to represent the repeating part of the sequence. The article: http://www.cs.toronto.edu/~hehner/ratno.pdf and the Wikipedia entry: http://en.wikipedia.org/wiki/Quote_notation.

There's nothing that says we can't add a symbol to our representation system, so we can represent decimal rationals exactly using binary quote notation, and vice versa.
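
As a sketch of the bookkeeping involved (this is plain long-division cycle detection in Python, not Hehner's quote-notation arithmetic): a remainder we have seen before means the binary digits have started repeating.

def binary_expansion(p, q):
    """Split p/q into (integer part, non-repeating bits, repeating bits)."""
    ipart, r = divmod(p, q)
    bits, seen = [], {}
    while r and r not in seen:
        seen[r] = len(bits)     # remember where each remainder first occurred
        r *= 2
        bits.append(r // q)
        r %= q
    if r == 0:
        return ipart, bits, []  # terminating expansion, nothing repeats
    start = seen[r]             # the cycle began where we first saw r
    return ipart, bits[:start], bits[start:]

print(binary_expansion(1, 3))    # (0, [], [0, 1]):        1/3   = 0.(01) in binary
print(binary_expansion(61, 10))  # (6, [0], [0, 0, 1, 1]): 61/10 = 110.0(0011) in binary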

ntownsend
  • 6,884
  • 9
  • 35
  • 34
  • That notation system works if we know where the cycle starts and ends. Humans are pretty good at detecting cycles. But, in general, computers aren't. To be able to use a repetition symbol effectively, the computer would have to be able to figure out where the cycles are after doing a calculation. For the number 1/3, for example, the cycle starts right away. But for the number 1/97, the cycle doesn't show itself until you've worked out the answer to at least 96 digits. (Actually, you'd need 96*2+1 = 193 digits to be sure.) – Barry Brown Sep 25 '09 at 17:04
  • 4
    Actually it's not hard at all for the computer to detect the cycle. If you read Hehner's paper he describes how to detect the cycles for the various arithmetic operations. For example, in the division algorithm, which uses repeated subtraction, you know where the cycle begins when you see a difference that you have seen before. – ntownsend Sep 28 '09 at 14:08
  • 3
    Also, the question was about representing numbers exactly. Sometimes exact representation means a lot of bits. The beauty of quote notation is that Hehner demonstrates that on average there is a 31% saving in the size of representation compared to the standard 32-bit fixed-length representation. – ntownsend Sep 28 '09 at 14:14
6

BCD - Binary-coded Decimal - representations are exact. They are not very space-efficient, but that's a trade-off you have to make for accuracy in this case.

Alan
  • 3,667
  • 1
  • 24
  • 33
  • 1
    BCD is no more or less exact than any other base. Example: how do you represent 1/3 exactly in BCD? You can't. – Jörg W Mittag Jul 07 '09 at 09:23
  • 14
    BCD is an exact representation of a DECIMAL, thus the, um, "decimal" part of its name. There is no exact decimal representation of 1/3 either. – Alan Jul 11 '09 at 15:07
4

It's the same reason you cannot represent 1/3 exactly in base 10; you need to say 0.33333(3). In binary it is the same type of problem, but it just occurs for a different set of numbers.

James
  • 351
  • 1
  • 2
  • 6
4

(Note: I'll append 'b' to indicate binary numbers here. All other numbers are given in decimal)

One way to think about things is in terms of something like scientific notation. We're used to seeing numbers expressed in scientific notation like 6.022141 * 10^23. Floating point numbers are stored internally using a similar format - mantissa and exponent, but using powers of two instead of ten.

Your 61.0 could be rewritten as 1.90625 * 2^5, or 1.11101b * 2^101b in mantissa-and-exponent form. To multiply that by ten (and move the decimal point), we can do:

(1.90625 * 2^5) * (1.25 * 2^3) = (2.3828125 * 2^8) = (1.19140625 * 2^9)

or, with the mantissa and exponent in binary:

(1.11101b * 2^101b) * (1.01b * 2^11b) = (10.0110001b * 2^1000b) = (1.00110001b * 2^1001b)

Note what we did there to multiply the numbers. We multiplied the mantissas and added the exponents. Then, since the mantissa ended up greater than two, we normalized the result by bumping the exponent. It's just like when we adjust the exponent after doing an operation on numbers in decimal scientific notation. In each case, the values that we worked with had a finite representation in binary, and so the values output by the basic multiplication and addition operations also produced values with a finite representation.

Now, consider how we'd divide 61 by 10. We'd start by dividing the mantissas, 1.90625 and 1.25. In decimal, this gives 1.525, a nice short number. But what is this if we convert it to binary? We'll do it the usual way -- subtracting out the largest power of two whenever possible, just like converting integer decimals to binary, but we'll use negative powers of two:

1.525         - 1*2^0   --> 1
0.525         - 1*2^-1  --> 1
0.025         - 0*2^-2  --> 0
0.025         - 0*2^-3  --> 0
0.025         - 0*2^-4  --> 0
0.025         - 0*2^-5  --> 0
0.025         - 1*2^-6  --> 1
0.009375      - 1*2^-7  --> 1
0.0015625     - 0*2^-8  --> 0
0.0015625     - 0*2^-9  --> 0
0.0015625     - 1*2^-10 --> 1
0.0005859375  - 1*2^-11 --> 1
0.00009765625...

Uh oh. Now we're in trouble. It turns out that 1.90625 / 1.25 = 1.525, is a repeating fraction when expressed in binary: 1.11101b / 1.01b = 1.10000110011...b Our machines only have so many bits to hold that mantissa and so they'll just round the fraction and assume zeroes beyond a certain point. The error you see when you divide 61 by 10 is the difference between:

1.100001100110011001100110011001100110011...b * 2^10b
and, say:
1.100001100110011001100110b * 2^10b

It's this rounding of the mantissa that leads to the loss of precision that we associate with floating point values. Even when the mantissa can be expressed exactly (e.g., when just adding two numbers), we can still get numeric loss if the mantissa needs too many digits to fit after normalizing the exponent.

We actually do this sort of thing all the time when we round decimal numbers to a manageable size and just give the first few digits of it. Because we express the result in decimal it feels natural. But if we rounded a decimal and then converted it to a different base, it'd look just as ugly as the decimals we get due to floating point rounding.
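
If you want to replay that subtraction table without any rounding sneaking in, here's a sketch using Python's exact Fraction type (1.525 is 61/40):

from fractions import Fraction

x = Fraction(61, 40)   # 1.90625 / 1.25 = 1.525, held exactly
bits = []
for n in range(13):    # powers 2^0, 2^-1, ..., 2^-12
    p = Fraction(1, 2 ** n)
    if x >= p:         # subtract out each power of two where possible
        bits.append(1)
        x -= p
    else:
        bits.append(0)

print(bits)   # [1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0] - the 0011 cycle emerges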

Boojum
  • 6,326
  • 1
  • 28
  • 32
4

This is a good question.

Your whole question is based on "how do we represent a number?"

ALL the numbers can be represented with a decimal representation or with a binary (2's complement) representation. All of them!!

BUT some (most of them) require an infinite number of digits ("0" or "1" for each binary position, or "0" through "9" for each decimal position).

Like 1/3 in decimal representation (1/3 = 0.3333333... <- with an infinite number of "3")

Like 0.1 in binary (0.1 = 0.00011001100110011... <- with an infinite number of "0011")

Everything is in that concept. Since your computer can only handle a finite set of digits (decimal or binary), only some numbers can be exactly represented in your computer...

And as Jon said, 3 is a prime number which isn't a factor of 10, so 1/3 cannot be represented with a finite number of digits in base 10.

Even with arbitrary-precision arithmetic, the positional numbering system in base 2 is not able to fully describe 6.1, although it can represent 61.

For 6.1, we must use another representation (like a decimal representation, or IEEE 854, which allows base 2 or base 10 for the representation of floating-point values).

ThibThib
  • 7,270
  • 3
  • 27
  • 36
  • You could represent 1/3 as the fraction itself. You don't need an infinite number of bits to represent it. You just represent it as the fraction 1/3, instead of the result of taking 1 and dividing it by 3. Several systems work that way. You then need a way to use the standard / * + - and similar operators to work on the representation of fractions, but that's pretty easy - you can do those operations with a pen and paper; teaching a computer to do it is no big deal. – nos Jul 10 '09 at 20:55
  • I was talking about "binary (2's complement) representation". Because, of course, using another representation may help you to represent *some* numbers with a finite number of digits (and you will need an infinite number of digits for some others). – ThibThib Aug 11 '09 at 17:18
3

If you make a big enough number with floating point (as it can do exponents), then you'll end up with inexactness in front of the decimal point, too. So I don't think your question is entirely valid, because the premise is wrong: it's not the case that shifting the point by powers of 10 will always preserve exactness. At some point the floating point number will have to use exponents to represent the largeness of the number, and will lose some precision that way as well.
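
For instance (a Python sketch; a double's significand holds 53 bits, so consecutive integers stop being representable past 2^53):

big = 2.0 ** 53
print(big == big + 1)   # True: 2^53 + 1 has no exact double representation
print(big, big + 1)     # both print 9007199254740992.0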

Dan Lew
  • 81,251
  • 29
  • 178
  • 174
3

I'm surprised no one has stated this yet: use continued fractions. Any rational number can be represented finitely in binary this way.

Some examples:

1/3 (0.3333...)

0; 3

5/9 (0.5555...)

0; 1, 1, 4

10/43 (0.232558139534883720930...)

0; 4, 3, 3

9093/18478 (0.49209871198181621387596060179673...)

0; 2, 31, 7, 8, 5

From here, there are a variety of known ways to store a sequence of integers in memory.

In addition to storing your number with perfect accuracy, continued fractions also have some other benefits, such as best rational approximation. If you decide to terminate the sequence of numbers in a continued fraction early, the terms so far (when recombined into a fraction) will give you the best possible fraction of that size. This is how approximations to pi are found:

Pi's continued fraction:

3; 7, 15, 1, 292 ...

Terminating the sequence at 1, this gives the fraction:

355/113

which is an excellent rational approximation.
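
A sketch of both directions in Python (expanding the terms is just the Euclidean algorithm; recombining a truncated term list gives the convergents):

from fractions import Fraction

def continued_fraction(p, q):
    """Continued-fraction terms of p/q, via the Euclidean algorithm."""
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

def recombine(terms):
    """Fold a term list back into a single exact fraction."""
    x = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x

print(continued_fraction(10, 43))  # [0, 4, 3, 3]
print(recombine([3, 7, 15, 1]))    # 355/113, the pi approximation above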

Nick
  • 4,830
  • 8
  • 36
  • 67
  • But how would you represent that in binary? For example 15 requires 4 bits to be represented but 292 requires 9. How does the hardware (or even the software) know where the bit boundaries are between each? It's the efficiency versus accuracy tradeoff. – ardent Jul 05 '13 at 19:26
2

In the equation

2^x = y ;  
x = log(y) / log(2)

Hence, I was just wondering if we could have a logarithmic base system for binary, like:

 2^1, 2^0, 2^(log(1/2) / log(2)), 2^(log(1/4) / log(2)), 2^(log(1/8) / log(2)),2^(log(1/16) / log(2)) ........

That might be able to solve the problem, so if you wanted to write something like 32.41 in binary, that would be

2^5 + 2^(log(0.4) / log(2)) + 2^(log(0.01) / log(2))

Or

2^5 + 2^(log(0.41) / log(2))

rachit_verma
  • 181
  • 2
  • 6
1

The problem is that you do not really know whether the number actually is exactly 61.0. Consider this:


float a = 60;
float b = 0.1;          // not exactly 0.1: stored as the nearest binary float
float c = a + b * 10;   // so c is close to, but not necessarily exactly, 61

What is the value of c? It is not exactly 61, because b is not really .1 because .1 does not have an exact binary representation.

Dima
  • 37,098
  • 13
  • 69
  • 112
1

There's a threshold because the meaning of the digit has gone from integer to non-integer. To represent 61, you have 6*10^1 + 1*10^0; 10^1 and 10^0 are both integers. 6.1 is 6*10^0 + 1*10^-1, but 10^-1 is 1/10, which is definitely not an integer. That's how you end up in Inexactville.

Mark Ransom
  • 271,357
  • 39
  • 345
  • 578
1

A parallel can be made of fractions and whole numbers. Some fractions, e.g. 1/7, cannot be represented in decimal form without lots and lots of decimals. Because floating point is binary-based, the special cases change, but the same sort of accuracy problems present themselves.

mP.
  • 17,011
  • 10
  • 66
  • 101
0

The number 61.0 does indeed have an exact floating-point representation - but that's not true for all integers. If you wrote a loop that added one to both a double-precision floating point number and a 64-bit integer, eventually you'd reach a point where the 64-bit integer perfectly represents a number, but the floating point doesn't - because there aren't enough significant bits.

It's just much easier to reach the point of approximation on the right side of the decimal point. If you started writing out all the numbers in binary floating point, it'd make more sense.

Another way of thinking about it is that when you note that 61.0 is perfectly representable in base 10, and shifting the decimal point around doesn't change that, you're performing multiplication by powers of ten (10^1, 10^-1). In floating point, multiplying by powers of two does not affect the precision of the number. Try taking 61.0 and dividing it by three repeatedly for an illustration of how a perfectly precise number can lose its precise representation.
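
A quick Python sketch of that last experiment:

x = 61.0
for _ in range(3):
    x /= 3.0               # each division by 3 rounds onto the binary grid
print(f"{x:.20f}")         # the tail digits are rounding artefacts of 61/27
print(x * 27.0 == 61.0)    # quite possibly False: the round-offs rarely cancel exactly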

John Calsbeek
  • 34,309
  • 7
  • 88
  • 100
0

There are an infinite number of rational numbers, and a finite number of bits with which to represent them. See http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.

zpasternack
  • 17,494
  • 2
  • 59
  • 78
  • But even with an infinite number of bits, if you used a floating *binary* point, you still wouldn't be able to represent 0.1 exactly, just like you can't represent 1/3 exactly in decimal even with an infinite number of bits. – Jon Skeet Jul 06 '09 at 20:48
  • 3
    @Jon That's untrue: with an *infinite* number of decimals, I *can* for example express 'one third' *exactly*. The real-world problem is that *not physically possible* to have "an infinite number" of decimals or of bits. – ChrisW Jul 06 '09 at 21:16
0

you know integer numbers, right? Each bit represents 2^n:

2^4=16
2^3=8
2^2=4
2^1=2
2^0=1

Well, it's the same for floating point (with some distinctions), but the bits represent 2^-n:

2^-1=1/2=0.5
2^-2=1/(2*2)=0.25
2^-3=0.125
2^-4=0.0625

Floating point binary representation:

sign  Exponent    Fraction (I think an invisible 1 is appended to the fraction)
B11  B10 B9 B8   B7 B6 B5 B4 B3 B2 B1 B0
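
The B11..B0 layout above is a 12-bit toy; a real IEEE 754 double uses 1 sign bit, 11 exponent bits and 52 fraction bits. A Python sketch that pulls a double apart along those lines:

import struct

x = 6.1
(raw,) = struct.unpack(">Q", struct.pack(">d", x))  # the 64 raw bits of the double
sign     = raw >> 63
exponent = (raw >> 52) & 0x7FF       # stored with a bias of 1023
fraction = raw & ((1 << 52) - 1)     # the invisible leading 1 is not stored
print(sign, exponent - 1023)         # 0 2   (6.1 = 1.525 * 2^2)
print(f"{fraction:052b}")            # the repeating 0011 pattern of one tenth shows up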

yan bellavance
  • 4,344
  • 19
  • 56
  • 87
0

The high-scoring answer above nailed it.

First, you were mixing base 2 and base 10 in your question; then, when you put a number on the right side that is not divisible into the base, you get problems. Like 1/3 in decimal, because 3 doesn't go into a power of 10, or 1/5 in binary, which doesn't go into a power of 2.

Another comment, though: NEVER use equals with floating point numbers, period. Even if a value has an exact representation, there are some numbers in some floating point systems that can be accurately represented in more than one way (IEEE is bad about this; it is a horrible floating point spec to start with, so expect headaches). 1/3 is not EQUAL to the number on your calculator, 0.3333333, no matter how many 3's there are to the right of the decimal point. It is, or can be, close enough, but it is not equal. So you would expect something like 2*(1/3) to not equal 2/3, depending on the rounding. Never use equals with floating point.
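
In practice that means comparing with a tolerance instead of ==; for example, in Python:

import math

a = 0.1 * 3
b = 0.3
print(a == b)              # False: the two sides carry different rounding errors
print(a - b)               # a tiny but non-zero difference
print(math.isclose(a, b))  # True: compare against a tolerance instead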

old_timer
  • 62,459
  • 8
  • 79
  • 150
0

As we have been discussing, in floating point arithmetic, the decimal 0.1 cannot be perfectly represented in binary.

Floating point and integer representations provide grids or lattices for the numbers represented. As arithmetic is done, the results fall off the grid and have to be put back onto the grid by rounding. An example is 1/10 on a binary grid.

If we use binary coded decimal representation as one gentleman suggested, would we be able to keep numbers on the grid?

Joe
  • 1
  • 1
    Decimal numbers, sure. But that's just by definition. You can't represent 1/3 in decimal, any more than you can represent 0.1 in binary. Any quantization scheme fails for an infinitely large set of numbers. – Kylotan Feb 18 '11 at 20:45
0

For a simple answer: the computer doesn't have infinite memory to store the fraction (after representing the decimal number in scientific-notation form). According to the IEEE 754 standard for double-precision floating-point numbers, we only have a limit of 53 bits to store the fraction. For more info: http://mathcenter.oxford.emory.edu/site/cs170/ieee754/

logbasex
  • 584
  • 1
  • 5
  • 13