71

I always tell people that in C# a variable of type double is not suitable for money; all kinds of weird things can happen. But I can't seem to create an example that demonstrates some of these issues. Can anyone provide such an example?

(edit: this post was originally tagged C#; some replies refer to specific details of decimal, which therefore means System.Decimal.)

(edit 2: I was specifically asking for some C# code, so I don't think this is only language-agnostic.)

doekman
  • [Why not use Double or Float to represent currency?](http://stackoverflow.com/q/3730019/995714) – phuclv Jul 28 '16 at 03:38

8 Answers

116

Very, very unsuitable. Use decimal.

double x = 3.65, y = 0.05, z = 3.7;
Console.WriteLine((x + y) == z); // false

(example from Jon's page here - recommended reading ;-p)
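
For contrast, a quick sketch of the same comparison with decimal; because 3.65, 0.05 and 3.7 all have exact base-10 representations, the sum comes out exact:

decimal x = 3.65m, y = 0.05m, z = 3.7m;
Console.WriteLine((x + y) == z); // true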

Hamid Pourjam
Marc Gravell
  • Darn it, if I'd known I had an example on my own page, I wouldn't have come up with a different one ;) – Jon Skeet Nov 25 '08 at 08:55
34

You will get odd errors effectively caused by rounding. In addition, comparisons with exact values are extremely tricky - you usually need to apply some sort of epsilon to check for the actual value being "near" a particular one.

Here's a concrete example:

using System;

class Test
{
    static void Main()
    {
        double x = 0.1;
        double y = x + x + x;
        Console.WriteLine(y == 0.3); // Prints False
    }
}
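
If you do have to compare doubles, the usual workaround is the kind of epsilon comparison mentioned above rather than ==. A minimal sketch (the tolerance value here is arbitrary, chosen purely for illustration):

using System;

class EpsilonTest
{
    static void Main()
    {
        const double Epsilon = 1e-9; // arbitrary tolerance, for illustration only
        double x = 0.1;
        double y = x + x + x;
        Console.WriteLine(Math.Abs(y - 0.3) < Epsilon); // Prints True
    }
}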
Sam Holder
Jon Skeet
  • If you are consuming a service that returns double currency values that you cannot control, are there gotchas to think about in converting them to decimal? Precision loss etc... – vikingben Oct 13 '16 at 15:06
  • @vikingben: Absolutely - fundamentally, that's a broken way of doing things, and you need to work out how you're best to interpret the data. – Jon Skeet Oct 13 '16 at 15:13
7

Yes, it's unsuitable.

If I remember correctly, double has about 17 significant digits, so normally rounding errors will occur far behind the decimal point. Most financial software uses 4 digits after the decimal point; that leaves 13 digits to work with, so the largest number you can handle in a single operation is still much higher than the US national debt. But rounding errors will add up over time. If your software runs for a long time, you'll eventually start losing cents. Certain operations will make this worse. For example, adding large amounts to small amounts will cause a significant loss of precision.

You need fixed-point datatypes for money operations; most people don't mind if you lose a cent here and there, but accountants aren't like most people.
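
To make the "rounding errors add up over time" point concrete, here's a small sketch; the double total drifts slightly away from 10000, while the decimal total stays exact:

using System;

class AccumulationTest
{
    static void Main()
    {
        double doubleTotal = 0;
        decimal decimalTotal = 0;

        // add one cent a million times
        for (int i = 0; i < 1000000; i++)
        {
            doubleTotal += 0.01;
            decimalTotal += 0.01m;
        }

        Console.WriteLine(doubleTotal);  // slightly off from 10000 - accumulated binary rounding
        Console.WriteLine(decimalTotal); // 10000.00 exactly
    }
}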

edit
According to this site http://msdn.microsoft.com/en-us/library/678hzkk9.aspx, doubles actually have 15 to 16 significant digits, not 17.

@Jon Skeet: decimal is more suitable than double because of its higher precision, 28 or 29 significant digits. That means less chance of accumulated rounding errors becoming significant. Fixed-point datatypes (i.e. integers that represent cents or 100ths of a cent, as I've seen used), like Boojum mentions, are actually better suited.

Mendelt
  • Note that System.Decimal, the suggested type to use in .NET, is still a floating point type - but it's a floating decimal point rather than a floating binary point. That's more important than having fixed precision in most cases, I suspect. – Jon Skeet Nov 25 '08 at 09:19
  • That's precisely the issue. Currency is nowadays typically decimal. Back before the US stock markets decimalized, however, binary fractions were in use (I started seeing 256ths and even 1024ths at one point) and so doubles would have been more appropriate than decimals for stock prices! Pre-decimalization pounds sterling would have been a real pain though at 960 farthings to the pound; that's neither decimal nor binary, but it certainly provides a generous variety of prime factors for easy fractions. – Jeffrey Hantin Nov 10 '10 at 22:47
  • Even more important than just being a decimal floating point: with `decimal` the expression `x + 1 != x` is always true. Also, it retains precision, so you can tell the difference between `1` and `1.0`. – Gabe Mar 16 '11 at 14:00
  • @Gabe: Those properties are only meaningful if one scales one's values so that a value of 1 represents the smallest currency unit. A `Decimal` value may lose precision to the right of the decimal point without indicating any problem. – supercat Mar 01 '13 at 22:51
  • `double` has 15.9 significant decimal digits considering integer values only. The situation after the decimal point is value-dependent. – user207421 Jul 18 '18 at 01:29
5

Since decimal uses a base-10 scaling factor, numbers like 0.1 can be represented exactly. In essence, the decimal type represents this as 1 / 10^1, whereas a double can only approximate it with a binary fraction (something like 104857 / 2^20 at low precision; the closest double is actually 3602879701896397 / 2^55).

A decimal can exactly represent any base 10 value with up to 28/29 significant digits (like 0.1). A double can't.
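
A quick way to see the difference (a small sketch; the round-trip "G17" format shows what the double actually stores):

double d = 0.1;
decimal m = 0.1m;
Console.WriteLine(d.ToString("G17")); // 0.10000000000000001 - the nearest double to 0.1
Console.WriteLine(m);                 // 0.1 - stored exactly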

Richard Poole
  • Decimal doesn't have 96 significant digits. It has 96 significant *bits*. Decimal has around 28 significant digits. – Jon Skeet Nov 25 '08 at 09:21
  • In which language are you speaking of the decimal type? Or do all languages that support this type support it in exactly the same way? Might want to specify. – Adam Davis Nov 25 '08 at 09:25
  • @Adam - this post originally had the C# tag, so we are talking about System.Decimal specifically. – Marc Gravell Nov 25 '08 at 09:29
  • Oops, well spotted Jon! Corrected. Adam, I'm talking C#, as per the question. Do any other languages have a type called decimal? – Richard Poole Nov 25 '08 at 09:30
  • @Richard: Well, all languages that are based on .NET do, since System.Decimal is not a unique C# type; it is a .NET type. – awe Jan 12 '10 at 09:51
  • @awe - I meant non-.NET languages. I'm ignorantly unaware of any that have a native base 10 floating point type, but I have no doubt they exist. – Richard Poole Jan 19 '10 at 14:24
4

My understanding is that most financial systems express currency using integers -- i.e., counting everything in cents.

IEEE double precision actually can represent all integers exactly in the range -2^53 through +2^53. (Hacker's Delight, pg. 262) If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division or more complex operations, however.
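
A small sketch of where that integer range ends, if you go this route:

double limit = 9007199254740992; // 2^53
Console.WriteLine(limit + 1 == limit); // True: 2^53 + 1 is not representable, so it rounds back to 2^53
Console.WriteLine(limit + 2 == limit); // False: 2^53 + 2 is representable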

Boojum
  • If you're only going to use integers though, why not use an integer type to start with? – Jon Skeet Nov 25 '08 at 09:52
  • Heh - int64_t can represent all integers exactly in the range -2^63 to +2^63-1. If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division, however. – Steve Jessop Nov 25 '08 at 11:33
  • Some antiquated systems which are (alas?) still in use support `double`, but do not support any 64-bit integer type. I would suggest that performing calculations as `double`, scaled so that any semantically-required rounding will always be to whole units, is apt to be the most efficient approach. – supercat Jun 09 '12 at 23:45
3

Using double when you don't know what you are doing is unsuitable.

"double" can represent an amount of a trillion dollars with an error of 1/90th of a cent. So you will get highly precise results. Want to calculate how much it costs to put a man on Mars and get him back alive? double will do just fine.

But with money there are often very specific rules saying that a certain calculation must give a certain result and no other. If you calculate an amount that is very very very close to $98.135 then there will often be a rule that determines whether the result should be $98.14 or $98.13 and you must follow that rule and get the result that is required.
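
For example, .NET's Math.Round lets you choose between banker's rounding (the default) and the away-from-zero rounding that many financial rules require; a quick sketch with decimal (using 98.125 rather than the 98.135 above, because it's a midpoint where the two rules actually disagree):

decimal amount = 98.125m;
Console.WriteLine(Math.Round(amount, 2));                                // 98.12 - banker's rounding (default)
Console.WriteLine(Math.Round(amount, 2, MidpointRounding.AwayFromZero)); // 98.13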

Depending on where you live, using 64-bit integers to represent cents, pennies, kopeks, or whatever the smallest unit in your country is will usually work just fine. For example, 64-bit signed integers representing cents can represent values up to about 92,233 trillion dollars. 32-bit integers are usually unsuitable.
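
A minimal sketch of that approach (all amounts held as whole cents in a long; the last line is just for display and assumes a non-negative total):

long priceInCents = 1999;  // $19.99 stored as 1999 cents
long quantity = 3;
long totalCents = priceInCents * quantity;
Console.WriteLine($"${totalCents / 100}.{totalCents % 100:D2}"); // $59.97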

gnasher729
0

No, a double will always have rounding errors; use "decimal" if you're on .NET...

Thomas Hansen
  • Careful. *Any* floating-point representation will have rounding errors, decimal included. It's just that decimal will round in ways that are intuitive to humans (and generally appropriate for money), and binary floating point won't. But for non-financial number-crunching, double is often much, much better than decimal, even in C#. – Daniel Pryden Aug 31 '09 at 08:06
-4

Actually floating-point double is perfectly well suited to representing amounts of money as long as you pick a suitable unit.

See http://www.idinews.com/moneyRep.html

So is fixed-point long. Either consumes 8 bytes, surely preferable to the 16 consumed by a decimal item.

Whether or not something works (i.e. yields the expected and correct result) is not a matter of either voting or individual preference. A technique either works or it doesn't.

  • Linking an article you wrote that disagrees with decades of common practice and expert opinion that floating-point is unsuitable for financial transaction representations is going to need a little more backup than a single page. – MuertoExcobito Jun 08 '15 at 15:42