
Note that I am not looking for something opinion-based or for a third-party library - I merely want confirmation that nothing is planned (or pointers to some discussion by the powers that be). I tried Google and failed to find anything, so it looks like I am heading towards writing my own implementation in C++/CLI using the Intel library.

Like many, I am working with financial numbers, and that means floats are terrifically problematic. At the same time, the .NET decimal is a beast - slow, but also large (128 bits large), which makes it inefficient to use when you amass hundreds of thousands of them and want them in a struct with some more information.
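
To make the size point concrete, here is a rough sketch (the struct and field names are purely illustrative) comparing the marshalled size of a quote-like struct built around decimal with one built around a single 64-bit field:

    using System;
    using System.Runtime.InteropServices;

    // Illustrative only: a price plus "some more information" in one struct.
    struct QuoteWithDecimal
    {
        public int InstrumentId;
        public decimal Price;      // System.Decimal is 16 bytes on its own
    }

    struct QuoteWithInt64
    {
        public int InstrumentId;
        public long Price;         // e.g. the raw bit pattern of an IEEE decimal64
    }

    class SizeDemo
    {
        static void Main()
        {
            // Exact numbers depend on packing/alignment, but the decimal-based
            // struct is roughly twice as large, which adds up quickly over
            // hundreds of thousands of instances.
            Console.WriteLine(Marshal.SizeOf(typeof(QuoteWithDecimal))); // typically 24
            Console.WriteLine(Marshal.SizeOf(typeof(QuoteWithInt64)));   // typically 16
        }
    }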

IEEE 754 defines three decimal types - 32, 64 and 128 bit - that are likely to get hardware support in mainstream processors (they already have it in less common ones, the POWER series for example). There is an optimized Intel library for decimal mathematics, and it is quite likely that at some point at least the simpler operations will end up in hardware.
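
For what it is worth, here is a minimal sketch of what P/Invoking the Intel library's 64-bit (BID64) functions from C# might look like. The entry-point names and the assumption that operands are passed by value without per-call rounding-mode or status arguments depend on how the library is built (its call-by-reference and global rounding/flags options), and the library name "libbid" is made up here, so treat all of it as an assumption to verify against the headers:

    using System;
    using System.Runtime.InteropServices;

    // Sketch only: assumes the Intel Decimal Floating-Point Math Library was
    // compiled into "libbid" with call-by-value arguments and global
    // rounding/status, so the BID64 entry points take just the operands.
    static class Bid64Native
    {
        [DllImport("libbid", EntryPoint = "__bid64_from_string", CallingConvention = CallingConvention.Cdecl)]
        public static extern ulong FromString(string s);

        [DllImport("libbid", EntryPoint = "__bid64_add", CallingConvention = CallingConvention.Cdecl)]
        public static extern ulong Add(ulong x, ulong y);

        [DllImport("libbid", EntryPoint = "__bid64_mul", CallingConvention = CallingConvention.Cdecl)]
        public static extern ulong Mul(ulong x, ulong y);
    }

    // An 8-byte value type carrying the raw BID64 bit pattern, so it packs
    // tightly into larger structs.
    public struct Decimal64
    {
        private readonly ulong bits;
        private Decimal64(ulong bits) { this.bits = bits; }

        public static Decimal64 Parse(string s)
        {
            return new Decimal64(Bid64Native.FromString(s));
        }

        public static Decimal64 operator +(Decimal64 a, Decimal64 b)
        {
            return new Decimal64(Bid64Native.Add(a.bits, b.bits));
        }

        public static Decimal64 operator *(Decimal64 a, Decimal64 b)
        {
            return new Decimal64(Bid64Native.Mul(a.bits, b.bits));
        }
    }

A C++/CLI wrapper would amount to the same thing, with the compiler doing the marshalling instead of DllImport.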

All I could find is an old discussion in the Annotated C# Standard about interop between the .NET decimal and the then-proposed IEEE standard: it was rejected, but in a way that makes it possible to identify whether a bit pattern is a .NET decimal or a bit-inverted IEEE decimal128.

Since then, many years have passed. IEEE 754:2008 is now finished, and I wonder whether anything official has been published about how this is to go on. As I said, the .NET decimal is slow, has zero chance of ever getting hardware acceleration - and it is unwieldy in size.

So, does anyone know of anything in a blog post or similar? Note - it has to be official or refer to something official; I am not here for opinions from people unrelated to the .NET language or BCL teams. This is about a canonical resource on whether additional data types are being considered for the future... possibly for a .NET 5.0 / 6.0 timeframe.

TomTom
  • Isn't the de facto standard for financial numbers simply a long int, measured in 1/100 of a cent? – Mr Lister Mar 03 '14 at 08:43
  • Not really, when things are quoted 4-6 digits after the dot (0.000001). And that de facto standard well predates the new IEEE standard, which is exactly why it is IMHO not really nice and due for a correction. A de facto standard based on outdated specifications is something to review. – TomTom Mar 03 '14 at 09:50
  • To those voting to close: I am NOT looking for opinions (reading helps here, really) but for stuff like blog posts FROM RELEVANT PEOPLE (i.e. people involved in the language specs and the .NET runtime) where they talk about the issue. I wrote that explicitly. – TomTom Mar 03 '14 at 17:01
  • For the record, I did not vote to close and I have no intention of answering, because I am not qualified to, and I did read. I was just throwing in my two hundredths of a cent. – Mr Lister Mar 03 '14 at 17:52
  • @MrLister I suspected so. I really hope for an answer, otherwise I will just sit down and start making my own data types based on the Intel IEEE library in C++/CLI ;) I have some scenarios where the 128-bit decimal really gets in my way by being far too large (to use multiple ones in a struct, for example). – TomTom Mar 03 '14 at 20:43
  • Okay, maybe a long int for 1/100 of a cent won't work, but if you need values to .000001, could you just use a long int measured in 1/1000000 (or whatever) of a cent? (I'm guessing not, since it's a pretty obvious solution, but I don't see why; a minimal sketch of this idea follows after these comments.) – Beska Mar 17 '14 at 20:15
  • No, because this is not flexible enough. I need something with a mantissa, because the prices sometimes go down a lot, sometimes not. You are basically telling me that IEEE is made by a bunch of idiots - there is a reason that stuff exists, and that is that not every solution is so simple. – TomTom Mar 17 '14 at 20:29
  • I'm not saying that the IEEE standard is made by a bunch of idiots, and of course there are reasons they did it the way they did... it can hold a much larger range than could be done by the basic system I'm describing. But it seems like the basic system of having a long just represent a fraction of a cent could hold a range that would be more than large enough, even for prices that "go down a lot". Assuming you don't need prices more accurate than 1/100,000,000,000 of a cent, you could still handle an enormous real price range... tens of millions of dollars. – Beska Mar 18 '14 at 14:47
  • No, sorry. It is not really that much space when you realize that you have to keep around 8 to 10 digits for awkward prices on some exchanges. I know one symbol in Milan and one in Moscow that are TERRIFIC in how small their stock price is. – TomTom Mar 18 '14 at 14:48
  • This question appears to be off-topic because it is not really a programming problem per se, but about the specs themselves. – rjzii Mar 26 '14 at 14:25
  • @rob Yeah. Except for real programmers who have to deal with this problem and wonder whether an update of the .NET Framework in the next version brings a solution. I understand you do not consider planning your projects something professional. Nice to know. – TomTom Mar 26 '14 at 14:30
  • @TomTom First, don't be a jerk. Second, I would have started this question off over at [Programmers.SE](http://programmers.stackexchange.com/) since it looks like a better fit for that site as opposed to here. – rjzii Mar 26 '14 at 14:57
  • What advantage would a base-ten floating-point type have over a fixed-point type? I know that some processors have *historically* included support for base-ten floating point, but I would think its time has come and gone. If pennies matter, an attempt to add two values which are large enough that the result can't be computed accurate to the penny should fail rather than yield an incorrect result, but I know of no floating-point types, decimal or otherwise, that behave that way. – supercat Apr 07 '14 at 20:22
  • @supercat The fact that it is decimal-based and as such usable for - for example - accounting and financial calculations. And processors actually do START to support IEEE 754 decimal types now: IBM POWER6 includes DFP in hardware, as does the IBM System z9. SilMinds offers SilAx, a configurable vector DFP coprocessor. So far the lack of a standard was bad - now, with a standard, hardware moves in. If you care about the numbers, then this is a lot better than any float because it is exact in decimals. – TomTom Apr 08 '14 at 08:03
  • @TomTom: In what way is decimal floating-point better than decimal fixed-point [store a straight 128-bit number of trillionths] or units-plus-decimal fraction [e.g. use the bottom 32 bits to hold trillionths and the top 96 to hold 1/256ths]? I would think that one would want financial software to uphold (a+b)+c = a+(b+c), and a*b+a*c = a*(b+c) or throw an exception when it cannot, and floating-point types--whether binary or decimal--don't. If I could add new hardware numeric types, I'd push for "reinstatement" of 80-bit types, as well as 40-bit and 20-bit, stored as packed triples. – supercat Apr 08 '14 at 12:47
  • Performance, standardization, accounting that is exact. SIZE - yes, I could hold 128 bits, but why should I waste that when 32 bits are enough? Variability of size. Yes, you can go around wasting memory like that; I am not. – TomTom Apr 08 '14 at 12:54
  • @TomTom: Floating-point types, whether base-10 or base-2, *aren't* exact. With a fixed point type, the value of 1/3+x-x will equal the value of 1/3 for any x that does not cause an overflow. With a floating-point type--no matter how precise--it generally won't. With regard to storage efficiency, 32 bits seems too anemic for any sort of financial purpose. A 64-bit decimal float might work, but I would expect that the vast majority of cases where a 64-bit decimal float would be adequate could be handled by a 64-bit fixed point type with 6 decimal places. – supercat Apr 08 '14 at 13:05
  • Floating base 10 is as exact as any size-limited base-10 type. This is 100% as exact as money goes, because money in our civilization is accounted for in decimal, so you can represent all valid values exactly. Now, 32 bits may be anemic for your financial purposes - but it is a perfect fit for the area I am using it in, which has a lot of such numbers to crunch. A lot of them, but all within the reach of 32 bits. What do we learn from that? You put a lot of assumptions in the air and give bad advice. – TomTom Apr 08 '14 at 13:08
  • @TomTom: When using fixed-point maths, rounding only occurs with division and multiplication--never with addition and subtraction. Further, the magnitude of the rounding error resulting from multiplication or division will be independent of the size of the operands. As for 32 bits being anemic, a 32-bit decimal float would have seven decimal digits of precision, while a 32-bit fixed-point number would have nine. I can see 32-bit fixed as being usable, but go beyond $170,000 with a 32-bit decimal float and the system will start *silently* dropping pennies. – supercat Apr 08 '14 at 13:37
  • Then it is nice that our area of analysis for this part is lower than that, is it not? Oops. Depending on what you do, 170,000 is a LOT of value. For example, it is perfect for registering the execution price of a financial stock transaction. Maybe not the total value, but the price. – TomTom Apr 08 '14 at 13:39
  • @TomTom: From a numerical processing perspective, the right thing to do is use integers. From a programming standpoint, the only problems with integers are that (1) if one uses the same numeric type for values representing tenths and values representing thousandths and tries to add them, programming languages will let you do so without scaling, yielding bogus results; (2) even though processors can generally compute functions like x*y/10000 efficiently with 32-bit x and y, 64-bit immediate result, and 32-bit final result, programming languages don't expose that ability. – supercat Apr 08 '14 at 13:48
  • @TomTom: In any case, I would posit that for financial purposes, it's extremely desirable (if not essential) to have a type where (a+b)+c = a+(b+c) for all combinations of values which do not throw an exception. I am unaware of any floating-point type which has such a property. Do you disagree with the importance of that property, or know of any floating-point types where it holds? – supercat Apr 08 '14 at 13:58
  • No, because I program software people actually use, and I am not in this universe to make idiotic theoretical claims. Want your own question? Open it. This is about a STANDARD data type set and C#, not about supercat's ideas of how the world should be done. I do not care for that at all. – TomTom Apr 08 '14 at 14:06
  • @TomTom: Accounting is based upon the principle that if one adds one or more numbers to a value and then subtracts them, in any order, one will get the original value. One could design a floating-point type which would guarantee that all calculations which did not explicitly specify rounding would either be exact or fail; division would need to be done via method rather than an operator so as to allow precision to be specified (and preferably return a residue along with a quotient). I'm unaware, however, of any floating-point types, decimal or otherwise, that work that way. – supercat Apr 08 '14 at 15:30
  • Well, now that the C# and VB.NET compilers are (Apache) open-sourced, it is possible to add IEEE decimal language support if you can find enough of the community with the interest and the time. – Pete Stensønes Apr 12 '14 at 09:46
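
To make the fixed-point idea from the comments concrete, here is a minimal sketch of the "long scaled by a fixed power of ten" scheme (the type name and the 10^-6 scale are just illustrative). Addition and subtraction are exact, but the scale is fixed up front, which is exactly the flexibility objection raised above:

    using System;

    // Illustrative fixed-point money type: a long counting millionths
    // (10^-6) of the currency unit, as suggested in the comments.
    public struct FixedMoney
    {
        private const long Scale = 1000000;
        private readonly long units;            // value = units / Scale

        private FixedMoney(long units) { this.units = units; }

        public static FixedMoney FromMajor(decimal amount)
        {
            // The cast throws OverflowException instead of silently wrapping.
            return new FixedMoney((long)(amount * Scale));
        }

        public static FixedMoney operator +(FixedMoney a, FixedMoney b)
        {
            return new FixedMoney(checked(a.units + b.units));  // exact, no rounding
        }

        public static FixedMoney operator -(FixedMoney a, FixedMoney b)
        {
            return new FixedMoney(checked(a.units - b.units));
        }

        public override string ToString()
        {
            return ((decimal)units / Scale).ToString();
        }
    }

    class FixedMoneyDemo
    {
        static void Main()
        {
            FixedMoney a = FixedMoney.FromMajor(0.000001m);   // the 6-digits-after-the-dot case
            FixedMoney b = FixedMoney.FromMajor(123.45m);
            Console.WriteLine(a + b);                          // 123.450001, exactly
            // Range is about +/- 9.2e12 currency units, but the six fractional
            // digits are fixed: a price needing eight of them cannot be stored
            // without changing the scale, which is the inflexibility argued above.
        }
    }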

2 Answers


I've programmed financial software and completely understand why simple integers are not enough.

It's exceedingly difficult to prove a negative, but here goes:

http://social.msdn.microsoft.com/forums/vstudio/en-US/48c698db-d602-4f83-92bb-1c0506d58a78/ieee754 - in short, if the language doesn't require it, .NET doesn't care.

Finding a language that requires it: it looks like Dietmar Kühl was going to attempt to add a decimal type to C++ but failed to submit it in time: C++ decimal data types

Despite that, IEEE is still doing plenty of work on decimals. Try these Google searches:

     Decimal site:ieee.org and limit it to the last month.
     Decimal Draft site:ieee.org and limit it to the last year.    

If it's in the hardware, the software will eventually have it. Assembly will let you do anything the hardware can do, but if you're stuck in a virtual machine that is ignorant of the new op codes, I'm not sure you can do anything but emulate.

So for now I don't see much more than you do. Sorry.

Oh, and you can tell supercat from me that (a+b)+c = a+(b+c) isn't any more guaranteed on integers than it is on floating point. Bits can also fall off the left end of a non-infinite-sized integer. 'a' might have been negative and saved you from the overflow if you had added it first.
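
A quick illustration of that point (the values are chosen purely for the demonstration): with checked 32-bit integers the grouping decides whether an intermediate sum overflows, with unchecked two's-complement integers both groupings wrap to the same value, and with binary doubles the grouping decides whether the small term survives rounding:

    using System;

    class AssociativityDemo
    {
        static void Main()
        {
            // Checked integers: a + (b + c) overflows, (a + b) + c does not,
            // because b + c exceeds int.MaxValue before the negative a is applied.
            int a = -1, b = int.MaxValue, c = 1;
            Console.WriteLine(checked((a + b) + c));            // 2147483647
            try { Console.WriteLine(checked(a + (b + c))); }
            catch (OverflowException) { Console.WriteLine("overflow"); }

            // Unchecked (the C# default): both groupings wrap to the same bits.
            Console.WriteLine(unchecked((a + b) + c));          // 2147483647
            Console.WriteLine(unchecked(a + (b + c)));          // 2147483647

            // Binary doubles: the grouping decides whether the 1.0 is lost to rounding.
            double x = 1e16, y = -1e16, z = 1.0;
            Console.WriteLine((x + y) + z);                     // 1
            Console.WriteLine(x + (y + z));                     // 0
        }
    }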

candied_orange
  • "(a+b)+c = a+(b+c) isn't anymore guaranteed on integers" -- they are identical in two's complement unchecked arithmetic. No one looks at the overflow bit these days. – Jim Balter Apr 19 '16 at 06:26

I don't know about Microsoft's plans (if any).

However, you might take a look at Mike Cowlishaw's page on decimal arithmetic.

You can get his specification for decimal arithmetic (the basis of IEEE 754 decimal floating point) there, as well as a C-language reference implementation (decNumber), test cases, etc.

Compile that, P/Invoke those functions from a C# decimal floating-point structure that you write, and you should be good to go.

Mostly. Interoperability with other code is likely to be problematic.
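
As a rough illustration of the P/Invoke route, the sketch below binds a couple of the decDouble (decimal64) functions from Cowlishaw's decNumber reference implementation. The function names come from the decNumber documentation, but the library name, the decContext field layout and the buffer size are assumptions recalled from its headers, so verify them against the sources you actually compile:

    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    // Field order as in decNumber's decContext.h; verify against the header you build.
    [StructLayout(LayoutKind.Sequential)]
    struct DecContext
    {
        public int Digits;
        public int EMax;
        public int EMin;
        public int Round;     // 'enum rounding' in the C header
        public uint Traps;
        public uint Status;
        public byte Clamp;
    }

    // decDouble is an 8-byte value holding the raw decimal64 encoding.
    [StructLayout(LayoutKind.Sequential)]
    struct DecDouble
    {
        public ulong Bits;
    }

    static class DecNumberNative
    {
        // "decnumber" is an assumed name for however you compile the C sources.
        [DllImport("decnumber", CallingConvention = CallingConvention.Cdecl)]
        public static extern IntPtr decContextDefault(ref DecContext ctx, int kind);

        [DllImport("decnumber", CallingConvention = CallingConvention.Cdecl)]
        public static extern IntPtr decDoubleFromString(ref DecDouble result, string s, ref DecContext ctx);

        [DllImport("decnumber", CallingConvention = CallingConvention.Cdecl)]
        public static extern IntPtr decDoubleAdd(ref DecDouble result, ref DecDouble lhs, ref DecDouble rhs, ref DecContext ctx);

        [DllImport("decnumber", CallingConvention = CallingConvention.Cdecl)]
        public static extern IntPtr decDoubleToString(ref DecDouble value, StringBuilder buffer);

        public const int DEC_INIT_DECIMAL64 = 64;   // per decContext.h
    }

    class DecNumberDemo
    {
        static void Main()
        {
            var ctx = new DecContext();
            DecNumberNative.decContextDefault(ref ctx, DecNumberNative.DEC_INIT_DECIMAL64);

            DecDouble a = new DecDouble(), b = new DecDouble(), sum = new DecDouble();
            DecNumberNative.decDoubleFromString(ref a, "0.10", ref ctx);
            DecNumberNative.decDoubleFromString(ref b, "0.20", ref ctx);
            DecNumberNative.decDoubleAdd(ref sum, ref a, ref b, ref ctx);

            var text = new StringBuilder(64);   // comfortably larger than DECDOUBLE_String
            DecNumberNative.decDoubleToString(ref sum, text);
            Console.WriteLine(text);            // 0.30, exactly
        }
    }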

Nicholas Carey
  • Good reference - that said, there is a reference library from Intel already around that I could use as the basis for a C++/CLI library. All functionality provided and quite optimized ;) – TomTom Apr 23 '14 at 18:39