Say I am rounding the number 1.20515 to 4 decimal places in an IEEE-compliant language (C, Java, etc.) using the default round-half-to-even rule. The result is "1.2051", whose last digit is odd rather than even.
I think this is because the nearest binary double to 1.20515 is slightly below it, so in binary there is no tie at all and the value simply rounds down to 1.2051.
However, if the input 1.20515 is meant to be an exact decimal, isn't this kind of rounding actually wrong?
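For what it's worth, the "no tie in binary" suspicion is easy to confirm by inspecting the exact value the double actually holds. A small Java sketch (class name is mine; `new BigDecimal(double)` gives the exact decimal expansion of the stored binary value):

```java
import java.math.BigDecimal;
import java.util.Locale;

public class TieCheck {
    public static void main(String[] args) {
        // Exact decimal expansion of the binary double nearest to 1.20515
        BigDecimal stored = new BigDecimal(1.20515);
        // The decimal we thought we wrote
        BigDecimal typed = new BigDecimal("1.20515");

        System.out.println(stored);                  // 1.20514999999999994...
        System.out.println(stored.compareTo(typed)); // -1: the stored value sits below the tie
        // With no tie to break, half-way rules never apply and it rounds down
        System.out.println(String.format(Locale.ROOT, "%.4f", 1.20515)); // 1.2051
    }
}
```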
Edit:
What I really want to know is: if I do not use exact decimal arithmetic (e.g. Java's BigDecimal), do these binary rounding rules introduce bias in the workflow: exact decimal as a string (6 d.p. max) -> parse to IEEE double -> round using IEEE rules to 4 d.p.?
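To make the concern concrete, here is a sketch (in Java, since BigDecimal is already in the picture) contrasting the two paths on this very input. They disagree exactly on decimal ties like 1.20515, because the double path loses the tie before any rounding rule runs:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Locale;

public class TwoPaths {
    public static void main(String[] args) {
        String s = "1.20515"; // exact decimal input, 6 d.p. max

        // Path A: exact decimal arithmetic; the tie exists, and since the last
        // kept digit (1) is odd, half-even rounds it up to the even digit 2
        String exact = new BigDecimal(s)
                .setScale(4, RoundingMode.HALF_EVEN)
                .toPlainString();                                  // "1.2052"

        // Path B: through a double; the stored value is just below the tie,
        // so it rounds down regardless of the tie-breaking rule
        String viaDouble = String.format(Locale.ROOT, "%.4f",
                Double.parseDouble(s));                            // "1.2051"

        System.out.println(exact + " vs " + viaDouble);
    }
}
```

Whether this amounts to bias depends on the inputs: for any given 6-d.p. value the nearest double may land above or below the decimal tie, so the errors go in both directions rather than systematically one way, but each individual result can still differ from true decimal half-even rounding.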
Edit 2:
The "exact decimal" input is generated in Java using BigDecimal, or arrives as a String directly from a database. The formatting, unfortunately, has to be done in JavaScript, which lacks built-in support for proper decimal rounding (and I am looking into implementing some myself).