
I'm a high school student learning to code in my spare time, and I got stuck while learning Visual Basic. I'm having trouble figuring out the difference between Decimals, Doubles, and Integers. I have searched the internet but found very little or confusing help. What I know so far is that Integers store whole numbers, Decimals hold decimal numbers, and Doubles can hold both. But why would I choose Doubles over Decimals? Could someone please explain the difference between the three?

Deemeehaa

1 Answer


Doubles are double-precision (64-bit) floating-point numbers, represented with a 52-bit mantissa, an 11-bit exponent, and a 1-bit sign. Floating-point numbers are not exact representations of decimal numbers; they are binary approximations. That makes them well suited to scientific work, where speed and range matter more than an exact decimal representation, but unsuitable for financial calculations, where exactness is paramount.
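A minimal sketch (assuming VB.NET on the .NET runtime) of the kind of rounding surprise Double can produce:

    Module DoubleDemo
        Sub Main()
            ' 0.1 and 0.2 have no exact binary representation, so their sum
            ' is only approximately 0.3 when stored in a Double.
            Dim a As Double = 0.1
            Dim b As Double = 0.2
            Console.WriteLine((a + b).ToString("R")) ' prints something like 0.30000000000000004
            Console.WriteLine(a + b = 0.3)           ' False
        End Sub
    End Module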

Decimals are base-10 numbers, the same kind of decimal numbers we use in school, and they behave the same way. Their range runs from negative 79,228,162,514,264,337,593,543,950,335 to positive 79,228,162,514,264,337,593,543,950,335. They are as close to an exact representation of decimal numbers as you can get, and they are designed for financial calculations, where accuracy and minimal rounding error matter most.
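For comparison, here is a small sketch of the same arithmetic done with Decimal (the D suffix makes a literal a Decimal in VB.NET):

    Module DecimalDemo
        Sub Main()
            ' Decimal stores the value in base 10, so 0.1 and 0.2 are held exactly.
            Dim x As Decimal = 0.1D
            Dim y As Decimal = 0.2D
            Console.WriteLine(x + y)        ' 0.3
            Console.WriteLine(x + y = 0.3D) ' True
        End Sub
    End Module

The trade-off is that Decimal arithmetic is slower than Double and has a smaller range, which is why Double remains the usual choice for scientific and graphics work while Decimal is preferred for money.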

Integers are whole numbers: positive, negative, and zero. Math using integers is exact, with no round-off error. The high-order bit holds the number's sign, and the range depends on the number of bits used to represent the integer; for example, a 16-bit signed integer (Short in VB.NET) can represent numbers from -32768 to 32767, while VB.NET's Integer type is 32 bits.
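A short sketch showing the whole-number types VB.NET offers and their ranges (Short is 16-bit, Integer is 32-bit, Long is 64-bit):

    Module IntegerDemo
        Sub Main()
            Console.WriteLine("{0} to {1}", Short.MinValue, Short.MaxValue)     ' -32768 to 32767
            Console.WriteLine("{0} to {1}", Integer.MinValue, Integer.MaxValue) ' -2147483648 to 2147483647
            Console.WriteLine("{0} to {1}", Long.MinValue, Long.MaxValue)       ' -9223372036854775808 to 9223372036854775807

            ' The \ operator performs integer division; the stored result is exact.
            Dim quotient As Integer = 7 \ 2
            Console.WriteLine(quotient) ' 3
        End Sub
    End Module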

Robert Harvey