In C#, why is it that this works correctly:
double dr = 1.5;
but this does not:
double dr = (double)(3/2);
The latter evaluates to 1.
This occurs because the (3/2) part of the expression is evaluated before the cast to double. At that point both operands are integers, so integer division is performed: the fractional part is discarded and the result is the integer value 1.

By the time the cast to double happens, there is no fraction left in the value. Casting cannot restore information that has already been discarded.
If you cast the individual values to double first, or write them with decimal points, it works. Compare:
Console.WriteLine((double)(3/2));
Console.WriteLine(((double)3/(double)2));
Console.WriteLine(3.0/2.0);
outputs
1
1.5
1.5
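A further detail worth noting (this sketch assumes C#'s standard binary numeric promotion rules): you do not need to cast both operands. If either operand is a double, the int operand is implicitly converted and floating-point division is performed:

```csharp
// If either operand is a double, the other is promoted to double.
Console.WriteLine((double)3 / 2); // casting only the left operand is enough
Console.WriteLine(3 / 2.0);       // one double literal is enough
Console.WriteLine(3 / 2);         // both int: integer division, prints 1
```

The first two lines print 1.5; only the last, where both operands are int, performs integer division.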
Both 3 and 2 are integers, so if you want to make it work, try this:
double dr = (double)(3.0/2.0);
Because you are actually dividing two integers and then casting the result to double. Mathematically 3/2 is 1.5, but since 3 and 2 are integer values the result is truncated to 1 and only then cast to double.
To make it work you need to:
var result = (double) (3.0/2.0);
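To be precise about what integer division does (a minimal sketch): in C# the result is truncated toward zero, not floored, which matters for negative operands:

```csharp
// Integer division in C# truncates toward zero (it does not floor):
Console.WriteLine(3 / 2);            // 1
Console.WriteLine(-3 / 2);           // -1 (flooring would give -2)
Console.WriteLine((double)(-3 / 2)); // -1: the cast happens after truncation
```

For positive operands truncation and flooring coincide, which is why the 3/2 example gives 1 either way.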
Your result depends on the order in which the operations are executed.

In your example code, the cast to double is applied only after the division has already been done (with the numbers treated as integers). If you want a double result, you can write
double dr = 3d/2d;
The suffix d (or D) signifies the double type.
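C# has similar literal suffixes for the other real numeric types as well, so the same idea extends beyond double (a brief sketch using the standard suffixes):

```csharp
double  a = 3d / 2; // d or D: double literal, so this is 1.5
float   b = 3f / 2; // f or F: float literal
decimal c = 3m / 2; // m or M: decimal literal
```

In each case the suffixed literal makes the division a floating-point (or decimal) division instead of an integer one.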