Here's the code:
int a;
a = 2147483648 + 2147483648;
printf("%d", a);
I know that the maximum value of an int variable is 2147483647. So, as I understand it, 2147483648 wraps around to -2147483648. But why does 2147483648 + 2147483648 = 0?
In binary, 2147483648 is a 1 followed by 31 zeroes. If you add it to itself, the addition simply overflows: all 32 bits become zero and the carry bit is set. Since the carry is discarded (ignored when the value is stored into a), you never see it; all you see are zeroes.
   10000000 00000000 00000000 00000000
 + 10000000 00000000 00000000 00000000
 -------------------------------------
(1)00000000 00000000 00000000 00000000
Because your constants do not fit in an int, they are treated as a long (or, if necessary, a long long), so:

2147483648 = 0x80000000
+ another    0x80000000
=          0x100000000

which, when you assign it to a, is truncated to 0 (assuming 4-byte ints).
gcc issues a warning for the assignment.
Second one of these today.
The behaviour of this code depends on the implementation: in particular, on which version of the C standard the compiler follows and on the sizes of int, long, and long long.
The code could output any number or raise a signal, but whichever happens must be covered by the compiler's documentation.
There are two places in which the code relies on implementation-defined behaviour: the result of the addition, and then the operation of storing that result into an int variable.
Also, I would like to point out that C arithmetic is based on values, not representations. The answer does NOT depend on two's complement, binary carries, or anything like that. 2147483648 is always a large positive integer; it is never a negative number, and adding two positive numbers cannot produce a negative number either. This is commonly misunderstood.
Here are some example cases:
- If long is 32 bits (possible under C90 rules), 2147483648 has type either unsigned int or unsigned long. By the definition of unsigned arithmetic, the result is the mathematical value of 2147483648 + 2147483648 modulo 2^32, which works out to 0.
- If int is 32-bit and long is 64-bit, 2147483648 has type long. The result of the addition then has type long and value 4294967296. Assigning this value to an int is an out-of-range assignment, causing implementation-defined behaviour. One common way that implementations define this is by truncating the higher bits; raising a signal is another option.
- If int and long are both 32-bit and there is a 64-bit long long (C99 and later), 2147483648 has type long long, and the case is quite similar to the previous bullet except that the type is long long.
- On a hypothetical system with, say, a 33-bit long, the constant would fit in long but the addition would then cause undefined behaviour due to overflow. We generally don't worry about that and assume nobody would ever design such a system. (There are systems with 36-bit integers, though!)