
When I try to multiply hex values like this:

int i = 0x4;
int z = i * 3;

it says z = 12. C# just treats it like a normal number (the answer should be C).

Irakli
  • it IS just a normal number. hex is just a way of presenting it. if you want the hex presentation, try `ToString("X")`. and: the result should be 12, not 16, did you mistype? – Franz Gleichmann Apr 01 '20 at 12:54
  • No, that code gives z == 12 – Jasper Kent Apr 01 '20 at 12:55
  • 1
    An `int` is always an integral number. If you want to see letters from the hexadecimal dictionary, you can use `Convert.ToString(z, 16)` wich outputs `"c"`. – Silvermind Apr 01 '20 at 12:57
  • thanks, i got it. i just sometimes think like a 6yr old who sat down to code yesterday – Irakli Apr 01 '20 at 13:05
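Putting the two code-related comments together, a minimal sketch of the original snippet plus the suggested formatting calls (expected output shown in the trailing comments):

int i = 0x4;
int z = i * 3;

Console.WriteLine(z);                       // 12  (default decimal representation)
Console.WriteLine(z.ToString("X"));         // C   (hexadecimal, upper case)
Console.WriteLine(Convert.ToString(z, 16)); // c   (hexadecimal, lower case)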

2 Answers


An int simply contains an integer value without any base associated with it.

By default an int is initialized with a decimal literal:

int x = 27;

By default an int is displayed as a decimal:

Console.Write (x); // Shows 27

An int can also be initialized with a hexadecimal or binary literal:

int y = 0xAA;
int z = 0b0101;

And displayed as hexadecimal:

Console.Write(y.ToString("X")); // Shows AA

But the way it is initialized and the way it is displayed are entirely independent.
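
For example, a small sketch combining the snippets above (the variable names are just for illustration):

int a = 0xAA;                        // initialized from a hex literal
Console.WriteLine(a);                // 170 - displayed as decimal
int b = 170;                         // initialized from a decimal literal
Console.WriteLine(b.ToString("X"));  // AA  - displayed as hexadecimal
Console.WriteLine(a == b);           // True - it is the same value either way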

Jasper Kent

You have a fundamental misunderstanding here, and it's important that you get it straightened out right away, or else the wrong understanding will keep bothering you.

Numbers themselves do not have an inherent representation. A variable of type int does not hold a value that is decimal or hex, nor binary for that matter. It is just a number. It doesn't matter whether it's a number in your computer, a number on paper, or a number in nature. A number is just a number.

Your computer needs to store the number somehow, and for that purpose we use circuits that store the number in a way we like to think of as binary - or zeroes and ones, as we sometimes say. Those ones and zeroes are actually voltages, currents and transistors. But still, the number stored is itself just a number, without an inherent representation. The storage format is inherently binary, but not the number itself.

When you want to see the number, most humans are used to seeing it in decimal notation. Or textually, like "one billion". But you might as well see it in hexadecimal representation, or binary representation. Whichever representation you decide to use, the value is the same. If you have ten apples, that's 10 apples, or $A apples, or 0b1010 apples. That was four ways of representing the exact same value.

When you express literal values using different notations in your source code, the notation itself is thrown away by the time your number has become executable code. Whether you chose to express the number in hex, decimal or binary is completely irrelevant to the execution of your program. When you multiply 3 by 4, you get 12, irrespective of what representation is in your source, and irrespective of what representation you decide your program should use to output the number. You are free to choose any representation for the output, whether it's decimal, hex, binary, text, Roman numerals, or spoken Klingon.
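
A small sketch of that point (Convert.ToString with base 2 is used here only to show a binary rendering of the result):

int fromDecimalLiterals = 3 * 4;
int fromMixedLiterals   = 0b0011 * 0x4;       // same program once compiled
Console.WriteLine(fromDecimalLiterals == fromMixedLiterals); // True

Console.WriteLine(fromMixedLiterals);                        // 12
Console.WriteLine(fromMixedLiterals.ToString("X"));          // C
Console.WriteLine(Convert.ToString(fromMixedLiterals, 2));   // 1100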

If at any point you come across a task where you need to convert a number from decimal to hexadecimal, then let's make it clear what that situation actually is. It can only mean that your program is handling a string where a number is represented in decimal notation, and then the number must be parsed into a value. It could be an int variable holding the value temporarily. Following that, the value is then expressed in its hexadecimal representation in a new string. It's clear that numbers kept in strings do have an inherent representation, because otherwise we wouldn't know what they meant. How would you know what the string "01" meant, if you didn't know whether it was decimal, hex or binary, or whatever?
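
As a sketch of that round trip (the string variables are just for illustration):

string decimalText = "255";            // a number represented in decimal notation
int value = int.Parse(decimalText);    // parse it into a plain value
string hexText = value.ToString("X");  // express the same value in hex notation
Console.WriteLine(hexText);            // FF

int back = Convert.ToInt32("FF", 16);  // and back: parse a hex string into a value
Console.WriteLine(back);               // 255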

Bent Tranberg