I'm interacting with a legacy system that takes a lot of its input at the bit level. It requires me to pass in octets (bytes, really) with specific bits set.
To keep this readable, I declare some flags like this:
private static final byte FLAG_A = 0b00010000;
private static final byte FLAG_B = 0b00100000;
private static final byte FLAG_C = 0b00011000;
That works perfectly.
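For context, I combine these before handing them to the system; a minimal sketch of the usage (the variable names here are just illustrative):

// Bitwise OR promotes both operands to int, so the result
// has to be cast back down to byte for the legacy call.
byte command = (byte) (FLAG_A | FLAG_B);
// Testing whether a flag is set:
boolean hasA = (command & FLAG_A) != 0;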
The strange thing is that when I set the highest bit (as shown below), the compiler complains that it found an int. I could cast it down, but that seems strange to me: it's still 8 bits, so I would expect it to fit in a byte (even if the two's-complement interpretation causes it to read as negative, which is of no consequence to me).
private static final byte FLAG_D = 0b10000000;
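For reference, the explicit cast does make it compile; this is the workaround I'd rather avoid:

// 0b10000000 is the int literal 128, which is outside byte's
// range of -128..127; the cast narrows it to the byte -128.
private static final byte FLAG_D = (byte) 0b10000000;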
Any idea what's going on?