It's because the errors in the internal representations of 4.64 and 2.0 combine constructively (meaning they make a larger error).
Technically speaking, 2.64 isn't stored exactly either; there is just a particular representation that the literal 2.64 rounds to. The same goes for 4.64 and 2.0: they may not be stored exactly either, and their errors are combining to produce an even larger error, one large enough that the subtraction does not give the representation of 2.64.
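I don't know which language you're working in, but here's a minimal sketch of the effect in Python (assuming IEEE 754 double precision, which is what most languages use for floating point):

    # The subtraction lands on a double just below the double that 2.64 rounds to.
    print(4.64 - 2.0)          # 2.6399999999999997
    print(4.64 - 2.0 == 2.64)  # False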
The answer is off by about 3*10^-16. To give an example of how that can happen, let's pretend the representation of 4.64 is 2*10^-16 too small and the representation of 2.0 is 1*10^-16 too large. Then you would get
(4.64 - 2*10^-16) - (2.0 + 1*10^-16) = 2.64 - 3*10^-16
So by the time the calculation is done, the two errors have combined into an even bigger one. If the representation of 2.64 is off by only 1*10^-16, then a result off by 3*10^-16 is a different value, and the computer will not consider the two equal.
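If you want to replay that toy model without binary rounding getting in the way, Python's decimal module can do the arithmetic exactly (the 2*10^-16 and 1*10^-16 here are the made-up errors from above, not the real ones):

    from decimal import Decimal

    a = Decimal('4.64') - Decimal('2E-16')  # pretend 4.64 is stored 2*10^-16 too small
    b = Decimal('2.0')  + Decimal('1E-16')  # pretend 2.0 is stored 1*10^-16 too large
    print(a - b)  # 2.6399999999999997, i.e. 2.64 - 3*10^-16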
It's also possible that 4.64 simply has a larger error than 2.64, even if 2.0 has no error at all. If 4.64's representation is 3*10^-16 too small, you get the same result:
(4.64 - 3*10^-16) - 2.0 = 2.64 - 3*10^-16
Again, if the representation of 2.64 is off by only 1*10^-16, this result would not compare equal to 2.64.
I don't know the exact errors in the real representations, but something similar to that is happening, just with different values. Hope that makes sense. Feel free to ask for clarification.
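If you do want to see the real errors, Python can show them: constructing a Decimal from a float converts the stored double exactly, with no extra rounding, so it reveals precisely what each literal became:

    from decimal import Decimal

    print(Decimal(4.64))  # 4.639999999999999680...  (about 3.2*10^-16 too small)
    print(Decimal(2.0))   # 2  (powers of two are stored exactly)
    print(Decimal(2.64))  # 2.640000000000000124...  (about 1.2*10^-16 too large)

As it turns out, that looks a lot like the second scenario above: 2.0 has no error at all, 4.64's representation is the one that comes up short, and 2.64's own representation actually errs slightly in the other direction.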