Well, if you think about it this way:
1 byte is 8 bits
1 KB = 1024 bytes
1 MB = 1024 KB
1 GB = 1024 MB
... and so on ...
It's not just that 2^n shows up everywhere. Memory in computing is closely tied to the number eight - the number of bits that makes up one byte on most modern computers.
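The pattern above is easy to sanity-check in a couple of lines of Python (the names KB/MB/GB here are just illustrative constants): each step up multiplies by 1024 = 2^10, so every unit is still an exact power of two.

```python
# Each unit is 1024x the previous one, and 1024 is itself 2**10,
# so every size stays an exact power of two.
KB = 1024        # 2**10 bytes
MB = 1024 * KB   # 2**20 bytes
GB = 1024 * MB   # 2**30 bytes

print(KB == 2**10, MB == 2**20, GB == 2**30)  # True True True
```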
One of the main reasons bits are grouped together is to represent characters. Because of the binary nature of all things computing, convenient 'clumps' of bits come in powers of 2, i.e. 1, 2, 4, 8, 16, 32... (basically because they can always be divided into smaller equal packages; it also creates shortcuts for storing sizes, but that's another story). Four bits (a nybble, in some circles) can give us 2^4, or 16, unique characters. As most alphabets are larger than this, 2^8 (256 characters) is a more suitable choice.
Machines have existed that used other byte lengths (notably 7 or 9 bits). These haven't really survived, mainly because they aren't as easy to manipulate: you can't split an odd number of bits in half, so if you wanted to divide such a byte into equal parts, you would have to keep track of the length of the bitstring.
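To illustrate why an even width is handy, here's a small sketch (the function names are just illustrative) showing that an 8-bit byte splits cleanly into two 4-bit nybbles with nothing more than a shift and a mask:

```python
# A byte's even 8 bits split cleanly into two 4-bit nybbles --
# no extra length bookkeeping needed, unlike a 7- or 9-bit byte.
def split_byte(b):
    high = (b >> 4) & 0xF  # top 4 bits
    low = b & 0xF          # bottom 4 bits
    return high, low

def join_nybbles(high, low):
    return (high << 4) | low

print(split_byte(0xAB))                 # (10, 11), i.e. (0xA, 0xB)
print(join_nybbles(0xA, 0xB) == 0xAB)   # True
```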
Finally, 8 is also a convenient number in a human sense: many psychologists and the like claim that the human mind can generally recall only about 7-8 things immediately (without playing memory tricks).