The CLI specification says:
> I II.1.1.3 Character data type
>
> A CLI char type occupies 2 bytes in memory and represents a Unicode code unit using UTF-16 encoding.
There is no requirement that it be in a particular byte order, and there are good reasons to expect that the byte order will match the byte order of the other numeric types on the current architecture, i.e. on a big-endian machine one would expect the `char` type to be stored as big-endian 16-bit values.
While it's not an authoritative source, I'll note that several people who have answered or commented on How do I get a consistent byte representation of strings in C# without manually specifying an encoding? share this belief, i.e. that the endianness of the `char` type depends on the platform architecture. Several statements in the comments and answers to that question claim that `char` is big-endian on big-endian systems.
It seems to me that if the endianness of your architecture matters to you, you would have access to a CLI implementation for a big-endian architecture and could easily verify for yourself the byte order used for the `char` type. Have you made any effort to do such a verification?
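Such a verification might look like the following minimal sketch: take a `char` whose two bytes differ and inspect its raw in-memory representation with `BitConverter`. The specific value U+0102 is just an illustrative choice.

```csharp
using System;

class CharEndianCheck
{
    static void Main()
    {
        // A char whose two bytes differ, so the order is observable.
        char c = '\u0102';

        // GetBytes(char) returns the char's bytes in the machine's
        // native order.
        byte[] bytes = BitConverter.GetBytes(c);

        // On a little-endian machine this prints "02 01"; on a
        // big-endian machine one would expect "01 02".
        Console.WriteLine($"{bytes[0]:X2} {bytes[1]:X2}");
        Console.WriteLine($"IsLittleEndian: {BitConverter.IsLittleEndian}");
    }
}
```

Note that the printed order depends on the machine you run it on, which is exactly the point: running this on a big-endian CLI implementation would answer the question directly.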
All that said, it is very likely that you do not need to know the byte ordering of the `char` type. .NET provides character encoders for a wide variety of encodings, including both UTF-16LE and UTF-16BE. When you use the `char` type itself, the byte ordering is irrelevant, and in situations where the byte ordering matters, you can force a specific ordering by using the appropriate `Encoding` type. If you believe you have a situation that is an exception to these general guidelines, it would be better to post a question describing exactly what that situation is and why you believe it's an exception.
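For example, `Encoding.Unicode` is UTF-16LE and `Encoding.BigEndianUnicode` is UTF-16BE, and each produces the same byte sequence regardless of the architecture the code runs on:

```csharp
using System;
using System.Text;

class EncodingByteOrder
{
    static void Main()
    {
        string s = "A"; // U+0041

        // Encoding.Unicode is UTF-16 little-endian;
        // Encoding.BigEndianUnicode is UTF-16 big-endian.
        byte[] le = Encoding.Unicode.GetBytes(s);
        byte[] be = Encoding.BigEndianUnicode.GetBytes(s);

        Console.WriteLine(BitConverter.ToString(le)); // 41-00
        Console.WriteLine(BitConverter.ToString(be)); // 00-41
    }
}
```

Because the encoder, not the hardware, fixes the ordering, this is the portable way to produce bytes with a known endianness.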