
According to online documentation, there are differences between these fixed-width integer types. For int*_t, the width is fixed to whatever the value of * is. But for the other two types, the adjectives "fastest" and "smallest" are used in the description: they request the fastest or the smallest instance provided by the underlying data model.

What are the objective meanings of "the fastest" and "the smallest"? What is an example where this would be advantageous or even necessary?

ShadowRanger
Eduardo

1 Answer


There is no objective meaning to "fastest"; it's essentially a judgement call by the compiler writer. Typically it means expanding smaller values to the native register width of the architecture, but that's not always fastest in practice: a one-billion-entry array would probably be processed more quickly as 8-bit values (less memory traffic, better cache use), yet uint_fast8_t might be a 32-bit type because single-register manipulation is faster at that size.

"Smallest" usually means "the same size as the number of bits requested", but on unusual architectures with a limited set of sizes to choose from (e.g. old Crays made everything a 64-bit type), int_least16_t would still work (and seamlessly become a 64-bit value), while the compiler would likely error out on int16_t (because a true 16-bit integer is impossible to provide there).

The point is: if you're relying on overflow/wrapping behavior, you need to use an exact fixed-width type. Otherwise, you should probably default to the least types for maximum portability, switching to fast types in hot code paths; profiling would be needed to determine whether it actually makes any difference.

ShadowRanger