
In C, it is common (or at least possible) to target different processor architectures with the same source code. It is also common for those architectures to define integer sizes differently. To increase code portability and avoid integer size limitations, it is recommended to use the C standard integer header <stdint.h>. However, I'm confused about how this is actually implemented.

If I were to write a little C program for x86, and then decided to port it over to an 8-bit microcontroller, how does the microcontroller's compiler know how to convert 'uint32_t' to its native integer type?

Is there some mapping requirement when writing C compilers? That is, if your compiler is to be C99 compatible, does it need to have a mapping feature that replaces every uint32_t with the native type?

Thanks!

Izzo

3 Answers


Typically <stdint.h> contains the equivalent of

typedef int int32_t;
typedef unsigned uint32_t;

with actual type choices appropriate for the current machine.

In actuality it's often much more complicated than that, with a multiplicity of extra, subsidiary header files and auxiliary preprocessor macros, but the effect is the same: names like uint32_t end up being true type names, as if defined by typedef.
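For example, on a hypothetical 8-bit or 16-bit target where plain int is only 16 bits wide, the same names would simply be mapped onto wider base types. This fragment is a sketch, not any particular vendor's header:

/* sketch of <stdint.h> for a target with 16-bit int and 32-bit long */
typedef signed char        int8_t;
typedef unsigned char      uint8_t;
typedef int                int16_t;
typedef unsigned int       uint16_t;
typedef long               int32_t;
typedef unsigned long      uint32_t;

The portable program keeps saying uint32_t; only the typedef behind it changes from one target to the next.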

You asked "if your compiler is to be C99 compatible, you need to have a mapping feature?", and the answer is basically "yes", but the "mapping feature" can just be the particular types the compiler writer chooses in its distributed copy of stdint.h. (To answer your other question, yes, there are at least as many copies of <stdint.h> out there as there are compilers; there's not one master copy or anything.)

One side comment. You said, "To increase code portability and avoid integer size limitations, it is recommended to use the C standard integer header". The real recommendation is to use that header when you have special requirements, such as when you need a type with an exact size. If for some reason you need a signed type of, say, exactly 32 bits, then by all means, use int32_t from stdint.h. But most of the time, you will find that the "plain" types like int and long are perfectly fine. Please don't let anyone tell you that you must pick an exact size for every variable you declare and use a type name from stdint.h to declare it with.
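As a concrete illustration (my own sketch, not part of the original answer): exact-width types earn their keep when the width is part of an external contract, such as a record read from a file or a network packet, while ordinary local computation is fine with plain int:

#include <stdint.h>

/* On-disk record whose layout requires exact widths on every platform. */
struct record_header {
    uint32_t magic;    /* must be exactly 32 bits */
    uint16_t version;  /* must be exactly 16 bits */
    uint16_t length;   /* must be exactly 16 bits */
};

/* Ordinary computation has no such requirement; plain int is fine. */
int count_nonzero(const int *values, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (values[i] != 0)
            count++;
    return count;
}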

Steve Summit

The handling of different architectures is most likely implemented with conditional preprocessing directives such as #if or #ifdef. For instance, on a GNU/Linux platform it might look like this:

# if __WORDSIZE == 64
typedef long int        int64_t;
# else
__extension__
typedef long long int       int64_t;
# endif
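Another common technique (a sketch, not glibc's actual header) is to pick the typedefs from the integer limits the implementation already publishes in <limits.h>:

#include <limits.h>

#if UINT_MAX == 0xFFFFFFFFu
typedef int             int32_t;
typedef unsigned int    uint32_t;
#elif ULONG_MAX == 0xFFFFFFFFul
typedef long            int32_t;
typedef unsigned long   uint32_t;
#else
#error "no suitable 32-bit type for this target"
#endif

Either way, by the time your code is compiled, uint32_t is an ordinary typedef for whichever native type fits.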
Grzegorz Szpetkowski
  • This exactly addresses my question. So does only one version of stdint.h exist? And due to preprocessor directives, it can work on any machine architecture? Furthermore, where would "__WORDSIZE == 64" be defined? – Izzo Nov 05 '16 at 14:49
  • 1
    There are many different versions of `stdint.h`, written by many different people for many different needs. Some versions use conditional compilation, such as Grzegorz Szpetkowski described, so that they can target multiple environments, but there will never be one universal copy or anything like that. – Steve Summit Nov 05 '16 at 14:54

There's no magic happening, in the form of "mapping" or "translating": stdint.h simply contains a list of typedef statements. The difference is in the code generator, not the compiler front end.

For an 8-bit target, the code generator will use native instructions for arithmetic on any types which it natively supports (perhaps there is a 16-bit add instruction). For the rest, it will insert calls to library routines to implement the larger data types.

It's not uncommon for the runtime library (RTL) of an 8-bit compiler to contain routines like "long_add", "long_subtract", and so on.
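To make that concrete, here is what a 32-bit addition looks like at the source level; whether it becomes a single add instruction or a call into the runtime library is entirely up to the code generator (the helper name __long_add below is hypothetical):

#include <stdint.h>

uint32_t add32(uint32_t a, uint32_t b)
{
    /* On a 32-bit CPU this compiles to a single add instruction.
     * On an 8-bit CPU the code generator instead emits a chain of
     * 8-bit add-with-carry instructions, or a call such as
     * "call __long_add" into the compiler's runtime library
     * (hypothetical name). The C source is unchanged either way. */
    return a + b;
}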

kdopen