
A Just-in-Time (JIT) compiler can optimize a program based on runtime information that is unavailable to an Ahead-of-Time (AOT) compiler.

The most obvious example of this runtime information is the target platform, e.g. the exact CPU on which the program is running, or any accelerators such as GPUs that might be available. This is the sense in which OpenCL is JIT-compiled.

But suppose we do know ahead of time what the target platform is: we know which SIMD extensions will be available, etc. What other runtime information can a JIT-compiler exploit that is unavailable to an AOT-compiler?

A HotSpot-style JIT-compiler will automatically optimize a program's hot spots... but can't an AOT-compiler just optimize the whole program, hot spots and all?

I would like some examples of specific optimizations that a JIT-compiler can perform which an AOT-compiler cannot. Bonus points if you can provide any evidence for the effectiveness of such optimizations in "real world" scenarios.

phuclv
c--
    Have you read this: https://stackoverflow.com/questions/2106380/what-are-the-advantages-of-just-in-time-compilation-versus-ahead-of-time-compila – Anubhav Srivastava Sep 25 '18 at 21:04
  • @AnubhavSrivastava Thanks for that link. It is a similar question but neither the top-rated answer nor the accepted answer actually answer my question. There are a couple of examples of JIT-only optimizations in the other answers though: optimization across libraries and dynamic inlining with trace trees. I'd love to know how much difference those make in practice. – c-- Sep 25 '18 at 21:41
  • Reflection is the notorious problem, can't statically determine what type is needed from just a string. – Hans Passant Sep 26 '18 at 15:57

3 Answers


A JIT can optimize based on run-time information that establishes tighter constraints than could be proven at compile time. Examples:

  • It can see that a memory location is not aliased (because the code path taken never aliased it) and thus keep the variable in a register;
  • it can eliminate a test for a condition which can never occur (e.g. based on the current values of parameters);
  • it has access to the complete program and can inline code where it sees fit;
  • it can perform branch prediction based on the specific use pattern at run time so that it's optimal.

Inlining is in principle also available to the link-time optimization of modern compilers/linkers, but applying it throughout the code "just in case" may lead to prohibitive code bloat; at run time it can be applied just where necessary.

Branch prediction can be improved with normal compilers if the program is compiled twice, with a test run in between: in the first pass the code is instrumented so that it generates profiling data, which the production compilation pass then uses to optimize branch prediction. The result is still suboptimal if the test run was not representative (it is not always easy to produce typical test data, and usage patterns may shift over the lifetime of the program).
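For reference, this two-pass workflow is what GCC and Clang call profile-guided optimization; with GCC it looks roughly like this (`app.c` and `typical-input.txt` are placeholders for the real sources and workload):

```shell
# Pass 1: build an instrumented binary and run it on representative input.
gcc -O2 -fprofile-generate app.c -o app
./app < typical-input.txt    # writes *.gcda profile files

# Pass 2: rebuild, letting the recorded profile drive branch layout,
# inlining decisions, etc.
gcc -O2 -fprofile-use app.c -o app
```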

Additionally, both link-time optimization and profile-guided optimization with static compilation need significant effort in the build process (to a degree that I have not seen them employed in production at the ten or so places where I have worked in my life); with a JIT they are on by default.

Peter - Reinstate Monica

What can a JIT compiler do that an AOT compiler cannot?

In theory; nothing, because the AOT compiler can insert a JIT compiler into the resulting code if it wants to (and/or can generate self-modifying code, generate multiple alternative versions and select which version to use based on run-time information, ...).

In practice; an AOT compiler is limited by how much complexity the compiler designer felt like dealing with, the language it's compiling, and how the compiler is used. For example, some compilers (Intel's ICC) will generate multiple versions of the code and (at run-time) decide which version to use based on which CPU it's running on but most compilers aren't designed to do this; lots of languages don't provide any way to control "locality" (and reduce the chance of TLB misses and cache misses); and often a compiler is used in a way that creates barriers that prevent optimisation (e.g. separate "compilation units"/object files that are linked together later, possibly including dynamic linking, where it's impossible for an AOT compiler to do whole program optimisation and only possible to optimise parts in isolation). All of these things are implementation details and not a restriction of AOT.

In other words; in practice "AOT vs. JIT" is a comparison of implementations and not a true comparison of "AOT vs. JIT" itself; and in practice AOT gives poor performance because of implementation details, and JIT gives slightly worse than poor performance because JIT itself is bad (expensive optimisations aren't viable at all because they're being done at run-time); and the only reason that JIT seems "almost as good" is that it's only "almost as good as bad".

Brendan
  • If an AOT compiler inserts a JIT compiler into the resulting code, I think we can say the result is JIT compiled. The point of the question is to ask why it would do that - what can a JIT compiler do that the AOT compiler couldn't just do itself? Peter A. Schneider gave some examples in his answer that aren't merely implementation details. – c-- Oct 06 '18 at 09:30
  • @c--: If the AOT compiler actually inserts a JIT (for some parts of a program where it decided it's beneficial) then it'd be practically indistinguishable from AOT compiled code that used other tricks to get the same benefits. In other words, it's still AOT compiled, and everything after that is splitting hairs over hypothetical implementation details. For Peter A. Schneider's answer I can find a single example that isn't wrong - an AOT compiler can do all of the things listed (without inserting a JIT into the AOT compiled code) and do it all more efficiently than using a JIT. – Brendan Oct 07 '18 at 01:59
  • I'd love to know how an AOT can "eliminate a test for a condition which can never occur ... based on the current values of parameters", given that it doesn't know the "current values of parameters". The best an AOT can do is eliminate unreachable code based on constant propagation, surely? – c-- Oct 07 '18 at 07:40
  • @c--: I'd love to know how people can be stupid enough to believe "test/s to determine if (based on current values) a test can be eliminated" makes sense. Do they think you can decide if a test can/can't be eliminated without an additional equal or worse test? Do they realise the branch prediction in modern CPUs makes eliminating the test a futile exercise in destroying performance (even without the idiocy of "test/s to potentially avoid a test")? – Brendan Oct 07 '18 at 11:01
  • @c--: A sufficiently advanced AOT can determine if a test is always unnecessary, but can also determine more complex patterns (simple example; a "write once" variables that cause a test to go from "initially always fails" to "eventually always passes", where the compiler can modify a function pointer when the write-once variable is set so that there's never a test). These things only depend on the complexity of the compiler, not delusional fantasies of JIT advocates that aren't smart enough to realise most of the performance of JIT comes from AOT compiled native code in libraries, etc. – Brendan Oct 07 '18 at 11:03
  • There's no need to call people stupid or rant about "delusional fantasies". This question had two parts: (a) what can a JIT compiler do that an AOT compiler cannot; and (b) what evidence is there that the answers to (a) are actually effective in "real world" usage. It is valid to provide answers to (a) but concede that they rarely help in practice. – c-- Oct 20 '18 at 13:28

One advantage is that JIT compilers can profile code continuously and re-optimize the output, for example aligning or un-aligning blocks of code, de-optimizing functions, or reordering branches to reduce mispredictions.

Sure, AOT compilers can do profile-guided optimization too, but they are limited to the test cases executed by the developers and testers, which might not reflect the dynamic nature of the real inputs.

For example, Android moved from the AOT-only approach introduced with ART in KitKat to a hybrid approach in Nougat and later versions: parts of the app are first compiled ahead of time, quickly and with fewer optimizations, and then the profile collected while the app runs is used to re-optimize the app while the phone is being charged.

Android 7.0 Nougat introduced JIT compiler with code profiling to ART, which lets it constantly improve the performance of Android apps as they run. The JIT compiler complements ART's current Ahead of Time compiler and helps improve runtime performance.[9]

https://en.wikipedia.org/wiki/Android_Runtime

Some related questions:

phuclv