
Possible Duplicate:
JIT compiler vs offline compilers

So until a few minutes ago I didn't really understand what the difference between a JIT compiler and an interpreter is. Browsing through SO, I found the answer, which brought up the question in the title. As far as I've found, JIT compilers have the benefit of being able to target the specific processor they're running on and can thus produce better-optimized programs. Could somebody please give me a comparison of the pros and cons of each?

Maulrus
  • See [JIT compiler vs offline compilers](http://stackoverflow.com/questions/538056/jit-compiler-vs-offline-compilers) – Matthew Flaschen Jul 11 '10 at 04:31
  • I don't think this is at all a duplicate of the other question, given that the other question asks the exact opposite of this question: "Are there scenarios where JIT compiler is faster than other compilers like C++?" (I will vote to reopen if this gets closed.) – Ken Bloom Jul 11 '10 at 05:10

6 Answers


Interpreter, JIT Compiler and "Offline" Compiler

Difference between a JIT compiler and an interpreter

To keep it simple, let's just say that an interpreter runs the bytecode (the intermediate code/language) directly. When the VM/interpreter decides it is better to do so, the JIT compilation mechanism translates that same bytecode into native code targeted at the hardware in question, with a focus on the kind of optimizations requested.

So basically a JIT might produce a faster executable but take way longer to compile?

I think what you are missing is that JIT compilation happens at runtime, not at compile time (unlike an "offline" compiler).
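To make that timeline concrete, here is a minimal sketch (the class name `HotLoop` and the loop count are made up for illustration; `-XX:+PrintCompilation` is a HotSpot flag that logs each method the JIT compiles, so you can watch the switch from interpreted to native code while the program runs):

```java
// HotLoop.java -- compile ahead of time with:  javac HotLoop.java
// Run with:  java -XX:+PrintCompilation HotLoop
// javac produces bytecode before the program runs; the JVM starts out
// interpreting that bytecode and only JIT-compiles square() once it is hot.
public class HotLoop {
    static long square(long x) {
        return x * x;               // trivial body, but called often enough to get "hot"
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i);       // early iterations run in the interpreter;
                                    // later ones run the JIT-compiled native code
        }
        System.out.println(sum);
    }
}
```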

JIT Compilation has overhead

Compiling code is not free; it takes time too. If the VM invests time compiling a piece of code and then runs it only a few times, it may not have made a good trade. So the VM still has to decide what to treat as a "hot spot" and JIT-compile it.

Allow me to give some examples from the Java Virtual Machine (JVM):

The JVM accepts switches with which you can define the threshold after which code will be JIT-compiled, e.g. -XX:CompileThreshold=10000.

To illustrate the cost of JIT compilation time, suppose you set that threshold to 20 and have a piece of code that needs to run 21 times. After it has run 20 times, the VM invests some time in JIT-compiling it. You now have native code from the JIT compilation, but it will only run one more time (the 21st), which may not bring enough of a performance boost to make up for the cost of the JIT process.

I hope this illustrates it.

Here is a JVM switch that reports the time spent on JIT compilation: -XX:+CITime ("Prints time spent in JIT Compiler").
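As a rough sketch of the 20/21 experiment above (the class and method names are invented, and on modern HotSpot builds tiered compilation changes how CompileThreshold is interpreted, so treat the flags as illustrative rather than exact):

```java
// RunTwentyOne.java
// Suggested invocation (HotSpot-specific, illustrative):
//   java -XX:-TieredCompilation -XX:CompileThreshold=20 -XX:+PrintCompilation -XX:+CITime RunTwentyOne
// -XX:CompileThreshold=20   request compilation after roughly 20 invocations
// -XX:+PrintCompilation     log when work() actually gets compiled
// -XX:+CITime               print total time spent in the JIT compiler at exit
public class RunTwentyOne {
    static int work(int x) {
        return x * 31 + 7;   // stand-in for the "piece of code that needs to run 21 times"
    }

    public static void main(String[] args) {
        int acc = 0;
        for (int i = 0; i < 21; i++) {
            acc += work(i);  // the 21st call is the only one that could benefit
        }                    // from the native code produced after call 20
        System.out.println(acc);
    }
}
```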

Side Note: I don't think it's a "big deal", just something I wanted to point out since you brought up the question.

bakkal
  • So basically a JIT might produce a faster executable but take way longer to compile? – Maulrus Jul 11 '10 at 04:36
  • I wouldn't say that it is way longer, but there is overhead while the program is running. – TofuBeer Jul 11 '10 at 04:45
  • Oh, I think I misunderstood what a JIT compiler is. I had assumed it would just fully compile the program when it was first run. Does it also function like an interpreter? – Maulrus Jul 11 '10 at 05:03
  • @Maulrus: it depends on the goals of the JIT compiler, and the kind of optimizations the designers wanted to support. Some JITs do a full recompilation at startup; others compile parts as they determine what needs optimization the most. – Ken Bloom Jul 11 '10 at 05:13
  • Okay, I think I understand well enough now. Thanks! – Maulrus Jul 11 '10 at 05:17
  • Is that possible, and why don't I see any solutions to pre-compile with the JIT? Say I have a specific machine and I want to boost the speed of my application (NodeJS) to the maximum; can I let the machine pre-compile and then use the result later? – Đinh Anh Huy Aug 10 '20 at 17:35

JIT compilation doesn't inherently mean it is easy to disassemble. That is more implementation-dependent, such as with Java binaries. Note, however, that JIT can be applied to any kind of executable, whether it is Java, Python or even an already-compiled binary from C++ or similar. (IIRC, the Dynamo project involved re-compiling such binaries on-the-fly to increase performance.)

The trade-off for JIT compilation is that while the process's goal is to increase runtime performance, the process actually occurs at runtime as well, and so it incurs overhead while analyzing, compiling, and validating code fragments. If the implementation is inefficient or not enough optimizations occur, then it actually produces a performance degradation.

The other trade-off is that in some cases the JIT compilation can be very wasteful. For example, consider a self-modifying executable. If you compile a fragment of code, and then the executable modifies that fragment, you have to throw away the compiled fragment and then re-analyze that segment to determine if it is worth re-compiling. If this happens frequently, there is a significant performance hit.
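The closest everyday JVM analogue of throwing away compiled fragments is deoptimization (not the self-modifying-binary case above, but the same "discard and redo" cost). The sketch below uses invented class names, and the exact behavior is implementation-dependent, so take it as illustrative: HotSpot may speculatively optimize a call site that has only ever seen one implementation, then has to discard that compiled code when a second implementation appears.

```java
// Deopt.java -- run with:  java -XX:+PrintCompilation Deopt
// Entries marked "made not entrant" indicate compiled code being discarded.
interface Shape {
    double area();
}

class Square implements Shape {
    public double area() { return 4.0; }
}

class Circle implements Shape {
    public double area() { return Math.PI; }
}

public class Deopt {
    static double total(Shape s, int n) {
        double sum = 0;
        for (int i = 0; i < n; i++) {
            sum += s.area();   // only Square has been seen so far, so the JIT
        }                      // may devirtualize/inline this call speculatively
        return sum;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 50_000; i++) {
            total(new Square(), 10);   // warm-up: the call site is monomorphic
        }
        // A second receiver type invalidates the speculative assumption;
        // previously compiled code for total() may be thrown away and redone.
        System.out.println(total(new Circle(), 10));
    }
}
```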

Finally, there is a hit in memory consumption, as compiled code fragments must reside in memory in order to be effective. This can make it impractical for devices with limited amounts of memory, or else extremely difficult to implement well.

Zac
  • Apple's Rosetta JIT-compiles PowerPC code to x86. – Ken Bloom Jul 11 '10 at 05:00
  • Apple's Mac 68K emulator (on PCI PowerMacs) also uses JIT compilation. – Ken Bloom Jul 11 '10 at 05:21
  • Both examples are a special form of JIT compilation known as Binary Translation (see http://en.wikipedia.org/wiki/Binary_translation). I'm not particularly familiar with either ("I'm a PC"), but I imagine both employ Dynamic Binary Translation. – Zac Jul 14 '10 at 16:06

For me, at least, lack of inline ASM is a big one. Once in a while, you just want complete control over every detail of the CPU for some small part of your program. Even when I don't need it for the task at hand, I like the idea that everything my computer is capable of can, in principle, be done within my language.

dsimcha

JIT compilers have a lot more memory overhead since they need to load a compiler and interpreter in addition to the runtime libraries and compiled code that an AOT (ahead-of-time) compiled program requires.
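In HotSpot terms, one visible slice of that memory overhead is the code cache the JIT compiles into. The flags below are real HotSpot options, but the size, class name, and workload are made up for illustration:

```java
// CodeCacheDemo.java
// Suggested invocation (HotSpot-specific, illustrative values):
//   java -XX:ReservedCodeCacheSize=16m -XX:+PrintCodeCache CodeCacheDemo
// -XX:ReservedCodeCacheSize caps the memory reserved for JIT-compiled code;
// -XX:+PrintCodeCache prints how much of it was used when the VM exits.
public class CodeCacheDemo {
    static double churn(double x) {
        return Math.sqrt(x) + Math.log(x + 1);  // something worth compiling
    }

    public static void main(String[] args) {
        double sum = 0;
        for (int i = 0; i < 5_000_000; i++) {
            sum += churn(i);   // keep the method hot so it lands in the code cache
        }
        System.out.println(sum);
    }
}
```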

Ken Bloom

JIT compilers are harder to write (not the whole story, but worth mentioning).

Hogan

I would say one real disadvantage of using a JIT compiler (more of a side effect, really) is that it is easy to disassemble the IL into human-readable code.
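For the Java case specifically, the standard javap tool dumps a class file's bytecode in readable form. The class below is a made-up example; the mnemonics mentioned in the comment are only a rough description of what the output contains:

```java
// Greeter.java -- compile with:  javac Greeter.java
// Then disassemble the resulting bytecode with:  javap -c Greeter
// The output lists readable mnemonics (getstatic, ldc, invokevirtual, ...)
// that map closely back to the source, which is what makes decompilation easy.
public class Greeter {
    public static void main(String[] args) {
        System.out.println("Hello, JIT");
    }
}
```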

Mitch Wheat
  • Not so! The Java language and Java bytecodes demonstrate this property, but JRuby programs compiled to Java bytecode can't be comprehensibly decompiled. Ditto for PowerPC programs JITted into x86 machine code using Apple's Rosetta. – Ken Bloom Jul 11 '10 at 04:51
  • It's also not the *only* disadvantage. – Ken Bloom Jul 11 '10 at 04:51
  • Scala programs go through several levels of syntactic desugaring (which makes them harder, though not impossible, to read) before being compiled into Java bytecode. – Ken Bloom Jul 11 '10 at 04:58
  • @Ken Bloom: sure, there are obfuscating programs for .NET, but not as vanilla. – Mitch Wheat Jul 11 '10 at 05:01
  • JITs don't have to be for IL. HP Labs' Dynamo project was basically a JIT for HP/UX machine code. The JIT and the underlying architecture are completely independent. – Ken Jul 11 '10 at 05:18
  • @Ken: I don't recall saying that "The JIT and the underlying architecture were dependent"? If you are JIT'ing then you have an intermediate language (IL) – Mitch Wheat Jul 11 '10 at 05:41
  • Mitch: I can't figure out what you're implying. When Dynamo was created, PA-RISC machine code *became* an IL? Or Dynamo is not a JIT because PA-RISC chips exist? (Though hardware JVM chips exist, and I don't think anybody claims they make the Hotspot JIT not-a-JIT.) Or something else? – Ken Jul 11 '10 at 14:49