Optimising trace-based JIT compilation of an abstract machine based interpreter
Instead of compiling an entire program upfront, a JIT compiler compiles only those parts of the code that are frequently executed. This avoids unnecessary compilation work and at the same time opens up opportunities for a number of new optimisations.
Different approaches to JIT compilation exist. A traditional method-level JIT compiler compiles frequently called methods. A tracing JIT, by contrast, compiles specific code paths, and does so across intervening method calls, loops, and conditionals [1, 3].
A tracing JIT starts out by monitoring the interpretation of a program to find loops: it looks for jumps in the control flow that target already-executed code. When the same jump has been taken a certain number of times, the JIT starts recording the interpreter's actions into a trace. When a jump to the target that initiated the tracing is encountered again, the JIT compiles the recorded trace and associates the resulting native code with that jump target. From then on, whenever the interpreter reaches a jump target for which a native trace is available, it executes that trace instead of continuing regular interpretation.
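The detect-record-replay cycle described above can be sketched in a few dozen lines. The following Python sketch uses an invented two-instruction bytecode, a hypothetical hotness threshold, and trace replay in place of real native-code compilation; none of these details are taken from STRAF or any production JIT.

```python
# Minimal tracing-JIT skeleton: detect a hot backward jump, record one
# loop iteration as a trace, then replay that trace (standing in for
# compiled native code) instead of re-interpreting the loop.
# Instruction set, threshold, and loop bound are all illustrative.

HOT_THRESHOLD = 3   # jumps to the same target before recording starts
LOOP_LIMIT = 100    # the toy program loops while its accumulator < 100

def run(program):
    pc, acc = 0, 0
    counts, traces = {}, {}          # jump target -> count / recorded trace
    rec_target, rec_body = None, None
    while True:
        # Fast path: a trace exists for this target and we are not recording.
        if pc in traces and rec_target is None:
            body, exit_pc = traces[pc]
            while acc < LOOP_LIMIT:  # guard: the recorded loop condition
                for op, arg in body:
                    if op == "add":
                        acc += arg
            pc = exit_pc             # guard failed: resume at the side exit
            continue
        op, arg = program[pc]
        if rec_target is not None and op != "jump_if_lt":
            rec_body.append((op, arg))   # recording: log each instruction
        if op == "add":
            acc += arg
            pc += 1
        elif op == "jump_if_lt":         # backward jump closes the loop
            target = arg
            if acc < LOOP_LIMIT:
                counts[target] = counts.get(target, 0) + 1
                if rec_target == target:
                    traces[target] = (rec_body, pc + 1)   # close the trace
                    rec_target = None
                elif rec_target is None and counts[target] >= HOT_THRESHOLD:
                    rec_target, rec_body = target, []     # start recording
                pc = target
            else:
                pc += 1
        elif op == "halt":
            return acc

# A loop that adds 7 until the accumulator reaches 100:
program = [("add", 7), ("jump_if_lt", 0), ("halt", None)]
```

After three taken jumps the interpreter records one iteration, and every later iteration runs from the trace; `run(program)` yields the same result (105) as plain interpretation would.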
While the basic operation of a tracing JIT compiler is fairly straightforward to comprehend, the devil is in the details. Since a trace represents a snapshot of program execution at a particular point in time, it inevitably carries assumptions that must be guarded when the trace is re-executed. Trace execution can therefore be prevented from starting, or even aborted while in progress, when a guard condition installed during recording fails.
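As a concrete illustration of such guards, the following sketch attaches a type check to a trace that was (hypothetically) recorded while a variable held an integer; if the guard fails at re-execution time, control falls back to the generic interpreter. The names `GuardFailure`, `make_trace`, and `interpret` are invented for this example.

```python
# A trace is only valid under the assumptions that held when it was
# recorded. Here the assumption is "x is an int"; the guard re-checks
# it, and a failure aborts the trace and deoptimises to interpretation.

class GuardFailure(Exception):
    """Raised to abort trace execution and fall back to the interpreter."""

def make_trace(recorded_type):
    def trace(env):
        if not isinstance(env["x"], recorded_type):   # the guard
            raise GuardFailure("type of x changed since recording")
        return env["x"] * 2 + 1   # specialised, straight-line trace code
    return trace

def interpret(env):
    return env["x"] * 2 + 1       # generic slow path: handles any number

def run_with_fallback(trace, env):
    try:
        return trace(env)          # fast path: execute the native trace
    except GuardFailure:
        return interpret(env)      # guard failed: regular interpretation
```

With an integer the trace runs to completion; with a float the guard fires and the interpreter produces the result instead, transparently to the caller.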
Existing tracing JIT compilers apply a wide variety of optimisations and are generally successful at executing programs with high performance. However, these compilers usually work with bytecode interpreters: interpreters that first compile a source program to bytecode and then execute the program by dispatching on each bytecode instruction.
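For comparison with the abstract machine approach discussed next, this is what such a dispatch loop looks like in miniature; the stack-based opcodes (`PUSH`, `ADD`, `MUL`) are invented for illustration.

```python
# A minimal bytecode interpreter: the source program has been compiled
# to a flat list of (opcode, operand) pairs, and execution is a single
# dispatch loop over those instructions.

def execute(bytecode):
    stack, pc = [], 0
    while pc < len(bytecode):
        op, arg = bytecode[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        pc += 1
    return stack.pop()

# The expression (1 + 2) * 4, compiled to stack bytecode:
program = [("PUSH", 1), ("PUSH", 2), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]
```

Note that all interpreter state beyond the operand stack is implicit in the host language; this is precisely what an abstract machine based interpreter makes explicit.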
An abstract machine based interpreter, on the other hand, executes a program by explicitly storing a program state and updating this state at each program point. Abstract machine based interpreters are arguably easier to design, but they are also usually less performant. Additionally, our anecdotal experience indicates that it is more difficult to implement optimisations for traces gathered from an abstract machine.
Recently, STRAF, an extensible Scala framework for trace-based JIT compilation, was developed at the Software Languages Lab. This framework is coupled with an interpreter for a Scheme-like language that is based on an abstract machine (specifically, a CESK machine [2, 4]).
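To make the contrast with the bytecode dispatch loop concrete, the following is a minimal CESK-style interpreter in the spirit of [2, 4]: the state is an explicit (control, environment, store, continuation) tuple, and the machine runs by repeatedly applying a step function to it. The toy A-normal-form language (integers, variables, `add`, `let`) is a stand-in for, not a subset of, STRAF's actual Scheme-like input language.

```python
# CESK machine sketch: every intermediate state of the computation is a
# first-class (Control, Environment, Store, Kontinuation) tuple.

def atomic(a, env, store):
    # Atoms evaluate without touching the continuation.
    return a if isinstance(a, int) else store[env[a]]

def step(state):
    control, env, store, kont = state
    tag = control[0] if isinstance(control, tuple) else None
    if tag == "add":                       # ("add", a1, a2): both atomic
        _, a1, a2 = control
        value = atomic(a1, env, store) + atomic(a2, env, store)
        return (("value", value), env, store, kont)
    if tag == "let":                       # ("let", x, rhs, body)
        _, x, rhs, body = control
        return (rhs, env, store, ("letk", x, body, env, kont))
    if tag == "value" and kont is not None:
        _, value = control                 # resume the pending let
        _, x, body, saved_env, saved_kont = kont
        addr = len(store)                  # allocate a fresh store address
        new_store = dict(store); new_store[addr] = value
        new_env = dict(saved_env); new_env[x] = addr
        return (body, new_env, new_store, saved_kont)
    if not isinstance(control, tuple):     # a bare atom in control position
        return (("value", atomic(control, env, store)), env, store, kont)
    return None                            # final state: nothing left to do

def run(program):
    state = (program, {}, {}, None)        # inject the program
    while True:
        next_state = step(state)
        if next_state is None:
            return state[0][1]             # unwrap ("value", v)
        state = next_state
```

Because every step produces a complete new state, a trace over this interpreter records state transitions rather than bytecode effects, which hints at why such traces are harder to optimise.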
STRAF gathers traces of this interpreter's execution and already optimises them using a limited set of strategies, including constant and variable folding. However, these optimisations do not always provide the performance improvement one might expect.
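To fix intuitions about what such a trace optimisation does, here is a sketch of constant folding over a recorded trace. The register-based instruction format `(dst, op, a, b)` is invented for this example and is not STRAF's trace representation; the pass simply evaluates any instruction whose operands are all known constants at optimisation time.

```python
# Constant folding over a linear trace. Instructions are
# (destination, operation, operand, operand); operands are either int
# literals or register names. Only "add" and "mul" exist in this toy.

def fold_constants(trace):
    known = {}                        # register -> constant value
    out = []                          # the optimised trace
    for dst, op, a, b in trace:
        # Propagate constants into operand positions.
        a = known.get(a, a)
        b = known.get(b, b)
        if isinstance(a, int) and isinstance(b, int):
            # Fully constant: evaluate now and drop the instruction.
            known[dst] = a + b if op == "add" else a * b
        else:
            out.append((dst, op, a, b))   # runtime-dependent: keep it
    return out, known

trace = [("r1", "add", 1, 2),        # folds to the constant 3
         ("r2", "mul", "r1", 4),     # folds to the constant 12
         ("r3", "add", "x", "r2")]   # depends on runtime value x: kept
```

On this trace the first two instructions disappear entirely, and the third survives with the folded constant 12 substituted for `r2`.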
The goal of this thesis is therefore to investigate which further optimisations can be applied by STRAF, and to implement them.
Concretely, the student is expected to:
- Investigate the hypothesis that abstract machine based interpreters are harder to optimise.
- Design and implement optimisations for traces of abstract machines in STRAF.
- Potentially adapt the instruction set used by STRAF's interpreter so that it can be optimised more effectively.
- Validate their work by comparing the performance of STRAF with and without these new optimisations.
[1] A. Gal, C. W. Probst, and M. Franz. HotpathVM: An effective JIT compiler for resource-constrained devices. In Proceedings of the 2nd International Conference on Virtual Execution Environments (VEE '06), 2006.
[2] M. Felleisen and D. P. Friedman. A calculus for assignments in higher-order languages. In Proceedings of the 14th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages (POPL '87), 1987.
[3] C. Häubl and H. Mössenböck. Trace-based compilation for the Java HotSpot virtual machine. In Proceedings of the 9th International Conference on Principles and Practice of Programming in Java (PPPJ '11), 2011.
[4] D. Van Horn and M. Might. Abstracting abstract machines. ACM SIGPLAN Notices, 45(9), 2010.