[Mono-list] Future of JIT
Sun, 02 Feb 2003 02:34:02 +0100
Have you considered VCODE/ICODE as a basis for your JIT backend?
It was developed and used for runtime code generation in the 'C (Tick C)
compiler, which is in turn based upon LCC. And since its command set is
tuned for LCC, it should fit the .NET architecture as well, if I'm not
wrong. I haven't read many of the important papers for lack of time, so
don't take what I say here as the sole truth; it may have rumours,
misconceptions and other stuff mixed in.
It has been ported well to all widespread RISC CPUs, and there is a
half-broken x86 port. I have been intending to make a "real" x86 port
for use in some project of mine. But I think Mono could be a better use
of my time, and I would like to be of use if I have time and our
targets match.
VCODE is a set of C macros that generate target machine code from a
generalized RISC instruction set without an intermediate representation,
much like GNU Lightning does, with the difference that GNU Lightning is
broken.
ICODE is a binary representation for VCODE which also integrates a
number of generic optimisations and register allocation, reaching
considerable execution speed.
The thing to check: *licensing issues* (?). I didn't know under which
conditions it was licensed.
- Checked already. It's "fair use": not to be sold, otherwise use as
desired and retain the copyright. It even allows usage in commercial
products.
I think the system should be as staged as possible, so that only a
minor part needs to be ported to a new architecture, and so that
improving some genuinely platform-independent stuff can't break one
platform while leaving others working. Besides, VCODE already contains
a lot of work done by others.
I also think it should be possible to attach a peephole optimiser to
VCODE. Any time a label is issued, if the optimiser is enabled, it
would disassemble (in a simplified way, extracting a minimum of
information) all the code generated since the last label and write
"tags" noting where each CPU instruction starts and some basic
properties of it, like LCC-Win32 does. Then peephole optimisation is
basically pattern matching, disassembling certain instructions further
as the information is needed, and so on as usual. Finally, the last
label should be moved back.
Then I think compilation should be "lazy". The first time a function is
compiled, it is done the fastest way, optimising nothing. Functions
which it requires are not compiled. Arguments are placed on the stack as
usual, but with a CALL to some (assembly-written) dispatcher function.
When called, this dispatcher does the following:
- Looks at the return address. Using that, it finds out exactly which
function is to be called at this place. (This information needs to be
generated when compiling the original caller.)
- If the function to be called is already compiled, it replaces the
call address (which is placed just before the return address) with the
actual address of the function to be called, calls that function, and
returns.
- If the function is not yet compiled, it compiles it, then proceeds as
in the previous case. The next time, the caller will call the other
function directly, without going through the dispatcher.
The first time a function is compiled, it is compiled without
optimisations. I guess it would be OK to place a call to a counter at
the beginning of such a function, since it's not compiled for speed...
As soon as this counter decides that the function has been called
"often enough", it should recompile the function with all optimisations
on, this time without embedding the counter. The system to replace
addresses might either be the same as before, or a more rapid one, so
that the unoptimised function can be deleted at once.
Of course, there could just as well be multiple optimisation stages,
but that might be a later step.