[Mono-devel-list] JIT profiling/benchmarking
Paolo Molaro
lupus at ximian.com
Tue Dec 30 06:29:50 EST 2003
On 12/29/03 ppham at mit.edu wrote:
> I am interested in profiling/benchmarking the JIT compiler in the Mono runtime
> to see how long JITting takes as a function of method length (in IL
> instructions). My objective is to find (roughly) the break-even point where
> pure interpretation and JIT compilation are equally fast.
The method length can't be the only variable; you also need to take into
account the number of times a method is executed. Also, any such
measurement is greatly influenced by how much the operations in the
method are optimized in the interp or in the jit, so you have to
define what kind of operations you're interested in measuring or the
numbers won't have much meaning.
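To make that concrete, here is a very crude cost model (a sketch only:
the names and numbers are made up, and it ignores everything said above
about what the method actually does):

using System;

class BreakEven {
	// Crude model: tJit is the one-time compile cost, tInterp and
	// tNative the per-call cost in the interp and in jitted code
	// (all in the same unit, and assuming tInterp > tNative).
	static double Calls (double tJit, double tInterp, double tNative)
	{
		// interpreted: n * tInterp    jitted: tJit + n * tNative
		// they are equal when n = tJit / (tInterp - tNative)
		return tJit / (tInterp - tNative);
	}

	static void Main ()
	{
		// made-up numbers, just to show the shape of the tradeoff
		Console.WriteLine (Calls (500.0, 2.0, 0.5));
	}
}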
> My benchmark program includes several methods of varying length, which generate
> from 2000 IL instructions to about 20,000, and I am measuring times (crudely)
Is your benchmark supposed to have any meaning for real world programs?
You can use the monograph program to get some statistics from commonly
used programs and assemblies: it turns out that in the few assemblies
I have here, the average method length is less than 100 bytes, with a
few peaks at about 4000 bytes (25k in mcs, because of its huge parse method).
> using System.DateTime.Now.Milliseconds within the program. Each method is
You should have found out that using Environment.TickCount is way faster
to execute than DateTime.Now.Milliseconds and hence has a much lower
impact on measurement errors.
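Something along these lines keeps the timer overhead low (just a sketch:
BenchmarkedMethod is a made-up placeholder for the code you generate):

using System;

class Timing {
	static void Main ()
	{
		// Environment.TickCount is a cheap millisecond counter,
		// so the timing calls themselves disturb the result less.
		int start = Environment.TickCount;
		for (int i = 0; i < 1000; i++)
			BenchmarkedMethod ();
		int elapsed = Environment.TickCount - start;
		Console.WriteLine ("{0} ms for 1000 calls", elapsed);
	}

	static void BenchmarkedMethod ()
	{
		// placeholder for the generated IL you want to measure
	}
}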
> It was my assumption that the first time a method is run, it is JIT compiled and
> cached, and for every call after that the cached/compiled version is run.
> However, that seems not to be the case as I cannot measure any significant
> difference between the first time a method is run and every subsequent time.
Currently when you create a delegate the delegated method is compiled
right away, so when you execute it, it's already compiled.
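So if your harness calls the test methods through delegates, something
like the sketch below (names are made up) has already paid the JIT cost
by the time you start the clock:

using System;

class DelegateJit {
	delegate void TestMethod ();

	static void Main ()
	{
		// Creating the delegate compiles Target right away (in the
		// current mono), so the first invocation below is not
		// measurably slower than the following ones.
		TestMethod t = new TestMethod (Target);

		int start = Environment.TickCount;
		t ();   // already jitted at this point
		Console.WriteLine ("first call: {0} ms", Environment.TickCount - start);
	}

	static void Target ()
	{
		// made-up test method
	}
}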
> Also, mint, the Mono interpreter, appears to run these methods faster than
> mono, which has JIT by default. I am further puzzled by mint's profiling
How much faster are we talking here? How much time does the method take
to execute? Are the differences consistent? Without more info it's hard
to tell. From the looks of your test cases, you're just testing the
Decimal implementation that is mostly implemented in C code, so neither
the jit nor the interp are likely going to significantly change the
performance numbers. Also note that correctness may have its costs and
much more effort has been put into getting the jit to work correctly in
many cases, so the interp may give lower numbers just because it doesn't
bother to do a number of checks or deal with some special cases (the
mono interp is not and should not be considered a CLR engine as far as
I'm concerned: it's more of a porting aid for the rest of mono).
> output which includes statistics such as "Time spent in compilation" and
> "Slowest method to compile", which shouldn't be in an interpreted CLR at all.
IL code is not suitable for direct interpretation, so the interp needs to
do a prepass on the code and we report that time as 'compilation time'.
> If someone could explain how to better benchmark JIT times in Mono, or even
> in .NET on Windows for comparison, I would greatly appreciate it.
> Snippets of my benchmarking program are below.
*) measure on the actual runtime whether Environment.TickCount or
DateTime.Now is faster: use the faster of the two (or use other ways,
too, like pinvoking into gettimeofday() etc). A rough harness along
these lines is sketched after this list.
*) make sure you _can_ measure something with the instrument you have
(jit times are usually very small, so it's hard to measure them in the
context of an app executing a method, since you can do it only once
anyway). You need to repeat a compilation many times with something
like:
time ./mono --ncompile 10000 --compile Test:method_name test.exe
Or you may want to change the source and add your own timing
measurements so that startup time is not taken into account.
*) make sure you're really measuring the things you want to measure
(like the time it takes to execute or jit DateTime.Now...). Also, if you want
to measure the jit times and the time taken to execute some code, you'd
better choose code that is actually jitted and not code that is
implemented as internal calls (C code compiled by your C compiler for
both execution engines). You also probably want to take the lowest
of a few (>5) runs instead of the average.
Unless you're measuring the exception handling code, benchmarks should
not throw exceptions.
*) measure something that has relevance to the real world:-)
*) use mono -O=all to enable all the currently implemented
optimizations. Play with different optimization options to see how they
affect compile and run times.
*) if you want to use constants for the loop counts, use:
const int LOOPCOUNT = 100;
instead of
readonly static int LOOPCOUNT = 100;
(a const is embedded as a literal in the IL by the compiler, while a
readonly static field has to be loaded at run time, so it adds its own
overhead to the loop).
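Putting the timer and "lowest of several runs" advice together, a rough
harness could look like the sketch below (the gettimeofday() binding and
struct layout are assumptions for a common 32-bit unix, and TestMethod
is a made-up placeholder for whatever jitted code you want to time):

using System;
using System.Runtime.InteropServices;

class Harness {
	// assumed layout matching struct timeval on a common 32-bit unix
	struct timeval { public int tv_sec; public int tv_usec; }

	[DllImport ("libc")]
	static extern int gettimeofday (ref timeval tv, IntPtr tz);

	static long Microseconds ()
	{
		timeval tv = new timeval ();
		gettimeofday (ref tv, IntPtr.Zero);
		return (long) tv.tv_sec * 1000000 + tv.tv_usec;
	}

	static void Main ()
	{
		long best = long.MaxValue;
		for (int run = 0; run < 7; run++) {      // > 5 runs, keep the lowest
			long start = Microseconds ();
			for (int i = 0; i < 100000; i++)
				TestMethod ();
			long elapsed = Microseconds () - start;
			if (elapsed < best)
				best = elapsed;
		}
		Console.WriteLine ("best run: {0} usecs", best);
	}

	static void TestMethod ()
	{
		// placeholder: put the jitted code you want to time here
	}
}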
lupus
--
-----------------------------------------------------------------
lupus at debian.org debian/rules
lupus at ximian.com Monkeys do it better