[Mono-dev] Incremental C# compiler

Jonathan Gilbert 2a5gjx302 at sneakemail.com
Wed Jul 12 11:45:37 EDT 2006


At 11:55 AM 12/07/2006 -0300, Rafael Teixeira wrote:
>> What if we add the constraint that only the bodies of methods can
>> change? The metadata of the new code would be determined on the first
>> run and then it would never change and thus it would not need to be
>> invalidated. Also the previously done semantic analysis for any
>> unchanged functions would still be valid.
>
>That surely would be a good start, but day-to-day use of
>edit-and-continue normally entails adding methods/properties, besides
>changing method/accessor internals, so probably a class recompile
>would be a better target, the constraint being that no breaking of ABI
>(just additions/internal changes) would be acceptable.

I don't see any problem with removing methods -- it would just require an
extra pass over all of the presently-loaded IL to ensure that no code calls
the method being removed. If it's easy to walk the heap and find delegates,
that would also be a necessary check; if it isn't possible, then any code
that constructs a delegate to the method would have to be enough to cancel
the edit-and-continue pass. The heap would also need to be checked for
System.Reflection.MethodInfo objects which might call the old code. These
are all relatively simple tests, as I understand it. As for the actual
removal, something like replacing the pointer to the underlying
MonoMethodHeader's IL code with a NULL pointer should be enough to mark the
method as "no longer available" and to prevent new code from being loaded
which might want to call the method. I don't think it would be an
unreasonable restriction to not allow users to remove a method and
continue, and then subsequently add a new method with exactly the same
signature and continue again -- but even there, if we find an existing
object for that method signature, all we need to do is fill in its IL code
with the replacement code to make it active again.
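
To make the "extra pass over the presently-loaded IL" a bit more concrete,
here is a minimal managed-side sketch using nothing but System.Reflection.
The RemovalCheck class is invented for illustration: the real check would
live inside the runtime and walk the loaded MonoMethodHeaders directly,
and a real scanner would decode opcodes and their operand sizes properly
(and also consider constructors, ldftn and newobj for delegates) instead
of naively scanning raw bytes as this one does.

using System;
using System.Reflection;

static class RemovalCheck
{
    const BindingFlags All = BindingFlags.Public | BindingFlags.NonPublic |
                             BindingFlags.Instance | BindingFlags.Static |
                             BindingFlags.DeclaredOnly;

    // Returns true if any currently loaded method still contains a
    // call/callvirt to the method we would like to remove.
    public static bool IsStillReferenced(MethodBase target)
    {
        foreach (Assembly asm in AppDomain.CurrentDomain.GetAssemblies())
            foreach (Type type in asm.GetTypes())
                foreach (MethodBase caller in type.GetMethods(All))
                {
                    MethodBody body = caller.GetMethodBody();
                    if (body == null)          // abstract, extern, etc.
                        continue;

                    byte[] il = body.GetILAsByteArray();
                    for (int i = 0; i + 4 < il.Length; i++)
                    {
                        // 0x28 = call, 0x6F = callvirt.  (Naive: a real
                        // scanner must skip other opcodes' operands.)
                        if (il[i] != 0x28 && il[i] != 0x6F)
                            continue;

                        int token = BitConverter.ToInt32(il, i + 1);

                        MethodBase callee;
                        try { callee = caller.Module.ResolveMethod(token); }
                        catch (ArgumentException) { continue; }

                        if (callee.MethodHandle == target.MethodHandle)
                            return true;       // someone still calls it
                    }
                }

        return false;
    }
}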

>Also, some refactorings should be supported where possible, like
>renaming methods (very easy for private members, harder for the
>internal/protected/public sets).

As this only needs to be done for the assemblies that are currently loaded
(and existing S.R.MethodInfo objects), I don't see how it would be any
harder from outside of a class than within it. Also, I don't think we have
to worry too much about possible memory leaks caused by edit-and-continue,
since edit-and-continue passes only happen when the user requests them,
and, in my own experience at least, are almost always followed by a proper
recompile, once the code has been massaged into a working state.

Adding, replacing and removing methods seem to be relatively easy,
provided that inlining is disabled in the JIT (existing calls to a method
can be patched easily by turning the old method's code into a trampoline;
no rewriting of unchanged code is necessary).

What's really difficult is what I frequently use in Visual Studio 2005:
editing a method's body while that method is executing, then simply
setting the next line to execute to another line in the method and
continuing, with the values of all local variables preserved. The only way
this could be "easy" is with naive JIT output that never tried to put
anything into registers and always went to the stack for the values of
variables. That would be one way to proceed -- to put the JIT into such a
mode, if this is actually possible with the current design -- but the
other alternative is to wait for the new garbage collector which, as I
understand it, has been in slow development for a rather long time now.
Basically, a non-conservative GC requires exactly the same tracking data
that edit-and-continue of a running method would require: a timeline
showing where every variable's value is stored at each instruction of the
JITted output. It would certainly not be an *easy* undertaking, but with
such data, it would be possible to first collect the values of all local
variables, then recompile the method, then match up the variables which
still exist and munge the execution context as necessary to jump to the
same line of code in the new JIT output. I think this should be the
ultimate goal of any edit-and-continue efforts made in Mono -- not that
they need to achieve this before they can be considered working and made
available, but any changes which are made should plan ahead with this
possibility in mind.
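
To illustrate the kind of tracking data I mean, here is a hypothetical
sketch of a per-method variable-location map. None of these types exist in
Mono today; they only show the shape of the information that a precise GC
and edit-and-continue of a running method would both need.

using System;

enum StorageKind { Register, StackSlot, Dead }

struct VariableLocation
{
    public StorageKind Kind;
    public int RegisterOrOffset;   // register number, or frame-pointer offset
}

struct LivenessRange
{
    public int NativeStart;        // [NativeStart, NativeEnd) in the JITted code
    public int NativeEnd;
    public VariableLocation[] Locals;     // indexed by IL local number
    public VariableLocation[] Arguments;  // indexed by argument number
}

class MethodVariableMap
{
    public LivenessRange[] Ranges;

    // Given the native offset where a thread is currently stopped, find
    // out where every local variable's value can be read from (and later
    // written back into the recompiled method's frame).
    public LivenessRange Lookup(int nativeOffset)
    {
        foreach (LivenessRange range in Ranges)
            if (nativeOffset >= range.NativeStart && nativeOffset < range.NativeEnd)
                return range;

        throw new ArgumentOutOfRangeException("nativeOffset");
    }
}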

One other possibility which should not be discounted out of hand, I
think, is resurrecting the interpreter and bringing it up to date.
Certainly the hardest part of edit-and-continue of a running method is
reconstructing the execution context for the new version of the method,
and having an interpreter available to take over execution of that method
-- even if only for that particular call -- would make that step much,
much easier. With an interpreter, edit-and-continue of a running method
would boil down to: 1) collect data on local variables, 2) emit the new
IL, 3) set the local variables in the interpreter's context and then set
the instruction pointer to the next IL instruction. No nasty unportable
native stack munging required :-) Of course, local variable tracking data
is still necessary for this approach.
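
With an interpreter in the picture, the hand-off could be sketched roughly
like this. Every type here is invented for illustration -- Mono's
interpreter exposes no such managed API -- but it shows how the three
steps above would fit together:

class VariableSnapshot
{
    public object[] Locals;
    public object[] Arguments;
}

static class EditAndContinue
{
    // 1) collect locals from the suspended JITted frame (this is where the
    //    variable-location data discussed earlier comes in), 2) compile the
    //    edited method body to new IL, 3) resume in the interpreter.
    public static void ResumeInInterpreter(
        VariableSnapshot snapshot,   // step 1: values pulled out of the old frame
        byte[] newIlBody,            // step 2: freshly emitted IL for the method
        int resumeIlOffset)          // step 3: where execution should continue
    {
        InterpreterFrame frame = new InterpreterFrame(newIlBody);
        frame.SetLocals(snapshot.Locals);
        frame.SetArguments(snapshot.Arguments);
        frame.InstructionPointer = resumeIlOffset;
        frame.Run();                 // no native stack munging required
    }
}

// Invented placeholder for whatever the interpreter's entry point would be.
class InterpreterFrame
{
    public int InstructionPointer;
    public InterpreterFrame(byte[] il) { /* ... */ }
    public void SetLocals(object[] values) { /* ... */ }
    public void SetArguments(object[] values) { /* ... */ }
    public void Run() { /* ... */ }
}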

One thing I almost forgot to mention is that it is probably quite
important to allow a failed edit-and-continue pass to roll back its state,
so that the user can make more changes and try again (e.g. if the changes
they made were incompatible with edit-and-continue and they know how to
make alternate changes that would be compatible). It would be rather a
disappointment, I think, if the user went to edit-and-continue and,
because of a pedantic mistake which they could easily have avoided, lost
their entire debug session. :-)
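
One way to get that behaviour, sketched below, is to structure a pass as
two phases -- validate everything first, apply only if every step passes --
so a rejected edit leaves the session exactly as it was. The IEditStep
interface and EditTransaction class are invented for illustration:

using System;
using System.Collections.Generic;

interface IEditStep
{
    void Validate();   // may throw; must not touch the running program
    void Apply();      // only called after every step validated cleanly
}

static class EditTransaction
{
    // Returns false (and applies nothing) if any step is rejected,
    // leaving the debug session untouched so the user can try again.
    public static bool TryApply(IEnumerable<IEditStep> steps, out string error)
    {
        List<IEditStep> validated = new List<IEditStep>();

        foreach (IEditStep step in steps)
        {
            try { step.Validate(); }
            catch (Exception e)
            {
                error = e.Message;
                return false;      // nothing has been applied yet
            }
            validated.Add(step);
        }

        foreach (IEditStep step in validated)
            step.Apply();

        error = null;
        return true;
    }
}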

Just some ideas :-)

Jonathan Gilbert


