[Mono-dev] Incremental C# compiler

David Srbecky dsrbecky at gmail.com
Wed Jul 12 11:25:46 EDT 2006


Addition of new members should not be a problem, since it does not
involve invalidation of metadata. Deletion must be forbidden. This
leads to a simple form of updating: changed source files are sent to
the compiler, which compiles them and merges them into the existing
tree, overriding any existing members. Thus, if only one method is
changed, the IDE can strip all other methods out of the source file so
that the compiler does not have to compile them again.
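
For illustration, a minimal sketch of that merge step - every name
here is hypothetical, not gmcs's actual API:

    using System.Collections.Generic;

    // Hypothetical member tree: additions and overrides only,
    // never deletions.
    class MemberTree {
        // Keyed by fully qualified signature, e.g. "Foo.Bar(int)".
        Dictionary<string, CompiledMember> members =
            new Dictionary<string, CompiledMember> ();

        public void Merge (IEnumerable<CompiledMember> recompiled)
        {
            foreach (CompiledMember m in recompiled)
                members [m.Signature] = m;  // add new, override existing
        }

        public void Remove (string signature)
        {
            // Forbidden: deleting a member could invalidate metadata
            // that other, already-analyzed code depends on.
            throw new System.NotSupportedException ();
        }
    }

    class CompiledMember {
        public string Signature;
        public byte [] Body;  // emitted IL for this member
    }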

I would not support renaming as it is basically equivalent to deletion 
and addition - you have to check that no-one is using the old name 
anymore and thus you have to do semantic analysis on everything again.

David

Rafael Teixeira wrote:
> On 7/12/06, David Srbecky <dsrbecky at gmail.com> wrote:
>> Thank you,
>>
>> So semantic analysis is the part that takes the vast majority of the
>> time, and the problem is that gmcs cannot easily invalidate previously
>> added metadata. Right?
> 
> That is my bird's-eye-view understanding, but it surely is a very
> simplified view.
> 
>> What if we add the constraint that only the bodies of methods can
>> change? The metadata of the new code would be determined on the first
>> run and would then never change, so it would never need to be
>> invalidated. Also, the previously done semantic analysis for any
>> unchanged functions would still be valid.
> 
> That surely would be a good start, but day-to-day use of
> edit-and-continue normally entails adding methods/properties besides
> changing method/accessor internals, so a class-level recompile would
> probably be a better target, the constraint being that breaking the
> ABI is not allowed (only additions and internal changes are
> acceptable).
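> 
> For instance, that no-ABI-break constraint could be enforced with a
> rough reflection check along these lines (a sketch only, assuming
> both versions of the class can be loaded side by side):
> 
>     using System;
>     using System.Reflection;
> 
>     static class AbiCheck {
>         // True if every member declared by oldType still exists in
>         // newType with the same signature; additions are fine,
>         // removals and signature changes are not.
>         public static bool IsCompatible (Type oldType, Type newType)
>         {
>             BindingFlags flags =
>                 BindingFlags.Public | BindingFlags.NonPublic |
>                 BindingFlags.Instance | BindingFlags.Static |
>                 BindingFlags.DeclaredOnly;
>             foreach (MemberInfo old in oldType.GetMembers (flags)) {
>                 bool found = false;
>                 foreach (MemberInfo cur in newType.GetMembers (flags))
>                     if (cur.ToString () == old.ToString ()) {
>                         found = true;
>                         break;
>                     }
>                 if (!found)
>                     return false;  // a member disappeared or changed
>             }
>             return true;
>         }
>     }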
> 
> Also, some refactorings, such as renaming methods, may be possible
> and should be supported (very easy for private members, harder for
> the internal/protected/public sets).
> 
> Fun,
> 
>>
>> David
>>
>> Rafael Teixeira wrote:
>> > Lexing and parsing are normally very fast and depend only on the size
>> > of the code being parsed. Semantic analysis is normally the most
>> > time-consuming step, as loading referenced assemblies and sifting
>> > through the huge metadata to resolve symbols and types is really the
>> > meat of the compiler. Also, newly "compiled" code is "appended" to
>> > this metadata/AST, which increases the complexity of resolving
>> > symbols over time. Emission of code is done in memory first, so it
>> > is fast. Saving to disk is slow, but depends on the emitted code size.
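>> >
>> > To make the split concrete, here is a tiny Reflection.Emit example
>> > (a toy illustration, not gmcs's actual emit path). All the Emit
>> > calls below run in memory; only the final Save touches the disk:
>> >
>> >     using System;
>> >     using System.Reflection;
>> >     using System.Reflection.Emit;
>> >
>> >     class EmitDemo {
>> >         static void Main ()
>> >         {
>> >             AssemblyName name = new AssemblyName ();
>> >             name.Name = "Incremental";
>> >             AssemblyBuilder ab =
>> >                 AppDomain.CurrentDomain.DefineDynamicAssembly (
>> >                     name, AssemblyBuilderAccess.RunAndSave);
>> >             ModuleBuilder mod = ab.DefineDynamicModule (
>> >                 "Incremental", "Incremental.dll");
>> >             TypeBuilder type = mod.DefineType (
>> >                 "Hello", TypeAttributes.Public);
>> >             MethodBuilder method = type.DefineMethod ("Answer",
>> >                 MethodAttributes.Public | MethodAttributes.Static,
>> >                 typeof (int), Type.EmptyTypes);
>> >             ILGenerator il = method.GetILGenerator ();
>> >             il.Emit (OpCodes.Ldc_I4, 42);  // push the constant 42
>> >             il.Emit (OpCodes.Ret);         // and return it
>> >             type.CreateType ();            // bake the type in memory
>> >             ab.Save ("Incremental.dll");   // the only disk I/O
>> >         }
>> >     }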
>> >
>> > For incremental compiling, caching the metadata would make everything
>> > very fast, as normally very little changes from one compilation to
>> > the next. But gmcs would have to invalidate only part of the
>> > metadata/AST, which is not what it was built for.
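>> >
>> > A per-file cache could look roughly like this (purely hypothetical;
>> > gmcs has no such type) - only entries whose source files changed
>> > get re-analyzed:
>> >
>> >     using System;
>> >     using System.Collections.Generic;
>> >     using System.IO;
>> >
>> >     class MetadataCache {
>> >         Dictionary<string, FileMetadata> byFile =
>> >             new Dictionary<string, FileMetadata> ();
>> >
>> >         public FileMetadata Get (string path)
>> >         {
>> >             FileMetadata cached;
>> >             if (byFile.TryGetValue (path, out cached) &&
>> >                 cached.Timestamp == File.GetLastWriteTime (path))
>> >                 return cached;  // unchanged since last compile
>> >             FileMetadata fresh = Analyze (path);
>> >             byFile [path] = fresh;
>> >             return fresh;
>> >         }
>> >
>> >         FileMetadata Analyze (string path)
>> >         {
>> >             // Placeholder for the real lex/parse/semantic pass.
>> >             FileMetadata m = new FileMetadata ();
>> >             m.Timestamp = File.GetLastWriteTime (path);
>> >             return m;
>> >         }
>> >     }
>> >
>> >     class FileMetadata {
>> >         public DateTime Timestamp;
>> >     }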
>> >
>> > Hope it helps,
>> >
>> > On 7/12/06, David Srbecky <dsrbecky at gmail.com> wrote:
>> >> Hello,
>> >>
>> >> It seems that my whole Edit and Continue effort boils down just to one
>> >> thing: Being able to recompile as quickly as possible.
>> >>
>> >> The idea is that gmcs would not be used as a command-line tool but
>> >> as a library. After compilation it would keep all useful data in
>> >> memory so it could use it during an incremental compilation. For
>> >> example, I do not think it is necessary to parse again files that
>> >> have not changed.
>> >>
>> >> I actually do not know what takes so long during compilation. Can
>> >> anyone give me a rough estimate of how long the compilation stages
>> >> take, please?
>> >> - lexing, parsing, semantic analysis and such
>> >> - emission of code to System.Reflection.Emit
>> >> - saving of the assembly to disk
>> >>
>> >>
>> >> David
>> >> _______________________________________________
>> >> Mono-devel-list mailing list
>> >> Mono-devel-list at lists.ximian.com
>> >> http://lists.ximian.com/mailman/listinfo/mono-devel-list
>> >>
>> >
>> >
>>
>>
> 
> 
