[Mono-list] Silly question (for documentation)
Zac Bowling
zac at zacbowling.com
Mon Mar 6 13:41:59 EST 2006
Yeah,
I'm documenting compiler design and not the end user's experience (like
using variations of jay, flex, cup, etc., and the models and terms used
in generating CLI assemblies, as compared to generating something like
Java bytecode or traditional apps that do some kind of interpreting).
I know that CSC creates a temp directory wherever %TEMP% is pointed to,
to hold the object files, and deletes them when it's all done building,
but the compiler embedded in Visual Studio creates a local directory to
store the object files (the lovely 'obj' dir you find lurking around).
IIRC, doing it that way allows them to do incremental builds on only
the parts that change. It seems as if they build each
object/file/unit/whatever independently, then sort it all out after
everything is built by linking it all up. I'm just assuming that from
the filenames it generates, and because you see some very creative
error messages in VS when a build fails, that something like this might
be happening. Like you said, it could be its own mechanism that allows
them to store the generated back ends to file for temporary storage
instead of holding them in memory (to gain speed or something for their
'enterprise level' versions of the compiler, who knows?), but it's
interesting stuff to watch. I really like putting VS in a debugger and
walking it through step by step while watching the directory for
changes during a build, to see what it creates and deletes really
quickly.
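Incidentally, you don't even need the debugger for the directory-watching
part. Here's a rough sketch (my own, nothing official) of doing the same
thing with FileSystemWatcher; the directory argument is just whatever
scratch dir you want to observe, %TEMP% by default:

    using System;
    using System.IO;

    class WatchBuildDir
    {
        // Log every file the compiler creates, changes, or deletes
        // under the given directory while a build runs.
        static void Main(string[] args)
        {
            string dir = args.Length > 0 ? args[0] : Path.GetTempPath();
            FileSystemWatcher watcher = new FileSystemWatcher(dir);
            watcher.IncludeSubdirectories = true;
            watcher.Created += new FileSystemEventHandler(OnChange);
            watcher.Changed += new FileSystemEventHandler(OnChange);
            watcher.Deleted += new FileSystemEventHandler(OnChange);
            watcher.EnableRaisingEvents = true;
            Console.WriteLine("Watching " + dir + " -- press Enter to stop.");
            Console.ReadLine();
        }

        static void OnChange(object sender, FileSystemEventArgs e)
        {
            Console.WriteLine(e.ChangeType + ": " + e.FullPath);
        }
    }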
This is only from what I see in dumps, but CSC (and VS's embedded
version) always seemed to generate output in some circumstances that
would be really hard to generate strictly from Reflection.Emit, if it's
possible at all (but that is just from my experience using
Reflection.Emit), although it's not necessary to do it the way they do
in order to get the job done.
Not sure how credible this is, but the DotGNU wiki says this about how
their compiler works compared to ours: "..CSCC is a 3 step compile,
assemble and link compiler, while MCS is a direct codegeneration in
memory with Reflection.Emit." They might be doing something completely
different from anyone else out there (since they also target Java
bytecode), or it might just be a creative spin on the same ideas here,
but somewhere they mentioned not even having a barely complete
Reflection.Emit. It sort of seemed like this was almost like the method
VS was using with the whole 'obj' directory thing, but maybe not so
much. What I'm getting at is that the Reflection.Emit model would seem
to build from the bottom up in a more logical order, instead of
building laterally (for lack of a better word). When it comes to the
design of the compiler, this would make a big difference.
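Just to make the "in memory" model concrete, here's a tiny sketch of the
Reflection.Emit style of code generation (mine, not pulled from mcs; the
names and the trivial program are made up). Everything is built
bottom-up as live builder objects, assembly, module, type, then method
bodies, and the PE image only gets written out in one shot at the very
end:

    using System;
    using System.Reflection;
    using System.Reflection.Emit;

    class EmitSketch
    {
        static void Main()
        {
            // Define a dynamic assembly that can both run and be saved to disk.
            AssemblyName name = new AssemblyName();
            name.Name = "HelloEmit";
            AssemblyBuilder asm = AppDomain.CurrentDomain.DefineDynamicAssembly(
                name, AssemblyBuilderAccess.RunAndSave);
            ModuleBuilder mod = asm.DefineDynamicModule("HelloEmit", "HelloEmit.exe");

            // Build a type and its entry point bottom-up: type -> method -> IL.
            TypeBuilder type = mod.DefineType("Hello", TypeAttributes.Public);
            MethodBuilder main = type.DefineMethod("Main",
                MethodAttributes.Public | MethodAttributes.Static,
                typeof(void), Type.EmptyTypes);
            ILGenerator il = main.GetILGenerator();
            il.Emit(OpCodes.Ldstr, "Hello from Reflection.Emit");
            il.Emit(OpCodes.Call, typeof(Console).GetMethod(
                "WriteLine", new Type[] { typeof(string) }));
            il.Emit(OpCodes.Ret);

            type.CreateType();              // "bake" the type in memory
            asm.SetEntryPoint(main);
            asm.Save("HelloEmit.exe");      // the image hits disk only here
        }
    }

No intermediate object files anywhere; the whole thing lives as metadata
in memory until the final Save.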
--
Zac Bowling
http://zacbowling.com/
----- Message from jonpryor at vt.edu ---------
Date: Mon, 06 Mar 2006 07:15:49 -0500
From: Jonathan Pryor <jonpryor at vt.edu>
Reply-To: Jonathan Pryor <jonpryor at vt.edu>
Subject: Re: [Mono-list] Silly question (for documentation)
To: Zac Bowling <zac at zacbowling.com>
> On Mon, 2006-03-06 at 03:53 -0600, Zac Bowling wrote:
>> This is a silly question. Does anyone know of a good term or really
>> good short name that sums up the difference between a compiler that
>> uses reflection.emit like mcs does and one that uses a traditional
>> object compile, link, and execute method like DotGnu's or Microsoft's
>> C# compilers do?
>
> I think there is less difference than you think there is.
>
> First of all, "object compile, link, and execute" best describes GCC and
> CL.EXE, compilers which actually have intermediate object code and a
> linker (ld and LD.EXE, iirc) to link the object files together.
>
> CSC.EXE doesn't do this -- it's the same as mcs, in that it takes
> all .cs files at once and produces a .dll, .exe, or .netmodule file.
> There are no intermediate object files. CSC.EXE doesn't use
> System.Reflection.Emit, choosing instead to use its own internal
> mechanism (probably because CSC.EXE predates System.Reflection.Emit),
> but that's not something that's really visible to us mere users.
>
> I can't speak to DotGnu's C# compiler, but I imagine it also has a "take
> all .cs files and produce an assembly from them" mode as well; having
> separate object files is frequently considered to be annoying (since you
> can have .o files which are out of sync with each other). The only
> advantage to object files is faster compiling (less code to parse &
> compile), but mcs is already really fast...
>
> So I don't think focusing on a SRE vs. "object compile, link, execute"
> model makes sense. It's more a difference between using SRE and an
> internal IL generation mechanism, in which case this could be further
> distinguished between SRE, PEAPI, Mono.Cecil, custom code, or some other
> mechanism (generate IL directly and call ilasm?). But these are all
> differences in how the compiler is implemented, and don't impact how the
> programmer uses the compiler at all.
>
> - Jon
----- End message from jonpryor at vt.edu -----