[Mono-list] Re: C# -> Binary .. MCS to C++ cross compiler

Ben Cooley bencooley@cinematix.com
Sat, 26 Oct 2002 12:52:04 +0900

> > The problem with a CLI implementation on many platforms is simply speed.
> > You can say what you want about the performance of IBM's JIT on x86, but
> > for deployment on memory limited and performance limited embedded
> > systems, a JIT simply isn't a good idea.
> Well, it depends on the end of the spectrum you are at.  We could go as
> far as saying that C for some systems is too much, or that even a stored
> program is too much, and it's better to just wire a state machine for a
> particular problem.

Yes, I guess that's true.  I've certainly worked on projects that were of
that sort.  The projects I work on nowadays are rather different, however.

> Today you can purchase cell phones running Java and the Danger Hiptop
> also uses it, and really, you can not complain about their speed.

Yes, but a cell phone is not a 3D console game. ;-)

> Probably a JIT customized for server performance is not a good idea, but
> it's open source: you can shrink it ;-)
> > There is an especially large performance hit when you are talking about
> > platforms like the Playstation 2 or Gamecube, where the slower speed of
> > the host processor and the lack of vectorization and other complex C and
> > C++ static optimization techniques really does make a big difference.
> For those platforms you should probably run your time sensitive code in
> a thread with finely tuned assembler/C/C++, and just keep some of your
> logic in a higher, slower, non-time-sensitive portion of your code, so
> it should not be an issue anyways.

The problem is that hand-tuning only the base-level time-sensitive code gets
you so far.  Certainly there are gross optimizations you can make to the
inner loops, but the age of hand-tuned assembly is largely past (as most
hand-tuned inner loops are now implemented in hardware).

What we see in our projects is across-the-board timing issues.  We have a
physics implementation, scene traversal, culling, AI pulsing, etc. etc. etc.
If we were to "hand tune" each of these, it would constitute a large
percentage of our project.

Another aspect of console programming is the relatively slow speed of the
main processor.  The PS2 has a modified MIPS processor which runs at
about 1/4 to 1/8th the speed of modern desktop processors, so function
call overhead and the lack of good inlining and other static optimizations
are also a problem.

What we've actually seen in our work is noticeable framerate increases just
from better optimization strategies in the CodeWarrior compiler we use.
The earlier versions seemed to make poor choices on inlining and expression
optimization; the later versions, which produce better optimized C++ code,
are much stronger.

> If you care too much about performance, you are better off using the
> tools provided by the CPU vendor, and not a general purpose compiler
> (ie, not Microsoft, not Borland's, not GCC's and not Ximian's).

There are many aspects to creating a successful project.  One of the
most important in creating games is simply speed.  There are memory
management issues on consoles as well that must be addressed, as
well as load latency, streaming, etc.

But then there are programmer productivity and code quality issues.
These issues are best addressed by a more powerful, expressive, and
"safe" language (i.e. C#).  The fact that C# allows you to program
safely "most" of the time, and also permits the use of pointers and
other unsafe techniques, is a very compelling argument for using it
in game software.

> > Likewise, it is simply not practical to re-engineer all of the static
> > optimization techniques for every possible target processor into the JIT
> > system.  It will never be any more than a "good" solution for most
> > systems, and the fact that it is required to dynamically compile at load
> > time is also another problem.
> That's what an ahead-of-time compiler would achieve (See NGen's code from
> Zoltan for a proof of concept implementation).

I agree.  My project is similar to his, except that it translates the C#
directly to C++ code, bypassing the IL machine.  I'm not saying that this
is a "better" way to do it, and certainly it would not be for the majority
of applications.  But there are several advantages...

1. The resulting code is human readable.
2. The resulting code integrates well with existing C++ infrastructure
such as IDEs, source debuggers, and native linkable libs.
3. The resulting code is not affected by the negative aspects of an
intermediate translation to the IL state machine (local names,
expressions, and statements are preserved).
4. Static optimization in the back end compiler is facilitated by
divorcing the code generation from the stack based state machine in
Zoltan's project.

These may not be important to everyone, but they are important to the
types of projects we do.  And many people simply find a statically
precompiled C or C++ binary executable, produced directly from only
slightly modified translated source, a very desirable thing when speed
and code size are considered.

> In terms of speed, there is nothing stopping the engine of a JIT
> compiler from being as good as a native compiler.  Typically JIT engines
> have to make a trade off between code quality and compilation time, but
> this issue becomes a moot point with an ahead-of-time compiler.
> All of this, of course, within the scope of the .NET Framework, if you
> do not want some of its features, then yes, you could live with a
> simpler hack.

Well, a hack that works as well as the real thing isn't too bad of a hack.
We still get dynamic compilation, dynamic loading of IL modules, IL
support, etc., all of which remain precisely the same.  The only
difference is that we have a beefed up, compiled Mono runtime which has
the parts of corlib and the application statically compiled into
maximally optimized native code as "internal" calls.
It's not right for everything and everybody, but in the niche that it's
intended for, it will work fine, with no compromises as to functionality.