[Mono-list] Why did you use (or created) CLI or CLR if there is already "byte-code" for example?

Jonathan Pryor jonpryor@vt.edu
22 Sep 2002 10:54:15 -0400


I'm not sure I fully understand your question, but I'll take a stab at
it anyway.

Simple answer (inaccurate, but fun anyway): "byte code" was developed by
Sun, .NET's Intermediate Language (CIL) was developed by Microsoft.  Do
you really think Microsoft will use a Sun product if they can avoid it?
;-)

Complex answer: "byte code" is simply an implementation of an
intermediate language.  There are *lots* of intermediate languages that
have been developed over the years.  Pascal (IIRC) used to compile down
to an intermediate language, as did Visual Basic 1.0-5.0.  The only
real differences among intermediate languages are the trade-offs they
make.

Java byte-code is good for low-memory interpreted systems.  It was
originally designed for TV set-top boxes (when Java was still called
Oak).  Low memory use and easy interpretation were key design points.

Then they started running it on desktops, and we (the users) wanted it
to run fast because we don't like waiting.  So we had Microsoft, Sun,
Netscape, IBM, etc., working on ways to make Java go fast, leading us to
Just-In-Time (JIT) compilers (which themselves had been developed
before).  For years, Microsoft had the fastest JIT compiler (according
to most benchmarks I saw), but Java performance appeared to hit a wall;
the performance stagnated.

(It's been a few years since I actually paid attention to Java
performance benchmarks, so the performance may no longer be stagnating. 
This is just what I remember.)

At about the same time, Microsoft was probably working on .NET, and used
this experience in designing its new intermediate language, MSIL/CIL
(Microsoft Intermediate Language/Common Intermediate Language).  One of
the design points for CIL was that it be easy to JIT compile -- it was
designed exclusively for JIT environments, and they had no intention for
it to be interpreted, as Java was.  Different requirements == different
design and a different intermediate language.

Probably another difference (just guessing) is that they knew they
wanted to add Generics (similar to C++ templates) to the runtime, and
so designed CIL to easily support generics in the future.

This can be readily seen by considering how two numbers are added
together.  In Java, the op-code for addition is different for each type
-- there's an ``iadd'' for ints, a ``fadd'' for floats, etc.  In CIL,
there is a single ``add'' op-code.  (OK, CIL also has ``add.ovf'' for
overflow detection, but the point remains that there isn't a different
``add'' for each type.)
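
To make this concrete, here's a small hand-written CIL fragment (ilasm
syntax; my own illustration, not taken from any shipped code) that adds
a pair of int32s and a pair of float64s with the very same ``add''
op-code -- the Java byte-code equivalents would be ``iadd'' and
``dadd'':

    .assembly extern mscorlib {}
    .assembly AddDemo {}

    .method static void Main() cil managed
    {
        .entrypoint
        .maxstack 2

        ldc.i4.2      // push the int32 constant 2
        ldc.i4.3      // push the int32 constant 3
        add           // int32 addition (Java needs ``iadd'' here)
        call void [mscorlib]System.Console::WriteLine(int32)

        ldc.r8 2.5    // push the float64 constant 2.5
        ldc.r8 3.5    // push the float64 constant 3.5
        add           // float64 addition -- still just ``add''
        call void [mscorlib]System.Console::WriteLine(float64)

        ret
    }

The JIT works out from the preceding ``ldc'' instructions which kind of
addition to emit, which is exactly the flow analysis described next.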

What's this mean?  It means that Java byte-code can be easily
interpreted: the interpreter knows from the op-code which types it's
adding, and can do so directly.  CIL requires flow analysis to
determine what types are on the stack, so the JIT can emit the correct
form of addition (int vs. float vs. ...).  Java is simpler, making it
preferable for embedded environments.

However, it also means that when generics are introduced, existing code
should work properly without needing to be recompiled.  The ``add''
op-code will accept whatever is on the stack, as long as the operands
are among the accepted data types for binary numeric operations.

This is probably why Generic support can be added to CIL by adding only
a few op-codes 
(http://research.microsoft.com/projects/clrgen/generics.pdf).
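
(Purely as a rough, hypothetical sketch -- the notation here is mine,
not necessarily the paper's -- generic CIL along those lines might
declare a method like this, with ``!!T'' referring to the method's own
type parameter:

    // A generic identity method; mostly new metadata rather than new
    // op-codes.
    .method static !!T Identity<T>(!!T arg) cil managed
    {
        ldarg.0
        ret
    }

Arithmetic op-codes like ``add'' don't need per-type variants added,
which is a big part of why the change stays so small.)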

Additionally, CIL has features that Java byte-code doesn't have, such
as versioning constructs and explicit overriding of virtual functions.
(I'm no JVM expert, though, so Java byte-code could contain these and I
wouldn't know about it; given the Java language, I highly doubt it.)
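
As an example of the explicit-overriding point, CIL has a ``.override''
directive.  The sketch below is my own made-up illustration (interface
and method names invented), roughly what a C# explicit interface
implementation compiles down to:

    .assembly extern mscorlib {}
    .assembly OverrideDemo {}

    .class interface private abstract auto ansi IGreeter
    {
        .method public hidebysig newslot abstract virtual
                instance void Greet() cil managed {}
    }

    .class private auto ansi beforefieldinit Greeter
           extends [mscorlib]System.Object
           implements IGreeter
    {
        // ``.override'' binds this body to IGreeter::Greet by explicit
        // declaration, regardless of the method's own name; Java
        // byte-code matches overrides by name and signature alone.
        .method private hidebysig newslot virtual final
                instance void GreetImpl() cil managed
        {
            .override IGreeter::Greet
            ret
        }

        .method public hidebysig specialname rtspecialname
                instance void .ctor() cil managed
        {
            ldarg.0
            call instance void [mscorlib]System.Object::.ctor()
            ret
        }
    }

(Assemble with ``ilasm /dll'' since there's no entry point.)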

Hope this answers your question.

 - Jon


On Sat, 2002-09-21 at 20:18, Lenin Villeda wrote:
> I want to know why did they (the people who made .NET) use (or created) CLR (or CLI) if there is "byte-code" for example?