[Mono-list] Philosophical Question - Why .NET on UNIX?

Alan McGovern alan.mcgovern at gmail.com
Tue Jul 3 01:37:37 EDT 2007


>
> One obvious security vulnerability (presumably of all .Net implementations,
> because all implementations generate IL), is that the very IL will regularly
> pave the way to security breaches. Likewise, IL makes it very difficult if
> not impractical to protect intellectual property.
>

I wouldn't consider that a security vulnerability at all. There are ways to
obfuscate and/or encrypt code if you so wish. IL itself may be easier or
harder to reverse engineer, I don't know. However, there are many techniques
available to make it hard for people to decompile or reverse engineer your IL.

> An example of the former would be an enterprise application
> submitting password access for whatever purpose. Even if encryption
> libraries are called, if the IL falls into the wrong hands, the data
> processed by IL calls into encryption libraries make it a relatively easy
> matter to break the ostensible security of the system.
>

Not really. All you'd find out is that your application sends a SHA1
hash of a string the user enters to a server, which then checks whether it
is valid. I'd hardly call that a security breach. Of course, if
you're hardcoding passwords *into* your enterprise application then you have
a much bigger problem on your hands.
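To be concrete, here's a rough sketch of the kind of scheme I mean (the
names and strings are made up for illustration):

using System;
using System.Security.Cryptography;
using System.Text;

class HashDemo
{
    // Hash the password on the client; only the hash goes over the wire.
    static string Sha1Hex (string input)
    {
        using (SHA1 sha = SHA1.Create ())
        {
            byte[] hash = sha.ComputeHash (Encoding.UTF8.GetBytes (input));
            return BitConverter.ToString (hash).Replace ("-", "");
        }
    }

    static void Main ()
    {
        // Decompiling the IL tells an attacker the scheme, not the secret.
        Console.WriteLine (Sha1Hex ("not-a-real-password"));
    }
}

All the IL gives away is that a SHA1 hash gets sent; the password itself
never appears in it.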


> The .Net environment imposes relatively basic security measures. Yet keys
> can be stolen, signatures can be impersonated, new assemblies can be
> distributed to impersonate others... etc
>

I don't believe it's possible to easily impersonate a signed .NET
library. Do you have any links for that? It'd be interesting reading!

> I don't think any .Net engineer claims the security is invulnerable to a
> variety of possible schemes. I don't believe it can be truly said that
> malware of any kind, or even flaws of any kind, are wholly obstructed from
> affecting our systems.
>

No, but it's still safer than buffer overruns in C. Nothing is secure. Show
me one 100% secure system for unmanaged code. (You're not allowed to choose
that system that uses a USB woggle thing for security.)

> The best applications I use have rarely been updated. They sit behind a
> firewall. I fully trust them because any problems would be adverse to the
> purposes of the vendor, and because the integrity of the applications is a
> long proven matter. I don't see how you can better, more practical security
> than that.
>

So, the only way you judge an application's security is that it has a history
of being secure? Or do you call an application secure when you don't update
it? I fully trust a lot of applications I use, or I wouldn't have them
installed, but I still wouldn't put it past them to have an easily
exploitable security hole.

> GetPixel() and SetPixel() have to be used for some purposes, but so far I
> have never had to use them in performance critical code.
>
> Unsafe code is another thing to stay away from however if end users are to
> enforce in-house restrictions based on security claims made for .Net.


Funnily enough, I never mentioned unsafe code. It can all be done very safely
in managed code: http://msdn2.microsoft.com/en-us/library/5ey6h79d.aspx

By "natively" (in the .Net sense), you mean the code runs in an environment
> which relates to the native system. IL *does not* compile into native calls
> of the OS.


No, what I mean is that the IL is compiled to native code and that native
code is then executed. Therefore the application is "native" when it is
executed. You are not interpreting IL on the fly.


> .Net implementations on Windows are far slower than their native-compiled
> counterparts. Some claims are made that .Net math operations are something
> like 80% as fast as native operations, but these claimed cases are only
> possible if no boxing or unboxing is involved. Still, that's a substantial
> performance disadvantage. And if any boxing or unboxing is required, then of
> course the disparities are going to be huge.
>

I think I've managed to avoid unwanted boxing in pretty much every program
I've written in C#, except in cases where I didn't care enough to write the
code needed to avoid it, for example methods which are called two or
three times in the lifespan of the program.
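For anyone wondering what avoiding it looks like, the usual fix is just to
prefer the generic collections. A quick sketch:

using System;
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main ()
    {
        // ArrayList stores object, so every int added gets boxed on the
        // heap and has to be unboxed with a cast on the way out.
        ArrayList boxed = new ArrayList ();
        boxed.Add (42);              // boxes the int
        int a = (int) boxed[0];      // unboxes it again

        // List<int> stores the ints directly: no boxing at all.
        List<int> unboxed = new List<int> ();
        unboxed.Add (42);            // no per-element allocation
        int b = unboxed[0];

        Console.WriteLine (a + b);
    }
}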

> There are .Net BrowseForFolder dialogs in Visual Studio which, on my brand
> new Vista system, take 9 seconds to display file content. This is a huge
> delay, and something I certainly would never care to suffer in my own
> applications. The early days of Delphi 1 had dialogs which were
> instantaneous on Win95. So many years and orders of hardware efficiency
> later, to suffer these delays is to me, incredible.


Generally speaking, I get a BrowseForFolder dialog to appear within
milliseconds of clicking the required button. Your system may need to be
checked over. It's no flaw in .NET or Mono that your system is slow. It may
be worthwhile checking for malware.


> Well thanks for the tip, but imho, "managed" is quite an overstated virtue
> or advantage. Few C# developers probably truly know for instance what the
> best way is to implement dispose patterns. There is conflicting information
> on it. Visual Studio doesn't agree with Richter. Richter doesn't agree with
> another source I rely on. Then again, how many can expertly reply when and
> how to implement finalize?
>

Once again, that's not an issue with the .NET framework. I can argue much
more usefully that there are a huge number of developers in C/C++ who can't
write code that doesn't leak. It's much, much easier to avoid that in C#. All
you do is implement IDisposable and override the finalizer in any class that
holds unmanaged resources. That's when and where.
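The canonical pattern is only a few lines. A sketch (the class name is
invented, and the AllocHGlobal call just stands in for a real unmanaged
resource):

using System;
using System.Runtime.InteropServices;

class UnmanagedHolder : IDisposable
{
    IntPtr buffer = Marshal.AllocHGlobal (1024); // stand-in resource
    bool disposed;

    // Deterministic cleanup: callers use this, usually via a using block.
    public void Dispose ()
    {
        Dispose (true);
        GC.SuppressFinalize (this); // already cleaned up, skip finalization
    }

    protected virtual void Dispose (bool disposing)
    {
        if (disposed)
            return;
        // Only touch other managed objects when disposing == true;
        // unmanaged resources get freed on both paths.
        Marshal.FreeHGlobal (buffer);
        buffer = IntPtr.Zero;
        disposed = true;
    }

    // Safety net for when Dispose is never called.
    ~UnmanagedHolder ()
    {
        Dispose (false);
    }
}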

> What was the ostensible technical challenge implementing Free or FreeAndNil
> in C++Builder or Delphi? There was no challenge or mystery when and how to
> use them. When you freed an object the memory was instantly available to
> other processes. FreeAndNil obviously freed the memory and wrote null/nil to
> it. When you wrote an aggregate component/class, you routinely moved to your
> destructor and freed subcomponents in the reverse order you created them.
> These things were routine.


Now you don't need to do that. Funnily enough, quite a lot of programmers
can't get this right. The reason is that objects can have quite complex life
cycles. Quite a lot of time is wasted getting this right, whereas with a
managed language you can forget completely about this kind of issue. There
is always a challenge in making sure that your objects are freed correctly
under all circumstances.


> What is the ostensible advantage of ForEach? Underneath your call to ForEach
> is an iterative process which has to make the well known, routine call to
> iterate count minus 1. The same thing has to be done.


Yes, but it's *nicer*. It's syntactic sugar. There's no point to it as such.
You could just as well argue that we don't need a "for" loop and a "while"
loop because they both do the same thing. One of them could easily replace
the other.
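To illustrate, here's roughly what the compiler expands a foreach into
(hand-written and simplified, but it's the same shape):

using System.Collections.Generic;

class ForeachDemo
{
    static int SumForeach (List<int> list)
    {
        int sum = 0;
        foreach (int i in list)
            sum += i;
        return sum;
    }

    // Approximately the expansion the compiler produces for the loop above.
    static int SumExpanded (List<int> list)
    {
        int sum = 0;
        List<int>.Enumerator e = list.GetEnumerator ();
        try
        {
            while (e.MoveNext ())
                sum += e.Current;
        }
        finally
        {
            e.Dispose ();
        }
        return sum;
    }
}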


> There's nothing managed for you at this level. If the count changes before
> iteration is complete, the only thing you can hope to account for that is
> the compiler


No, you'll get an exception, as it's illegal to modify a collection while
iterating over it. You will find out very quickly if your collection is
modified while you're enumerating it.
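That's easy to demonstrate:

using System;
using System.Collections.Generic;

class ModifyWhileEnumerating
{
    static void Main ()
    {
        List<int> list = new List<int> ();
        list.Add (1);
        list.Add (2);

        try
        {
            foreach (int i in list)
                list.Remove (i);    // mutating the collection mid-enumeration
        }
        catch (InvalidOperationException)
        {
            // The enumerator notices the change on the next MoveNext.
            Console.WriteLine ("Collection was modified during enumeration");
        }
    }
}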


> Now, if you're trying to tell me Mono runs as fast as Cocoa on OS X, I can
> be made a believer of that part of your assertion. "Managed" code is little
> convenience -- but in many cases it certainly is one. In the case of dispose
> or finalize patterns, ambiguities can lead to further problems, and
> incomplete familiarity with these ambiguities can lead to inferior design
> and poor performance.


Cocoa is a programming environment, not a language. I believe you mean
"Objective-C". I never said that managed languages ran faster than unmanaged
ones. Once again, you can't blame Mono or .NET for the programmer
misunderstanding the Dispose or Finalize pattern. It's not ambiguous; it's
a fairly simple concept, much simpler than pointers. Of course, if the
programmer were equally unfamiliar with C or C++, I'm sure they'd write even
worse code than they would in C# ...

> If we have to breach the barriers of ostensibly "safe" code to *hope* to
> achieve ostensibly comparable speed, hardly is it the case then that .Net
> delivers what I consider to be acceptable performance. Your remarks about
> GetPixel() and SetPixel(), I take to be agreement.


Nope, I never once mentioned unsafe code. GetPixel and SetPixel are slow
because there are several layers of indirection involved in getting at the
actual data. The LockBits method provides a way to get at the actual data:
copy it into a managed byte[], and then, in managed code, access that array
as you wish. Of course you *could* use unsafe code and pointers if you wanted
to, which would be slightly faster.
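A rough sketch of the technique, with error handling omitted (the fill
pattern is just for illustration):

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

class LockBitsDemo
{
    static void Main ()
    {
        using (Bitmap bmp = new Bitmap (256, 256, PixelFormat.Format32bppArgb))
        {
            Rectangle rect = new Rectangle (0, 0, bmp.Width, bmp.Height);
            BitmapData data = bmp.LockBits (rect, ImageLockMode.ReadWrite,
                                            bmp.PixelFormat);

            // One bulk copy into a managed array, instead of a slow
            // GetPixel round-trip per pixel.
            int bytes = Math.Abs (data.Stride) * bmp.Height;
            byte[] pixels = new byte[bytes];
            Marshal.Copy (data.Scan0, pixels, 0, bytes);

            // Work on the array in pure managed code, e.g. fill with
            // opaque red (the layout is BGRA for Format32bppArgb).
            for (int i = 0; i < bytes; i += 4)
            {
                pixels[i + 2] = 255; // red
                pixels[i + 3] = 255; // alpha
            }

            Marshal.Copy (pixels, 0, data.Scan0, bytes);
            bmp.UnlockBits (data);
        }
    }
}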


> As to your final remarks, you do not understand me. Assemblies call into
> .Net libraries. What I am proposing is to map those calls into libraries
> supporting the .Net calls, but compile the output into native calls into the
> operating system -- using native OS dialogs, whatever.


C# compiles into native code when run. What you're talking about is writing
an implementation of .NET which is just a thin wrapper around native OS
calls, which is what I was asking about earlier.

> I won't have my users waiting 9 seconds for a BrowseForFolder or
> BrowseForFile dialog to display content. I would even anticipate getting
> support calls for that.


And I'd laugh if you did. My system manages to display a
FolderBrowserDialog in about 250ms (give or take). I think that's perfectly
acceptable.

Using managed languages removes the need to worry about memory management:
you can't leak managed memory. If you need to access unmanaged resources,
then you have several good patterns for safely disposing of them. If someone
can't apply those simple patterns, then they haven't a hope in hell of
managing their own resources in C/C++. You also gain type safety, array
bounds checking and miscellaneous other things.

What it also gives you is slightly higher memory usage and slightly slower
code. If anyone tried to write a commercial video encoder in pure managed
C#, I'd have to laugh at them; it'd never work. If anyone said they were
writing a new enterprise GUI application in C, I'd laugh at them too, because
I could write the same GUI in half the time with half the bugs in C#, simply
because I don't have to worry about memory management issues and I have a
rich API at my disposal. That gives me more time for bug quashing and
feature extending.


So, if you can stand the performance reduction compared to C when balanced
against the productivity gain of moving to C#, then use C#. Otherwise use
your native language, with the benefits and pitfalls that entails.

Alan.