[Mono-list] Mono and .Net Floating Point Inconsistencies

russell.kay at realtimeworlds.com
Fri Jan 16 11:24:05 EST 2009

If they are running under different architectures[*] then you will
definitely see differences in floating point arithmetic.

For example, 32-bit x86 code uses the x87 floating point unit (FPU) for
all float and double calculations, whereas 64-bit x64 code uses SSE
(SIMD) instructions. The x87 FPU computes intermediate results at 80-bit
extended precision, while SSE rounds to the declared 32- or 64-bit width
at each step, so the two can disagree in the last bits.

This leads to differences across architectures, so if you were checking
Mono x86 against Mono x86 there should be little divergence. However,
if you are comparing .NET x86 against Mono x86 there may well be
differences, because library functions are implemented differently (to
different accuracies).
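To make the "different accuracies" point concrete, here is a minimal sketch (in Python for illustration; the thread's code is C#) of the same series accumulated at single and double precision, which produces two different answers:

```python
import struct

def to_float32(x):
    # round a double to IEEE-754 single precision and back
    return struct.unpack('<f', struct.pack('<f', x))[0]

# accumulate the same series at two precisions
sum64 = 0.0
sum32 = 0.0
for _ in range(10):
    sum64 = sum64 + 0.1                        # double-precision accumulation
    sum32 = to_float32(sum32 + to_float32(0.1))  # single-precision accumulation

print(sum64)           # 0.9999999999999999
print(sum32)           # 1.0000001192092896
print(sum64 == sum32)  # False
```

The same effect appears whenever two runtimes evaluate the same expression at different intermediate widths.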

If you need them all to be the same, you would be best to add traces to
the code to track down where things start to diverge. I would suspect
any library functions until you have convinced yourself that they are
not the source of the problem.
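One way to implement such traces, sketched here in Python for illustration: log the exact bit pattern of each value rather than a rounded decimal print-out, since two values can print identically while differing in the last bits.

```python
import struct

def bits(x):
    # exact IEEE-754 bit pattern of a double, suitable for cross-machine diffs
    return struct.pack('<d', x).hex()

a = 0.1 + 0.2   # 0.30000000000000004
b = 0.3

print(f"{a:.4f}", f"{b:.4f}")  # both print 0.3000
print(bits(a))                 # differs from bits(b) in the last bit
print(bits(b))
```

Diffing such hex dumps from two machines pinpoints the first operation where they disagree.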

Also, some optimisations may reorder floating point code, which can
change the result in subtle ways.
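Reordering really can change results, because floating point addition is not associative. A tiny self-contained illustration (Python, but the arithmetic is the same in any IEEE-754 double):

```python
# floating point addition is not associative: grouping changes the answer
a = (1e16 + 1.0) - 1e16   # the 1.0 is absorbed by 1e16 and lost
b = (1e16 - 1e16) + 1.0   # the 1.0 survives

print(a)  # 0.0
print(b)  # 1.0
```

An optimiser that regroups a summation like this is mathematically sound over the reals but changes the floating point result.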

Basically, it is a bad idea to rely on bit-exact results across machines
(even between Intel CPUs of different generations) in a networked
situation. You have to relax the requirement for exact accuracy and
settle for something that is "good enough".
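In practice, "good enough" usually means comparing with a tolerance instead of exact equality. A minimal Python sketch (the tolerance value here is an arbitrary choice for illustration, not a recommendation):

```python
import math

a = 0.1 * 3   # 0.30000000000000004
b = 0.3

print(a == b)                            # False: exact comparison fails
print(math.isclose(a, b, rel_tol=1e-9))  # True: close enough to agree
```

The right tolerance depends on how much error your simulation accumulates before the comparison.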

Game programmers have been living with this for quite some time now.


[*] You will also see differences between CPUs from different
manufacturers (Intel vs. AMD) and between CPUs of different generations
(e.g. a Pentium 4 against a later design).

-----Original Message-----
From: mono-list-bounces at lists.ximian.com
[mailto:mono-list-bounces at lists.ximian.com] On Behalf Of ddambro
Sent: 13 January 2009 22:29
To: mono-list at lists.ximian.com
Subject: Re: [Mono-list] Mono and .Net Floating Point Inconsistencies

kuse wrote:
> ddambro wrote:
>> Hello,
>> I have a floating point heavy simulation written in C# that I am
>> interested in running in Linux.  The simulator runs fine in Mono, but
>> I've noticed that when I take the same binary and run it with the same
>> inputs, it produces different outputs depending on whether it is run
>> on Mono or .NET.  As far as I can tell, these inconsistencies are the
>> result of slight differences in the floating point calculations.  It
>> is important to my experiments that an arbitrary machine (running .NET
>> or Mono) can reproduce the same results as another arbitrary machine.
>> Thus, I am curious as to whether this is a known issue and if there is
>> any way to force .NET and Mono to produce the same output with respect
>> to floating point calculations.
>> Thanks,
>> David
> Provide a simple test case so other people can test it and try to find
> what's causing this.

Unfortunately, the program in question is fairly large, complex, and
multi-threaded, so it's difficult to pinpoint the exact section of code
where the two begin to diverge.  I'll keep looking and will certainly
post an example if I find one.  For now, though, are there any known
issues that cause these inconsistencies?  I make use of many functions
in System.Math, and do some double to float casting; could either of
these contribute to my problems?
Sent from the Mono - General mailing list archive at Nabble.com.

Mono-list maillist  -  Mono-list at lists.ximian.com



