[Mono-list] Mono and .Net Floating Point Inconsistencies

Dallman, John john.dallman at siemens.com
Fri Jan 16 12:46:09 EST 2009


> when I take the same binary and run it with the same inputs it 
> produces different outputs if it is run on mono and .Net.

This is with Mono on Linux, and .NET on Windows? The executable 
is 32-bit .NET code? 

I suspect that you've hit a misfeature that exists for most 
floating-point code on 32-bit x86 Linux. It goes like this.

The floating-point registers on 32-bit x86 (the "x87" registers)
are quite large: 80 bits. That allows more precision than
conventional "double" variables, which are 64 bits. There's a
control register for setting the precision the processor should
evaluate to, with options of 32 bits ("float"), 64 bits ("double")
or 80 bits ("long double"). The idea was that you set up the
precision you wanted to use, and that lower precision was faster.
The power-up default is 80 bits, and Linux doesn't change that.
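
If you want to check what your process is currently set to, you
can read the control word back with the _FPU_GETCW macro from
glibc's fpu_control.h. A minimal sketch (Linux on x86 only; the
decoding is mine):

#include <stdio.h>
#include <fpu_control.h>

int main(void)
{
    fpu_control_t cw;
    _FPU_GETCW(cw);                 /* read the x87 control word */

    switch (cw & _FPU_EXTENDED) {   /* bits 8-9: precision control */
    case _FPU_EXTENDED: printf("80-bit (long double)\n"); break;
    case _FPU_DOUBLE:   printf("64-bit (double)\n");      break;
    case _FPU_SINGLE:   printf("32-bit (float)\n");       break;
    }
    return 0;
}

On 32-bit x86 Linux you should see the 80-bit default.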

However, lower precision is not faster any more. With tens of 
millions of transistors available, rather than tens of thousands, 
chip designers can use different floating-point methods that are 
much faster, whatever the precision setting. So using 80-bit 
precision is obviously the right thing to do? Well, no. 

You see, doubles in memory are 64 bits. If you use 80-bit 
precision in the floating-point unit, then whenever intermediate 
values get saved out to memory, they get rounded off to 64 bits, 
and extended back to 80 bits when they're reloaded. Those 
extension bits won't be the same as the bits that were discarded. 
Sadly, this is the way that floating point behaves by default on 
32-bit x86 Linux. It introduces some noise into the results; one 
can't so much say that they are wrong as that they aren't quite 
consistent with other platforms that use 64 bits consistently 
throughout the calculation process. 
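
You can see the spill effect directly. A sketch, using the
register's wider exponent range because that's easier to show
than the extra significand bits; whether it reproduces depends on
compiler and flags (I'm assuming gcc -m32 -O0 on x86 Linux, with
the 80-bit default in force):

#include <stdio.h>

int main(void)
{
    volatile double big = 1e308;  /* volatile defeats constant folding */

    /* Held on the x87 stack, big * 10.0 doesn't overflow, and
       dividing back down recovers 1e308. */
    double kept = big * 10.0 / 10.0;

    /* Forcing the intermediate through a 64-bit memory slot rounds
       it to a true double, where 1e309 overflows to infinity. */
    volatile double spilled = big * 10.0;
    double back = spilled / 10.0;

    printf("kept = %g, spilled = %g\n", kept, back);
    return 0;
}

Typically kept comes out as 1e+308 and spilled as inf. (A caveat:
setting the precision control to 64 bits narrows only the
significand; the exponent range stays extended until a value is
stored, so results right at the edge of overflow or underflow can
still differ slightly.)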

SPARC, PowerPC, ARM, and 64-bit x86 use 64-bit floating point 
consistently. 32-bit Windows used to do floating point the same 
way as 32-bit Linux does, but Microsoft changed it, so that when 
a Windows program starts up, the floating point has already been 
set to 64-bit precision throughout. That's much more consistent 
with other platforms, and is an interesting instance of MS doing 
something arguably "right even though awkward" instead of 
"consistent with past errors". 

But why not just change the precision to whatever you need, as 
you go? Well, changing the floating-point precision is Very Slow 
on modern processors: the pipeline has to be completely flushed, 
at minimum. It's much easier for it to be set at program start-up, 
and used consistently after that. 

To get this to work right in Mono, you need to write a small C 
function and call it with P/Invoke on Linux. That means your 
program won't be pure .NET code any more; how best to cope with 
that depends on your program.

The code to be run in the C function is:

#include <fpu_control.h>

/* Mask the Denormal, Underflow and Inexact exceptions, leaving
   Invalid, Overflow and Zero-divide active. Set precision to
   standard doubles, and round-to-nearest. */
fpu_control_t desired = _FPU_MASK_DM | _FPU_MASK_UM | _FPU_MASK_PM |
                        _FPU_DOUBLE | _FPU_RC_NEAREST;
_FPU_SETCW(desired);

This needs to be a C function because everything in upper case in 
that code is a macro from fpu_control.h. You may not want to 
enable floating-point traps at all, in which case all six 
exceptions get masked and the code becomes:

#include <fpu_control.h>

/* Mask all six exceptions, so nothing traps. Set precision to
   standard doubles, and round-to-nearest. */
fpu_control_t desired = _FPU_MASK_IM | _FPU_MASK_DM | _FPU_MASK_ZM |
                        _FPU_MASK_OM | _FPU_MASK_UM | _FPU_MASK_PM |
                        _FPU_DOUBLE | _FPU_RC_NEAREST;
_FPU_SETCW(desired);
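
Packaged up, the whole thing is only a few lines. A sketch, with
file, library and function names that are just my inventions:

/* fixfpu.c -- build: gcc -m32 -shared -fPIC -o libfixfpu.so fixfpu.c */
#include <fpu_control.h>

void mono_fixup_fpu(void)
{
    /* Mask all exceptions, 64-bit precision, round-to-nearest. */
    fpu_control_t desired = _FPU_MASK_IM | _FPU_MASK_DM | _FPU_MASK_ZM |
                            _FPU_MASK_OM | _FPU_MASK_UM | _FPU_MASK_PM |
                            _FPU_DOUBLE | _FPU_RC_NEAREST;
    _FPU_SETCW(desired);
}

On the managed side, a matching declaration along the lines of
[DllImport("fixfpu")] static extern void mono_fixup_fpu();
called once at start-up should do it.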

It would be good, really, if Mono had a standard call for setting 
up consistent floating-point on all its platforms. 

-- 
John Dallman
Parasolid Porting Engineer

Siemens PLM Software
46 Regent Street, Cambridge, CB2 1DP
United Kingdom
Tel: +44-1223-371554
john.dallman at siemens.com
www.siemens.com/plm

