[Mono-dev] Investigating mono crashes on linux 4.1

Taloth Saldono talothsaldono at gmail.com
Tue Aug 11 17:39:39 UTC 2015


Time for a status update on this issue. Sadly no good news at all.

Long story short on my mono tests: I managed to get no-managed-allocator
working, but disabling TLABs didn't have any noticeable effect. Neither did
adding a memory barrier to the mono_100ns_ticks method, nor several other
attempts.
Btw, I already checked Boehm earlier: the test runs a lot longer, but
succeeds without a crash.

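(For reference, the barrier experiment looked roughly like this; a sketch
with a hypothetical name, assuming the clock_gettime(CLOCK_MONOTONIC) path,
not the actual mono source:)

    #include <stdint.h>
    #include <time.h>

    /* bracket the clock read in mono_100ns_ticks with full barriers */
    int64_t
    mono_100ns_ticks_with_barriers (void)
    {
            struct timespec ts;
            __sync_synchronize ();                 /* full barrier before */
            clock_gettime (CLOCK_MONOTONIC, &ts);  /* goes through the vdso */
            __sync_synchronize ();                 /* full barrier after */
            return ((int64_t) ts.tv_sec * 10000000) + (ts.tv_nsec / 100);
    }
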
At that point I decided it was better to investigate the kernel again to see
what really changed under the hood, in the hope of finding out where I should
focus my investigation in mono.

I've been looking at the gcc-compiled assembly code, and also checked at
runtime (the alternatives are patched in at boot): the lfence instruction is
properly emitted, so the theory that alternative_2 doesn't emit it properly
is off the table too.
After a few days I discovered that the commit indirectly caused gcc to
inline another function (vread_pvclock), which obviously changed the
assembly code.

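(For context, the barrier in question is the fence in front of rdtsc that
keeps the timestamp read from overtaking earlier loads. A userspace sketch
of the pattern, assuming x86-64 and gcc-style inline asm; read_tsc_fenced is
a made-up name:)

    #include <stdint.h>

    /* the fenced-rdtsc pattern; on lfence-capable cpus the kernel's
       alternative_2 patches in lfence, on others mfence */
    static inline uint64_t
    read_tsc_fenced (void)
    {
            uint32_t lo, hi;
            /* lfence keeps rdtsc from executing ahead of earlier loads */
            __asm__ __volatile__ ("lfence; rdtsc"
                                  : "=a" (lo), "=d" (hi) : : "memory");
            return ((uint64_t) hi << 32) | lo;
    }
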
Ever since then, I've been playing around with those vdso methods; I've
quite literally compiled the kernel dozens of times.
With __always_inline on vread_pvclock, mono crashes; with noinline on
vread_pvclock, it doesn't (see the attribute sketch below). The weirdest
part is that the pvclock path isn't even used during my tests.
Inlining increases the size of the vdso object, but if I force noinline and
add nops to pad the vdso to the same size, mono doesn't crash either.
Another theory shot down.
I've looked at the assembly-code differences between compiles, but so far I
haven't been able to find a functional difference. I'm planning to go over
the entire assembly code one more time to see if I missed something, but
that's a rather tedious process.

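(For completeness, the inline toggling above maps to these gcc attributes; a
trivial standalone illustration, not the actual kernel source:)

    #include <stdio.h>

    /* the kernel's __always_inline: force inlining even when gcc would
       otherwise refuse */
    static inline __attribute__ ((always_inline)) int
    fake_clock_inlined (void)
    {
            return 42;
    }

    /* the kernel's noinline: keep it as an out-of-line function */
    static __attribute__ ((noinline)) int
    fake_clock_outlined (void)
    {
            return 42;
    }

    int
    main (void)
    {
            printf ("%d %d\n", fake_clock_inlined (), fake_clock_outlined ());
            return 0;
    }
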
So I'm pretty much clueless about how this could possibly affect the way
mono runs, yet it does.

I'm aware my kernel tests don't directly involve mono, but if there is
anyone with some expertise willing to hop in or give suggestions on how to
proceed, please do tell.
I'm already miles outside my area of expertise, and I fear what happens if
this kernel version becomes mainstream and hundreds of mono users start
being affected by odd, untraceable crashes.


On Fri, Jul 24, 2015 at 3:19 PM, Rafael Teixeira <monoman at gmail.com> wrote:

> AFAIR, memory barriers are CORE to the new sgen GC logic for managing
> what/when to collect. Not sure if you can still build with the conservative
> (Boehm) GC to compare results, or gain more time to find the real
> solution...
>
> On Thu, Jul 23, 2015, 18:06 Taloth Saldono <talothsaldono at gmail.com>
> wrote:
>
>> Hey Greg,
>>
>> With my current test case it crashes anywhere between 0.1 and 30 sec,
>> occasionally longer.
>> If I run my test case till it crashes, 10 times in a row, measuring the
>> total run time:
>> vanilla = 3m9.216s 2m26.571s 2m31.168s 3m8.670s
>> clear-at-gc = 1m50.81s 2m01.85s 1m10.21s 1m10.21s
>> disable-minor = 0m16.74s 0m16.32s (duh, more major collections. the
>> reverse happens if you increase the nursery size.)
>>
>> So.... yeah, clear-at-gc actually makes it worse. ;)
>>
>> It quite possibly has something to do with the GC, but I'm trying to find
>> the link with that rdtsc instruction.
>> Assuming the tsc isn't used in some convoluted way, that would mean the
>> culprit is a missing memory barrier somewhere.
>>
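>> (To illustrate the class of bug I mean: a publish/consume pattern where a
>> missing barrier lets the flag become visible before the data. A
>> hypothetical sketch, nothing to do with mono's actual code:)
>>
>>     int data;
>>     int ready;
>>
>>     /* writer thread: publish the data, then raise the flag */
>>     void publish (void)
>>     {
>>             data = 42;
>>             __sync_synchronize ();  /* without this, the compiler (or a
>>                                        weakly-ordered cpu) may reorder
>>                                        the two stores */
>>             ready = 1;
>>     }
>>
>>     /* reader thread: without the matching barrier it may see ready == 1
>>        yet still read a stale value of data */
>>     int consume (void)
>>     {
>>             if (ready) {
>>                     __sync_synchronize ();
>>                     return data;
>>             }
>>             return -1;
>>     }
>>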
>> Taloth
>>
>>
>> On Thu, Jul 23, 2015 at 2:11 PM, Greg Young <gregoryyoung1 at gmail.com>
>> wrote:
>>
>>> We have seen some similar crashes of mono on linux (ubuntu and amazon
>>> linux).
>>>
>>> One thing we have done that greatly reduces the frequency of the crashes
>>> so far (removing 95%+ of them) is setting MONO_GC_DEBUG=clear-at-gc.
>>>
>>> There is an issue here as well
>>> https://bugzilla.xamarin.com/show_bug.cgi?id=18151 that is likely
>>> related.
>>>
>>> On Thu, Jul 23, 2015 at 3:03 PM, Taloth Saldono <talothsaldono at gmail.com>
>>> wrote:
>>> > Hey guys,
>>> >
>>> > (Initially I incorrectly posted this to the mono-list, so for those
>>> > receiving this message twice, my apologies.)
>>> >
>>> > I'm looking for a mono expert on the managed threading system;
>>> > hopefully you can give me a pointer to where to look.
>>> >
>>> > The problem a couple of my users experience is that, since linux kernel
>>> > 4.1, mono crashes in a reproducible manner. (Using test case bug-18026
>>> > in a loop, which is a threadpool stress-test.)
>>> >
>>> > A similar problem occurred in 3.13.0, but that was fixed by backporting
>>> > some commits in the ubuntu kernel. (See
>>> > https://bugzilla.xamarin.com/show_bug.cgi?id=29212)
>>> >
>>> > Initially I believed that in 4.1 those commits were reverted, but tests
>>> > indicated that wasn't the cause.
>>> > So I did a full bisect on linux 4.0-4.1 on a 64-bit Ubuntu 14.04.2
>>> > Virtualbox. (~13 compiles of the kernel, took a couple of days)
>>> > And it ended up on
>>> > https://github.com/torvalds/linux/commit/c70e1b475f37f07ab7181ad28458666d59aae634.
>>> >
>>> > The problem seems to cause NullReferenceExceptions and possibly native
>>> > SIGSEGVs in a variety of places. (I can dump some stacktraces if
>>> > desired, but I suspect that won't be helpful because the corruption is
>>> > likely caused elsewhere.)
>>> >
>>> > To me it seems impossible that reading the tsc could in any way result
>>> > in the nullrefs, so my guess would be that it's a side-effect of the
>>> > memory barrier. From what I understand from the commit, the
>>> > 'mfence+lfence' sequence changed to 'mfence or lfence' (depending on
>>> > what the cpu supports), and mfence=lfence+sfence (not entirely true,
>>> > but close), so I have no idea what the heck is going on there.
>>> > If I were to venture a guess: somewhere, indirectly, mono unknowingly
>>> > relies on that barrier being there.
>>> > Theoretically that means other native apps could experience the same
>>> > problem, but I would've expected reports about that already.
>>> >
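>>> > (For reference, the three x86 fences as compiler intrinsics; just a
>>> > cheat-sheet sketch, nothing kernel-specific:)
>>> >
>>> >     #include <x86intrin.h>
>>> >
>>> >     void fences (void)
>>> >     {
>>> >             _mm_lfence ();  /* orders loads; on intel it also waits for
>>> >                                prior instructions, which is why it can
>>> >                                fence rdtsc */
>>> >             _mm_sfence ();  /* orders stores */
>>> >             _mm_mfence ();  /* orders both loads and stores */
>>> >     }
>>> >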
>>> > My experience in these matters is pretty much non-existent, but dumping
>>> > issues on devs is the least productive way to get them fixed, so I try
>>> > to investigate as far as I can; especially since it involves an issue
>>> > that could be caused by either mono or the kernel.
>>> >
>>> > So my question is: is there a likely candidate in mono where it uses
>>> > the tsc (possibly for profiling), where the changed barrier could cause
>>> > this odd behavior? And obviously, is there anything in particular I
>>> > could try to narrow this down further?
>>> >
>>> > Almost forgot: I did the bisect using mono 4.0.2.5, but I tested the
>>> > nightly version as well.
>>> >
>>> > Thank you for your time.
>>> >
>>> > Taloth
>>> >
>>>
>>>
>>>
>>> --
>>> Studying for the Turing test
>>>
>>
>

