[Mono-dev] Mono crash

mono user mono.user789 at gmail.com
Sat Sep 6 14:39:42 UTC 2014


I am afraid that max-heap-size might not be the reason. I am seeing the
crashes at different levels of memory usage.

I have checked that setting the heap size to a small value does not result
in a crash - an exception is thrown instead. It does not appear to be the
same OOM exception as under .NET, but there is no crash with Mono
stacktraces and the like. In contrast, the issue I need help with produces
several native stacktraces and no IL stacktrace.
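
For reference, the well-behaved failure mode is easy to provoke. A minimal
sketch, assuming sgen; the size and the program name are only illustrative:

    MONO_GC_PARAMS=max-heap-size=64m mono myapp.exe

With a cap like that, a failed allocation surfaces as a managed exception
with an IL stacktrace, rather than as the native stacktraces below.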

Also, it would be hard to explain why running under a debugger makes the
problem go away if that were the reason.

BTW, the message at the end of this stacktrace did not appear under
Mono 3.6 and might be an additional indication that some Mono internals are
rather ill at crash time, though in principle it could be unprovoked and/or
unrelated gdb breakage.

Thread 1 (Thread 0x7fedacfca780 (LWP 30016)):
#0  0x00007fedac1b19e4 in sigsuspend () from /lib64/libc.so.6
#1  0x00000000005cbf54 in suspend_thread (sig=<value optimized out>,
siginfo=<value optimized out>, context=0x7fffaa5a8440) at
sgen-os-posix.c:113
#2  suspend_handler (sig=<value optimized out>, siginfo=<value optimized
out>, context=0x7fffaa5a8440) at sgen-os-posix.c:140
#3  <signal handler called>
#4  0x00007fedac51e5ba in pthread_cond_wait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
#5  0x000000000060d99c in _wapi_handle_timedwait_signal_handle
(handle=0x280a, timeout=0x0, alertable=1, poll=<value optimized
out>) at handles.c:1596
#6  0x000000000061fe4b in WaitForSingleObjectEx (handle=0x280a,
timeout=4294967295, alertable=1) at wait.c:194
#7  0x000000000058233c in ves_icall_System_Threading_Thread_Join_internal
(this=0x7fedacf242d0, ms=-1, thread=0x280a) at threads.c:1306
#8  0x00000000414e04de in ?? ()
#9  0x00007feda5c50908 in ?? ()
#10 0x00007fffaa5a8fa0 in ?? ()
#11 0x0000000000000001 in ?? ()
#12 0x00007fffaa5a8fa0 in ?? ()
#13 0x00000000414c5c40 in ?? ()
#14 0x0000000000a0ba50 in ?? ()
#15 0x00000000414e046c in ?? ()
#16 0x00007fffaa5a8d40 in ?? ()
#17 0x00007fffaa5a8b30 in ?? ()
#18 0x00007feda45a51b3 in System.Threading.Thread:Join
(this=../../gdb/dwarf2-frame.c:694: internal-error: Unknown CFI
encountered.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Quit this debugging session? (y or n) [answered Y; input not from terminal]
../../gdb/dwarf2-frame.c:694: internal-error: Unknown CFI encountered.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Create a core file of GDB? (y or n) [answered Y; input not from terminal]





On 6 September 2014 08:58, Andrea Francesco Iuorio <
andreafrancesco.iuorio at gmail.com> wrote:

> Stupid question, but someone has to ask: have you set MONO_GC_PARAMS to
> use a bigger heap? You can set the Mono heap size by appending
> "max-heap-size=xxxx" to your MONO_GC_PARAMS.
>
>
> 2014-09-05 20:24 GMT+02:00 mono user <mono.user789 at gmail.com>:
>
>> Sorry, I meant to say I was using 3.8.0, not 3.0.8. I'll try the git
>> version later.
>>
>>
>> On 5 September 2014 19:19, Juan Cristóbal Olivares <
>> cristobal at cxsoftware.com> wrote:
>>
>>> I think you should try with mono 3.8 or, even better, with the git
>>> version.
>>>
>>>
>>> On Fri, Sep 5, 2014 at 1:26 PM, mono user <mono.user789 at gmail.com>
>>> wrote:
>>>
>>>> It was suggested I try the Boehm garbage collector. My code dies
>>>> quickly, saying "Too many heap sections: Increase MAXHINCR or
>>>> MAX_HEAP_SECTS".
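>>>>
>>>> (MAXHINCR and MAX_HEAP_SECTS are compile-time constants in the Boehm
>>>> collector, so raising them appears to require rebuilding. An untested
>>>> sketch, assuming an autotools build of Mono and Boehm's LARGE_CONFIG
>>>> switch for bigger heap limits:
>>>>
>>>>     CPPFLAGS=-DLARGE_CONFIG ./configure
>>>>     make && make install
>>>>
>>>> Treat this as a guess rather than a recipe.)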
>>>>
>>>> It was also suggested the reason might be that I am running out of
>>>> memory. That is a possibility, though I now have a crash that happens when
>>>> Mono is using about 12G of virtual space on a 64G machine. I still wanted
>>>> to verify whether the reason was a failed allocation, so I ran mono in a
>>>> debugger. The code then executed fine, suggesting that lack of resources
>>>> might not be the reason for the crashes. The same code fails reliably when
>>>> started from the command line. Having said that, something probably does
>>>> think that memory has run out: I have seen error messages along the lines
>>>> of "Error: Garbage collector could not allocate 16384 bytes of memory for
>>>> major heap section." even though there is plenty of memory available. I
>>>> have also seen clean out-of-memory crashes (i.e. my code terminates cleanly
>>>> with a clear error message and no mono stacktraces).
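>>>>
>>>> (For anyone reproducing the debugger behaviour: a typical way to run
>>>> Mono under gdb is the sketch below, with "myapp.exe" standing in for
>>>> the real program:
>>>>
>>>>     gdb --args mono myapp.exe
>>>>     (gdb) run
>>>>
>>>> The same command line, started without gdb, fails reliably here.)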
>>>>
>>>> I am afraid I cannot provide an example, except possibly if we can
>>>> narrow down the cause enough for me to reproduce the crash with a small
>>>> amount of code and without the proprietary data that is currently needed
>>>> to reproduce it. I am running 3.0.8.
>>>>
>>>> Many thanks for any help/suggestions/etc.
>>>>
>>>>
>>>>
>>>> On 22 August 2014 15:55, mono user <mono.user789 at gmail.com> wrote:
>>>>
>>>>> Is there anything I can do to avoid the following crash? I am running
>>>>> Mono 3.6.0. I am not using any native libraries that don't ship with
>>>>> Mono. Many thanks.
>>>>>
>>>>>
>>>>> Stacktrace:
>>>>>
>>>>>
>>>>> Native stacktrace:
>>>>>
>>>>>
>>>>> Debug info from gdb:
>>>>>
>>>>> Mono support loaded.
>>>>> [Thread debugging using libthread_db enabled]
>>>>> [New Thread 0x7fba882d4700 (LWP 7103)]
>>>>> [New Thread 0x7fba88315700 (LWP 7102)]
>>>>> [New Thread 0x7fba833d0700 (LWP 7100)]
>>>>> [New Thread 0x7fba88b0e700 (LWP 7099)]
>>>>> 0x00007fba90992cd4 in sigsuspend () from /lib64/libc.so.6
>>>>>   5 Thread 0x7fba88b0e700 (LWP 7099)  0x00007fba90992cd4 in sigsuspend
>>>>> () from /lib64/libc.so.6
>>>>>   4 Thread 0x7fba833d0700 (LWP 7100)  0x00007fba90d032ad in waitpid ()
>>>>> from /lib64/libpthread.so.0
>>>>>   3 Thread 0x7fba88315700 (LWP 7102)  0x00007fba90a49163 in epoll_wait
>>>>> () from /lib64/libc.so.6
>>>>>   2 Thread 0x7fba882d4700 (LWP 7103)  0x00007fba90d01a21 in
>>>>> sem_timedwait () from /lib64/libpthread.so.0
>>>>> * 1 Thread 0x7fba917ab780 (LWP 7098)  0x00007fba90992cd4 in sigsuspend
>>>>> () from /lib64/libc.so.6
>>>>>
>>>>> Thread 5 (Thread 0x7fba88b0e700 (LWP 7099)):
>>>>> #0  0x00007fba90992cd4 in sigsuspend () from /lib64/libc.so.6
>>>>> #1  0x00000000005cac54 in suspend_thread (sig=<value optimized out>,
>>>>> siginfo=<value optimized out>, context=0x7fba88b0d780) at
>>>>> sgen-os-posix.c:113
>>>>> #2  suspend_handler (sig=<value optimized out>, siginfo=<value
>>>>> optimized out>, context=0x7fba88b0d780) at sgen-os-posix.c:140
>>>>> #3  <signal handler called>
>>>>> #4  0x00007fba90d0192e in sem_wait () from /lib64/libpthread.so.0
>>>>> #5  0x000000000062c488 in mono_sem_wait (sem=0x977ca0, alertable=1) at
>>>>> mono-semaphore.c:101
>>>>> #6  0x00000000005a501a in finalizer_thread (unused=<value optimized
>>>>> out>) at gc.c:1073
>>>>> #7  0x00000000005823ab in start_wrapper_internal (data=<value
>>>>> optimized out>) at threads.c:660
>>>>> #8  start_wrapper (data=<value optimized out>) at threads.c:707
>>>>> #9  0x0000000000631b16 in inner_start_thread (arg=<value optimized
>>>>> out>) at mono-threads-posix.c:100
>>>>> #10 0x00007fba90cfb9d1 in start_thread () from /lib64/libpthread.so.0
>>>>> #11 0x00007fba90a48b6d in clone () from /lib64/libc.so.6
>>>>>
>>>>> Thread 4 (Thread 0x7fba833d0700 (LWP 7100)):
>>>>> #0  0x00007fba90d032ad in waitpid () from /lib64/libpthread.so.0
>>>>> #1  0x00000000004a33f8 in mono_handle_native_sigsegv (signal=<value
>>>>> optimized out>, ctx=<value optimized out>) at mini-exceptions.c:2305
>>>>> #2  0x00000000005005cf in mono_arch_handle_altstack_exception
>>>>> (sigctx=0x7fba9173bac0, fault_addr=<value optimized out>,
>>>>> stack_ovf=0) at exceptions-amd64.c:905
>>>>> #3  0x0000000000415b69 in mono_sigsegv_signal_handler (_dummy=11,
>>>>> info=0x7fba9173bbf0, context=0x7fba9173bac0) at mini.c:6842
>>>>> #4  <signal handler called>
>>>>> #5  alloc_sb (heap=0x979530) at lock-free-alloc.c:166
>>>>> #6  alloc_from_new_sb (heap=0x979530) at lock-free-alloc.c:415
>>>>> #7  mono_lock_free_alloc (heap=0x979530) at lock-free-alloc.c:459
>>>>> #8  0x00000000005d4bc7 in sgen_alloc_internal (type=<value optimized
>>>>> out>) at sgen-internal.c:169
>>>>> #9  0x00000000005eda92 in sgen_gray_object_alloc_queue_section
>>>>> (queue=0x978ba0) at sgen-gray.c:58
>>>>> #10 0x00000000005edadd in sgen_gray_object_enqueue (queue=0x978ba0,
>>>>> obj=<value optimized out>) at sgen-gray.c:96
>>>>> #11 0x00000000005d0a1c in pin_objects_from_addresses
>>>>> (section=0x7fba91744010, start=<value optimized out>,
>>>>> end=0x7fb481428040, start_nursery=<value optimized out>,
>>>>> end_nursery=<value optimized out>, ctx=...) at sgen-gc.c:987
>>>>> #12 0x00000000005d0afb in sgen_pin_objects_in_section
>>>>> (section=0x7fba91744010, ctx=...) at sgen-gc.c:1025
>>>>> #13 0x00000000005d2d81 in collect_nursery (unpin_queue=0x0,
>>>>> finish_up_concurrent_mark=0) at sgen-gc.c:2284
>>>>> #14 0x00000000005d3d88 in sgen_perform_collection
>>>>> (requested_size=4096, generation_to_collect=0, reason=0x706be2
>>>>> "Nursery full", wait_to_finish=<value optimized out>) at sgen-gc.c:3174
>>>>> #15 0x00000000005f0c64 in mono_gc_alloc_obj_nolock
>>>>> (vtable=0x7fb78073c190
>>>>> 0xbcc240
>>>>> 0xbcc240
>>>>> 0x7fb78073c190
>>>>> 0x7fb78073c190
>>>>> vtable(%s), size=<value optimized out>) at sgen-alloc.c:314
>>>>> #16 0x00000000005f1074 in mono_gc_alloc_obj (vtable=0x7fb78073c190
>>>>> 0xbcc240
>>>>> 0xbcc240
>>>>> 0x7fb78073c190
>>>>> 0x7fb78073c190
>>>>> vtable(%s), size=40) at sgen-alloc.c:490
>>>>> #17 0x0000000041e50e83 in ?? ()
>>>>> #18 0x00007fb9fc0025d0 in ?? ()
>>>>> #19 0x00007fb44dd3cda8 in ?? ()
>>>>> #20 0x0000000000000028 in ?? ()
>>>>> #21 0x00007fba8a778ef0 in ?? ()
>>>>> #22 0x00007fba83279d20 in ?? ()
>>>>> #23 0x00007fba8a553e58 in ?? ()
>>>>> #24 0x00007fba8a553e30 in ?? ()
>>>>> #25 0x00007fba833d06e0 in ?? ()
>>>>> #26 0x00007fb780721a38 in ?? ()
>>>>> #27 0x0000000041e4d004 in ?? ()
>>>>> #28 0x00007fb4e5be8c70 in ?? ()
>>>>> #29 0x0000000000000000 in ?? ()
>>>>>
>>>>> Thread 3 (Thread 0x7fba88315700 (LWP 7102)):
>>>>> #0  0x00007fba90a49163 in epoll_wait () from /lib64/libc.so.6
>>>>> #1  0x0000000000585e0a in tp_epoll_wait (p=0x9776a0) at
>>>>> ../../mono/metadata/tpool-epoll.c:118
>>>>> #2  0x00000000005823ab in start_wrapper_internal (data=<value
>>>>> optimized out>) at threads.c:660
>>>>> #3  start_wrapper (data=<value optimized out>) at threads.c:707
>>>>> #4  0x0000000000631b16 in inner_start_thread (arg=<value optimized
>>>>> out>) at mono-threads-posix.c:100
>>>>> #5  0x00007fba90cfb9d1 in start_thread () from /lib64/libpthread.so.0
>>>>> #6  0x00007fba90a48b6d in clone () from /lib64/libc.so.6
>>>>>
>>>>> Thread 2 (Thread 0x7fba882d4700 (LWP 7103)):
>>>>> #0  0x00007fba90d01a21 in sem_timedwait () from /lib64/libpthread.so.0
>>>>> #1  0x000000000062c59c in mono_sem_timedwait (sem=0x977808,
>>>>> timeout_ms=<value optimized out>, alertable=1) at mono-semaphore.c:64
>>>>> #2  0x00000000005870ea in async_invoke_thread (data=0x0) at
>>>>> threadpool.c:1566
>>>>> #3  0x00000000005823ab in start_wrapper_internal (data=<value
>>>>> optimized out>) at threads.c:660
>>>>> #4  start_wrapper (data=<value optimized out>) at threads.c:707
>>>>> #5  0x0000000000631b16 in inner_start_thread (arg=<value optimized
>>>>> out>) at mono-threads-posix.c:100
>>>>> #6  0x00007fba90cfb9d1 in start_thread () from /lib64/libpthread.so.0
>>>>> #7  0x00007fba90a48b6d in clone () from /lib64/libc.so.6
>>>>>
>>>>> Thread 1 (Thread 0x7fba917ab780 (LWP 7098)):
>>>>> #0  0x00007fba90992cd4 in sigsuspend () from /lib64/libc.so.6
>>>>> #1  0x00000000005cac54 in suspend_thread (sig=<value optimized out>,
>>>>> siginfo=<value optimized out>, context=0x7fff0cb35880) at
>>>>> sgen-os-posix.c:113
>>>>> #2  suspend_handler (sig=<value optimized out>, siginfo=<value
>>>>> optimized out>, context=0x7fff0cb35880) at sgen-os-posix.c:140
>>>>> #3  <signal handler called>
>>>>> #4  0x00007fba90cff5ba in pthread_cond_wait@@GLIBC_2.3.2 () from
>>>>> /lib64/libpthread.so.0
>>>>> #5  0x000000000060c34c in _wapi_handle_timedwait_signal_handle
>>>>> (handle=0x280a, timeout=0x0, alertable=1, poll=<value optimized
>>>>> out>) at handles.c:1596
>>>>> #6  0x000000000061e7fb in WaitForSingleObjectEx (handle=0x280a,
>>>>> timeout=4294967295, alertable=1) at wait.c:194
>>>>> #7  0x000000000058122c in
>>>>> ves_icall_System_Threading_Thread_Join_internal (this=0x7fba917102d0,
>>>>>   ms=-1, thread=0x280a) at threads.c:1306
>>>>> #8  0x0000000041e653f9 in ?? ()
>>>>> #9  0x0000000000a16d80 in ?? ()
>>>>> #10 0x00007fff0cb363f0 in ?? ()
>>>>> #11 0x00007fba8a4514a8 in ?? ()
>>>>> #12 0x00007fff0cb36190 in ?? ()
>>>>> #13 0x00007fff0cb35f80 in ?? ()
>>>>> #14 0x0000000000a23e40 in ?? ()
>>>>> #15 0x0000000000000000 in ?? ()
>>>>>
>>>>> =================================================================
>>>>> Got a SIGSEGV while executing native code. This usually indicates
>>>>> a fatal error in the mono runtime or one of the native libraries
>>>>> used by your application.
>>>>> =================================================================
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Regards,
>>> Juan Cristóbal Olivares
>>>
>>>
>>> *cxsoftware.com <http://www.cxsoftware.com/>*
>>> Skype: cxsoftware6001
>>> Mobile: +56-9 9871 7277
>>> Office: +56-2 2348 7642
>>>
>>>
>>>
>>
>>
>
>
> --
> *Andrea Francesco Iuorio*
> Student in Computer Science, Università degli Studi di Milano
> andreafrancesco.iuorio at gmail.com
>

