Here is an excerpt from a stack trace I got while profiling
with OProfile:
#0 sem_wait () from /lib64/libpthread.so.0
#1 thread_suspendAll () at core/thread.d:2471
#2 gc.gcx.Gcx.fullcollect() (this=...) at gc/gcx.d:2427
#3 gc.gcx.Gcx.bigAlloc() (this=..., size=16401, poolPtr=0x7fc3d4bfe3c8, alloc_size=0x7fc3d4bfe418) at gc/gcx.d:2099
#4 gc.gcx.GC.mallocNoSync (alloc_size=0x7fc3d4bfe418, bits=10, size=16401, this=...) gc/gcx.d:503
#5 gc.gcx.GC.malloc() (this=..., size=16401, bits=10, alloc_size=0x7fc3d4bfe418) gc/gcx.d:421
#6 gc.gc.gc_qalloc (ba=10, sz=<optimized out>) gc/gc.d:203
#7 gc_qalloc (sz=<optimized out>, ba=10) gc/gc.d:198
#8 _d_newarrayT (ti=..., length=4096) rt/lifetime.d:807
#9 sequencer.algorithm.gzip.HuffmanTree.__T6__ctorTG32hZ.__ctor() (this=..., bitLengths=...) sequencer/algorithm/gzip.d:444
Two more threads are alive, but they are waiting on a condition
variable (i.e. in pthread_cond_wait(), called from my own code and
not from druntime). Is there some obvious way I could have
dead-locked the GC? Or is there a bug?
This was compiled with GDC using DMD FE 2.062.
I wasn't able to reproduce this without the profiler running.
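For illustration only, here is a minimal, hypothetical reduction of the threading layout described above (it is not the original program and all names are made up): two worker threads block in pthread_cond_wait() via core.sync, while the main thread keeps allocating until the GC runs fullcollect() and thread_suspendAll(), as in frames #0-#2.

import core.sync.mutex : Mutex;
import core.sync.condition : Condition;
import core.thread : Thread;

void main()
{
    auto m = new Mutex;
    auto c = new Condition(m);

    // Two consumer threads parked in pthread_cond_wait(), from user code
    // rather than from druntime.
    foreach (i; 0 .. 2)
        new Thread({
            synchronized (m)
                c.wait();          // blocks in pthread_cond_wait()
        }).start();

    // The producer keeps allocating; eventually bigAlloc() triggers a
    // collection, which calls thread_suspendAll() as in frame #1.
    ubyte[][] keep;
    foreach (i; 0 .. 1_000_000)
        keep ~= new ubyte[16_401]; // same allocation size as in frame #3
}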
Comment #1 by Marco.Leise — 2013-11-07T15:34:57Z
I wish I had uploaded a test case, but the one I had was rather large, and you can't really dustmite a deadlock condition even if it happens frequently.
Comment #2 by Marco.Leise — 2014-04-05T19:02:29Z
Closing this for now without a test case.
Comment #3 by Marco.Leise — 2014-09-05T11:00:14Z
*** This issue has been marked as a duplicate of issue 4890 ***
Comment #4 by dfj1esp02 — 2014-09-06T15:32:47Z
Issue 4890 reproduces on a newly created thread; is that the case for you? If it's a kernel bug, please say which OS, kernel version, and processor (speed, cores, HT) you use.
Comment #5 by dfj1esp02 — 2014-09-06T15:36:02Z
Please also provide stack traces of the other threads.
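For example, gdb can dump the backtraces of every thread at once:

(gdb) thread apply all bt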
Comment #6 by tetyys — 2017-06-18T11:28:09Z
I'm having this issue only under the profile build (dub --build=profile).
(gdb) bt
#0 sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
#1 0x00000000011bbc4e in thread_suspendAll ()
#2 0x00000000011a22e1 in gc.impl.conservative.gc.Gcx.fullcollect() ()
#3 0x00000000011a078e in gc.impl.conservative.gc.Gcx.bigAlloc() ()
#4 0x00000000011a4659 in gc.impl.conservative.gc.ConservativeGC.__T9runLockedS79_D2gc4impl12conservative2gc14ConservativeGC12mallocNoSyncMFNbmkKmxC8TypeInfoZPvS40_D2gc4impl12conservative2gc10mallocTimelS40_D2gc4impl12conservative2gc10numMallocslTmTkTmTxC8TypeInfoZ.runLocked() ()
#5 0x000000000119dd4a in gc.impl.conservative.gc.ConservativeGC.malloc() ()
#6 0x000000000113ba6b in gc_malloc ()
#7 0x000000000113e8db in _d_newclass ()
#8 0x0000000000ff21f8 in vibe.core.drivers.libevent2.Libevent2Driver.connectTCP() (this=0x6de2dda2b500, bind_addr=..., addr=...)
at ../.dub/packages/vibe-d-0.8.0-beta.8/vibe-d/core/vibe/core/drivers/libevent2.d:347
#9 0x000000000104a60a in vibe.core.net.connectTCP() (bind_address=..., addr=...)
at ../.dub/packages/vibe-d-0.8.0-beta.8/vibe-d/core/vibe/core/driver.d:33
#10 0x0000000000a560fc in handle.GetRTXmlResponse() (Xml=1187, __Xml_8=0x1247e2a <_TMP9856>)
at ../.dub/packages/vibe-d-0.8.0-beta.8/vibe-d/core/vibe/core/net.d:145
#11 0x0000000000a538af in app._sharedStaticCtor13() () at source/app.d:215
#12 0x0000000000a55db5 in app.__modsharedctor() ()
#13 0x00000000011a8f42 in rt.minfo.__T14runModuleFuncsS442rt5minfo11ModuleGroup8runCtorsMFZ9__lambda2Z.runModuleFuncs() ()
#14 0x00000000011a8bd5 in rt.minfo.ModuleGroup.runCtors() ()
#15 0x00000000011a8ded in rt.minfo.rt_moduleCtor() ()
#16 0x0000000001143341 in rt.sections_elf_shared.DSO.opApply() ()
#17 0x00000000011a8dbd in rt_moduleCtor ()
#18 0x00000000011a637b in rt_init ()
#19 0x000000000113da52 in rt.dmain2._d_run_main() ()
#20 0x000000000113d9f8 in rt.dmain2._d_run_main() ()
#21 0x000000000113d968 in _d_run_main ()
#22 0x0000000000a6ce70 in main ()
(gdb) info threads
Id Target Id Frame
* 1 Thread 0x6de2ddb2dc00 (LWP 4036) "rtsrtorrentrela" sem_wait ()
at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
This occurs early when running the program.
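For context, a hypothetical sketch of the shape of the code in frames #8-#11 (not the actual source; host and port are placeholders): a shared static constructor opens a TCP connection with vibe.d, so the first GC allocation already happens during rt_moduleCtor(), before main() runs.

import vibe.core.net : connectTCP;

shared static this()
{
    // connectTCP() allocates a class internally (_d_newclass in frame #7);
    // under the profile build that allocation ends up blocked in
    // fullcollect()/thread_suspendAll()/sem_wait().
    auto conn = connectTCP("example.org", 80);
}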
Comment #7 by ibuclaw — 2022-12-30T23:22:26Z
*** This issue has been marked as a duplicate of issue 15939 ***