Bug 3284 – snn linked programs never release memory back to the OS
Status
RESOLVED
Resolution
INVALID
Severity
blocker
Priority
P1
Component
druntime
Product
D
Version
D2
Platform
All
OS
Windows
Creation time
2009-09-03T03:08:12Z
Last change time
2023-01-19T03:33:13Z
Assigned to
No Owner
Creator
Dominik Susmel
Comments
Comment #0 by dominik — 2009-09-03T03:08:12Z
Programs using the GC or manual malloc/free never actually release memory back to the OS (Windows), which causes programs with heavy memory allocation/deallocation to swap pages heavily and eventually crash.
This blocking issue is also present in the 1.047 release (that version was not available in the version selection).
You can trivially reproduce the effect by malloc'ing several MB and subsequently freeing them - watch the process memory usage before and after.
Comment #1 by dlang-bugzilla — 2009-12-03T19:50:59Z
Actually, the garbage collector doesn't use malloc to allocate heap memory (it uses the OS-specific page allocation functions), but it does suffer from the same problem.
Comment #2 by cauterite — 2016-08-20T00:39:13Z
I highly suspect this issue has already been resolved.
Here's a simple test:
import core.memory;
void main() {
    // allocate and free lots of 10MB arrays
    foreach (_; 0 .. 1000) {
        auto x = GC.calloc(10_000_000, 1);
        GC.free(x);
        x = null;
    }
    import core.stdc.stdio; // std.c.stdio is deprecated
    printf("done\n"); getchar();
}
If you remove the `GC.free(x)`, the working set grows to more than 1 GB; if you leave it in, memory usage stays at a normal ~15 MB or so.
So the GC is definitely releasing pages back to the OS when it deallocates.
And before you ask, yes I am linking with SNN.lib
Comment #3 by cauterite — 2016-08-21T20:07:31Z
reopen if there's still a way to trigger this bug
Comment #4 by dfj1esp02 — 2016-08-22T10:20:39Z
As I understand, the test is as follows:
import core.memory, core.stdc.stdio;
void main()
{
    void*[100] arrays;
    // allocate and free lots of 10MB arrays
    foreach (ref x; arrays)
    {
        x = GC.calloc(10_000_000, 1);
    }
    foreach (ref x; arrays)
    {
        GC.free(x);
        x = null;
    }
    puts("must have a small working set here");
    getchar();
}
(didn't test)
I.e. the working set never shrinks, so your best strategy is not to let it ever grow.
Comment #5 by cauterite — 2016-08-22T10:56:03Z
My mistake; your adjusted test does in fact leave a massive working set.
I think I misunderstood the original bug report, because when you call GC.minimize() it does successfully reduce working set to normal size.
So the exact problem then is that the GC doesn't call minimize() automatically when it is appropriate. Currently, minimize() is only ever called when an allocation fails.
Ideally the GC should minimize during collection whenever the amount of unused reserved memory reaches some threshold. With my limited knowledge of the GC's internals, this sounds like a simple patch, so I might give it a crack soon, lest this bug remain open for 7 whole years.
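A sketch of the workaround implied here, extending comment #4's test with the explicit GC.minimize() call (untested, like the original):

```d
import core.memory;
import core.stdc.stdio : puts, getchar;

void main()
{
    void*[100] arrays;
    foreach (ref x; arrays)
        x = GC.calloc(10_000_000, 1);
    foreach (ref x; arrays)
    {
        GC.free(x);
        x = null;
    }
    // Fully-free pools are only returned to the OS on an explicit request:
    GC.minimize();
    puts("working set should be small again here");
    getchar();
}
```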
Comment #6 by dfj1esp02 — 2016-08-22T12:42:02Z
The original description probably complains about C malloc too - worth checking.
Comment #7 by Ajieskola — 2021-11-01T17:53:08Z
The functions in question reside in the core namespace, so reclassifying as a DRuntime issue.
Comment #9 by dlang-bugzilla — 2023-01-19T03:33:13Z
I think this needs more focus/clarity of what is broken and needs to be fixed. Is it the C functions or the GC?
Note that we don't use the libc allocators in the GC, we use the OS APIs directly.
Also worth noting that heap allocators, whether new (GC), malloc (libc), or HeapAlloc (OS), are all vulnerable to fragmentation. Programs can only release memory back to the OS if the entire page is free.
It's possible that we no longer release memory to the OS after a GC cycle, because in many applications any released memory is going to be immediately requested again. Applications which require memory in bursts are comparatively rare. I recall that we no longer reserve memory from the OS - though it was a thing we could do and it aligned with the GC design, it was not useful in any measurable way, so it was removed.