This is basically issue 17561 but without `Fiber`s. A fix for this is likely to fix issue 17561 as well, so 17561 could be considered a duplicate of this one. I'm leaving 17561 open for now, because it might be solvable by working around the more general issue described here.
Related links:
* https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt
* https://github.com/dlang/druntime/pull/1698
Memory-corrupting code:
----
import core.sys.posix.sys.mman;
import std.conv: text;
enum pageSize = 1024 * 4; // 4 KiB
enum stackSize = 1024 * 1024 * 3; // 3 MiB
void main()
{
    /* Allocate memory near the stack. */
    ubyte foo;
    auto stackTop = &foo + pageSize - cast(size_t) &foo % pageSize;
    auto stackBottom = stackTop - stackSize;
    auto sz = pageSize;
    void* dst = stackBottom - sz;
    void* p = mmap(dst, sz, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON,
        -1, 0);
    assert(p == dst, "failed to allocate page");

    /* Set it up with zeroes. */
    auto mem = cast(ubyte[]) p[0 .. sz];
    mem[] = 0;
    foreach (x; mem) assert(x == 0, text(x)); /* passes */

    /* Break out of the stack and wreak havoc. */
    wreak_havoc();

    /* Look at the mess. */
    foreach (x; mem) assert(x == 0, text(x)); /* fails; prints "13" */
}

void wreak_havoc() @safe
{
    ubyte[stackSize] x = void;
    x[0] = 13;
}
----
As in issue 17561, the surrounding code is not marked `@safe`, but as far as I can tell it is actually safe. It's the void-initialized static array that breaks safety.
In a 32-bit program it's also possible to get there with `malloc` instead of a targeted `mmap`:
----
/* WARNING: This fails quickly for me in a 32-bit Ubuntu VM, but it can
potentially consume all memory. */
import core.stdc.stdlib: malloc;
import std.conv: text;
enum pageSize = 1024 * 4; // 4 KiB
enum stackSize = 1024 * 1024 * 3; // 3 MiB
void main()
{
    ubyte foo;
    auto stackTop = &foo + pageSize - cast(size_t) &foo % pageSize;
    auto stackBottom = stackTop - stackSize;
    assert(cast(size_t) stackBottom % pageSize == 0);
    while (true)
    {
        /* Allocate memory. */
        auto sz = 1024 * 1024; // 1 MiB
        auto p = malloc(sz);
        assert(p !is null, "malloc failed");
        assert(stackBottom > p);

        /* See if it's near the stack.  The cast is needed because
           pointer difference is not defined on void*. */
        size_t distance = stackBottom - cast(ubyte*) p;
        if (distance <= sz)
        {
            /* Set it up with zeroes. */
            auto mem = cast(ubyte[]) p[0 .. sz];
            mem[] = 0;
            foreach (x; mem) assert(x == 0, text(x)); /* passes */

            /* Break out of the stack and wreak havoc. */
            wreak_havoc();

            /* Look at the mess. */
            foreach (x; mem) assert(x == 0, text(x)); /* fails; prints "13" */
            break;
        }
    }
}

void wreak_havoc() @safe
{
    ubyte[stackSize] x = void;
    x[0] = 13;
}
----
Comment #1 by bugzilla — 2021-06-25T20:45:10Z
The compiler should reject any stack frame larger than 4 KiB. The operating system places a guard page at the end of the reserved stack area; a segfault in that page is caught by the OS, which then grows the reserved stack area.
But if the access lands more than 4 KiB past the end of the stack, it misses the guard page and this mechanism never triggers. Worse, because of wraparound in stack-address arithmetic, any address becomes accessible.
Comment #2 by robert.schadek — 2024-12-13T18:52:53Z