// Deliberately buggy CTFE example: the goto forms an endless loop, so the
// compile-time evaluation of hello() keeps growing `result` until the
// compiler runs out of memory.
char[] hello()
{
    char[] result;          // `char[] result = "";` would not compile in D2
hello:
    result ~= `abc`;
    goto hello;             // loops forever; the return below is unreachable
    return result;
}

void main()
{
    pragma(msg, hello());   // pragma(msg, ...) forces CTFE of hello()
}
What I want to say is that DMD should limit the memory use of CTFE. If CTFE keeps allocating memory, releasing some and reallocating more, compilation turns into a disaster.
We should probably have a compiler option to limit the compiler's memory use; if the limit is exceeded, the compilation should be aborted.
Buggy code can easily drop into an endless loop in CTFE, as the example above shows; a sketch of what such a limit might look like follows.
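To make the proposal concrete, here is a minimal sketch. It is purely hypothetical: neither a CtfeBudget type nor a -ctfe-mem-limit switch exists in DMD. The idea is that the CTFE interpreter would charge every compile-time allocation against a per-compilation budget and abort with a diagnostic once the budget is exhausted.

// Hypothetical sketch only: CtfeBudget and -ctfe-mem-limit are made up here
// to illustrate the proposal; they are not part of DMD.
import std.exception : enforce;
import std.format : format;

struct CtfeBudget
{
    size_t limitBytes;   // e.g. taken from a hypothetical -ctfe-mem-limit=N switch
    size_t usedBytes;

    // The interpreter would call this before every compile-time allocation
    // (array append, new expression, associative-array insertion, ...).
    void charge(size_t n)
    {
        usedBytes += n;
        enforce(usedBytes <= limitBytes,
            format("CTFE exceeded the memory limit of %s bytes; " ~
                   "possible endless loop in compile-time code", limitBytes));
    }
}

unittest
{
    auto budget = CtfeBudget(1024);
    budget.charge(512);              // within the budget
    bool blewBudget;
    try budget.charge(1024);         // pushes usage past the limit
    catch (Exception) blewBudget = true;
    assert(blewBudget);
}

With the endless loop above, such a budget would grow by three bytes per iteration and the compiler would stop with an error instead of exhausting the machine.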
Comment #1 by smjg — 2007-03-20T10:46:16Z
The user shouldn't have to remember to set a compiler flag each time just in case. There therefore ought to be a default limit, which the user can override if necessary.
Comment #2 by vietor — 2007-07-19T23:41:07Z
Requiring a flag to act normally is counterintuitive.
Having a default memory limit for the compiler would result in unexpected failure during normal and valid operation. Causing the compiler to run out of memory is, by no stretch of the imagination, an uncommon case. Thus if anything should require passing an additional argument to the compiler, imposing a limit should, not removing one.
If the user is worried about creating a situation where infinite allocation could occur, and running on an OS that will not handle it in at least a semi-graceful manner, then it should be their responsibility to remember to limit memory usage.
Comment #3 by vietor — 2007-07-20T20:34:59Z
In the above, "an uncommon case" should read "a common case".
Comment #4 by davidl — 2007-07-22T20:31:56Z
No, obviously most apps *wouldn't* use memory above a certain level; that is the common case. As for the special case, apps that need a large amount of memory, exceeding the compiler's limit, should turn on the compiler switch, after the author has tested the piece of code thoroughly.
Comment #5 by dlang-bugzilla — 2007-07-23T08:54:55Z
I agree with Stewart and disagree with Vietor. It is an uncommon case for a program's CTFE code to use large amounts of memory, and it is always best to have safety options enabled by default. Not all users can be expected to be informed and constantly aware that compiling a program is a potential security risk, and in a multi-user environment simply compiling a program, which at first sight sounds like a rather harmless operation, can become disastrous.
Comment #6 by vietor — 2007-07-23T22:03:57Z
Seems I'm strongly in the minority here.
My chief concern is:
What is a sane value for a memory limit and what do you plan to do in 10 years when it's no longer a sane value? Additionally, sane for who?
Calling compiler-induced memory exhaustion a security risk is making a mountain out of a molehill. At best it's a fairly weak DoS that, though it will dramatically reduce system performance as it pushes into swap, will be automatically resolved by the OS when the compiler hits the end of swap and is killed. The likelihood of the memory allocation subsystem killing anything other than the compiler gone wild is very small, but at worst this could result in randomly killing other processes.
Calling this disastrous is playing to hysteria. If you are serious about reliability in a multi-user environment, yet do not have per-user resource limits in place to prevent this sort of problem, then your sysadmin is not doing their job (see the resource-limit sketch after this comment).
Additionally, this is a compiler, it's a development tool. If you are running it on mission critical servers that cannot withstand an easily contained memory exhaustion, then you have far greater problems than a "misbehaving" compiler.
Solving this sort of problem by demanding that each application decide upon an arbitrary memory limit to impose upon itself is asking for trouble. Any situation in which this behavior will be a problem and not just an inconvenience, almost certainly has far greater threats to worry about.
I recognize that I am being perhaps overly passionate about a trivial issue in only one of two compilers for a language that is hardly mainstream, and that regardless of how it's decided it will probably never affect me, as I detest gratuitous preprocessing and compile-time shenanigans. However, am I the only one who thinks that creating situations in which valid operations will fail without additional effort, in order to provide an expedient solution to a problem better solved by other means, is the wrong thing to do?
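For reference, the OS-level resource limits mentioned above can be applied to a single compiler invocation without any compiler support. The following is only a sketch under the assumption of a Linux-style POSIX system; the 2 GiB cap and the forwarded dmd arguments are arbitrary examples, and RLIMIT_AS behaviour differs between platforms.

// Wrapper sketch: cap the address space with setrlimit, then run dmd.
// The limit is inherited by the child process, so a runaway CTFE loop
// fails with an out-of-memory error instead of eating the whole machine.
import core.sys.posix.sys.resource : rlimit, setrlimit, RLIMIT_AS;
import std.process : spawnProcess, wait;
import std.stdio : writeln;

int main(string[] args)
{
    enum size_t twoGiB = 2UL * 1024 * 1024 * 1024;  // arbitrary example cap
    rlimit lim = { rlim_cur: twoGiB, rlim_max: twoGiB };
    if (setrlimit(RLIMIT_AS, &lim) != 0)
        writeln("warning: could not set the address-space limit");

    // Forward the remaining command line to dmd and propagate its exit code.
    return wait(spawnProcess(["dmd"] ~ args[1 .. $]));
}

A shell ulimit or per-user entries in /etc/security/limits.conf achieve much the same effect without a wrapper.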
Comment #7 by davidl — 2007-09-04T05:08:17Z
200M is a sane value. Seems few sources use over this amount.
And with an option -MLimit=400M ( or so ) you can use over 200M
It's sane to limit the memory usage.
Comment #8 by Marco.Leise — 2014-01-17T13:07:41Z
(In reply to comment #7)
> 200M is a sane value. Seems few sources use over this amount.
> And with an option -MLimit=400M ( or so ) you can use over 200M
> It's sane to limit the memory usage.
You surely mean 2 GiB? I haven't measured it, but from the look of my RAM usage graph I'd guess that every single module of GtkD already uses 500 MiB or more.
Comment #9 by pro.mathias.lang — 2019-05-11T17:19:21Z
As experience has shown, there is no "sane" value. Some sources need a 128 GB server to compile, which, while not great, is simply what they require. And even in the case of a bug that loops forever and consumes all memory, the compiler would just end up being killed by the OS or crash.