Bug 4379 – ICE(blockopt.c): foreach over huge tuple, only with -O

Status
RESOLVED
Resolution
FIXED
Severity
normal
Priority
P2
Component
dmd
Product
D
Version
D2
Platform
Other
OS
Windows
Creation time
2010-06-23T19:42:00Z
Last change time
2011-03-03T00:39:59Z
Keywords
ice-on-valid-code, patch
Assigned to
nobody
Creator
dsimcha

Comments

Comment #0 by dsimcha — 2010-06-23T19:42:43Z
Requires at least 14 elements to fail.

import std.stdio, std.typetuple;

alias TypeTuple!(1,2,3,4,5,6,7,8,9,10,11,12,13,14) nums;

void main() {
    foreach(num1; nums) {
        foreach(num2; nums) {
            writeln(num1, " ", num2);
        }
    }
}

Error message (in the reduced test case):

Internal error: ..\ztc\blockopt.c 619

In the original case that I isolated this from, the compiler would eat ~300 MB of memory and hang. I can't reproduce this symptom in a reduced test case because the compiler crashes first, so I'm not sure whether the two symptoms are related.
Comment #1 by dsimcha — 2010-06-23T19:57:24Z
Actually, this appears to be related to the total size of the loop body being generated at compile time. Nesting has nothing to do with it. The cutoff appears to be (of all numbers) 198 elements.

import std.stdio, std.typetuple;

// CTFE function to generate a huge tuple.
string generateHugeTuple() {
    string ret = "alias TypeTuple!(";
    foreach(outer; 0..198) {
        ret ~= '1';
        ret ~= ',';
    }
    ret = ret[0..$ - 1];  // Drop last ,
    ret ~= ") LetterTuple;";
    return ret;
}

mixin(generateHugeTuple());

void main() {
    foreach(letter; LetterTuple) {
        writeln(letter);
    }
}
Comment #2 by nfxjfg — 2010-06-24T05:35:43Z
Can't reproduce it with 2.046.
Comment #3 by dsimcha — 2010-06-24T19:42:08Z
I was able to reproduce this on 2.046. Try making the tuples bigger. Maybe it's at least somewhat nondeterministic — a memory allocation bug or something.
Comment #4 by clugdbug — 2010-06-25T00:00:21Z
Only occurs when compiled with -O. It's an optimizer issue. This hits the limit of the optimizer, which loops a maximum of 200 times, hence the ICE at 198 elements. Bug 3681 also hits the same limit, but for a different reason. Actually, I think the problem is in the glue layer (maybe in UnrolledStatement?), because the backend shouldn't need to do much work in this example.

Reduced test case
----------
template BigTuple(U...) {
    alias U BigTuple;
}

alias BigTuple!(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1) Tuple4379;

void bug4379() {
    foreach(x; Tuple4379) { }
}
Comment #5 by clugdbug — 2010-11-22T01:26:45Z
This turns out to be very simple. When merging blocks together, we need to allow one pass per block, since it only merges one block per pass. In the test case, there are more than 200 blocks, and they all get merged into one.

PATCH: blockopt.c, blockopt(), line 595:

 void blockopt(int iter)
 {
     block *b;
     int count;

     if (OPTIMIZER)
     {
+        int iterationLimit = 200;
+        if (iterationLimit < numblks)
+            iterationLimit = numblks;
         count = 0;
         do
         {

and line 622:

         do
         {
             compdfo();          /* compute depth first order (DFO) */
             elimblks();         /* remove blocks not in DFO        */
-            assert(count < 200);
+            assert(count < iterationLimit);
             count++;
         } while (mergeblks());  /* merge together blocks           */
Comment #6 by bugzilla — 2010-12-26T22:51:51Z
Comment #7 by clugdbug — 2011-03-03T00:39:59Z
*** Issue 5656 has been marked as a duplicate of this issue. ***