Bug 14958 – Casting a double to ulong sometimes produces wrong results

Status
RESOLVED
Resolution
INVALID
Severity
critical
Priority
P1
Component
dmd
Product
D
Version
D2
Platform
All
OS
Windows
Creation time
2015-08-24T16:48:00Z
Last change time
2015-08-27T16:53:52Z
Assigned to
nobody
Creator
marcioapm

Comments

Comment #0 by marcioapm — 2015-08-24T16:48:47Z
----
import std.stdio;

void main()
{
    double x = 1.2;
    writeln(cast(ulong)(x * 10.0));

    double y = 1.2 * 10.0;
    writeln(cast(ulong)y);
}
----

Output:
11
12

Using to!ulong instead of the cast does the right thing, and is a viable work-around.
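[Illustrative aside, not part of the original report: the same arithmetic can be checked in Python, whose floats are IEEE 754 doubles. Rounding the product to double gives exactly 12.0, while the mathematically exact product of the stored double 1.2 and 10 — which an 80-bit intermediate preserves here — is just under 12 and truncates to 11.]

```python
from fractions import Fraction

x = 1.2                   # nearest IEEE double to 1.2, slightly below it
print(int(x * 10.0))      # pure double arithmetic: product rounds up to 12.0 -> 12

exact = Fraction(x) * 10  # exact (unrounded) product of the stored double and 10
print(exact < 12)         # True: the true product is just under 12
print(int(exact))         # truncating the unrounded product gives 11
```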
Comment #1 by schveiguy — 2015-08-24T17:25:44Z
This is floating point, and D does all calculations at the highest precision possible (i.e. real). Please don't treat floating point as an exact representation of decimal numbers. https://en.wikipedia.org/wiki/Floating_point#Minimizing_the_effect_of_accuracy_problems
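[Illustrative aside, not part of the original thread: the inexactness is easy to see in any IEEE environment; Python floats are 64-bit doubles.]

```python
# The literal 1.2 has no exact binary representation: the stored
# double is the nearest representable value, slightly below 1.2.
print(f"{1.2:.20f}")       # prints 1.1999... rather than 1.2000...
print(1.2 * 10.0 == 12.0)  # True: the double product rounds to exactly 12
```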
Comment #2 by schveiguy — 2015-08-24T19:03:23Z
I'm not sure whether this is a bug or not, but the behavior comes down to the conversion from real to double vs. to ulong. This code can demonstrate:

----
import std.stdio;

void main()
{
    double x = 1.2;
    double x2 = x * 10.0;
    real y = x * 10.0;
    real y2 = x2;
    double y3 = y;
    writefln("%a, %a, %a", y, y2, cast(real)y3);
}
----

This outputs:

0xb.ffffffffffffep+0, 0xcp+0, 0xcp+0

This shows that in the real representation the number is not exact, but when converted to double it does represent what was asked for. However, we know that the conversion is definitely rounding somehow, because the original number isn't precise.
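[Illustrative aside, not part of the original thread: the hex values in comment #2 can be verified with exact rational arithmetic in Python. The unrounded product is exactly 12 - 2^-51, which is the 80-bit real value 0xb.ffffffffffffep+0, and rounding it to the nearest double lands on exactly 12.0, i.e. 0xcp+0.]

```python
from fractions import Fraction

# Exact product of the stored double 1.2 and the integer 10:
exact = Fraction(1.2) * 10

# It equals 12 - 2^-51, i.e. the real value 0xb.ffffffffffffep+0
# printed for y in comment #2.
print(exact == 12 - Fraction(1, 2**51))   # True

# Rounding it to the nearest double gives exactly 12.0 (0xcp+0),
# matching y2 and y3.
print(float(exact) == 12.0)               # True
```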
Comment #3 by yebblies — 2015-08-25T12:22:37Z
This is working to spec. Please re-open as an enhancement request if you feel it's necessary.

The way it goes is that floating-point hardware does not give us the guarantees you want. E.g. a 32-bit multiply-and-add instruction might use 36-bit precision internally. Given this, we have two choices:

1. Avoid floating-point hardware
2. Allow intermediates to use higher precision than the operands

1 comes with an obvious performance penalty. 2 causes some confusion. DMD calculates everything at 80-bit precision at compile time, and the spec allows it. Identical floating-point calculations can also produce different results thanks to differing levels of inlining and exact register allocation.
Comment #4 by schveiguy — 2015-08-25T17:52:08Z
(In reply to yebblies from comment #3)
> This is working to spec. Please re-open as an enhancement request if you
> feel it's necessary.

I think the expectation is violated when there is a measurable difference between treating a value as a real and treating it as a double, and the compiler is doing things under the hood that cannot be detected, forcing the calculation to be done with reals. For instance:

----
double x = 1.2;
auto y = x * 10.0;
pragma(msg, typeof(y));        // double
pragma(msg, typeof(x * 10.0)); // double
writefln("%s %s", cast(ulong)y, cast(ulong)(x * 10.0)); // 12 11
----

Clearly, treating the result as a double should result in 12. But it doesn't. And there's no rhyme or reason why 'y' above should be any different from the result of the expression (x * 10.0). This violates the principle of refactoring calculations by assigning them to variables: I don't want the result of my code to change if I have to create a temporary using auto.

Two ways to fix this are: 1) treat the result of an expression typed as double as a double (perform the rounding), or 2) make typeof(double op double) be real.

I'm not going to reopen, because I don't have any personal interest in fighting for this. But I can definitely see where D has gone wrong here.
Comment #5 by yebblies — 2015-08-26T03:16:54Z
(In reply to Steven Schveighoffer from comment #4)
> (In reply to yebblies from comment #3)
> > This is working to spec. Please re-open as an enhancement request if you
> > feel it's necessary.
>
> I think the expectation is violated when there is a measurable difference
> between treating a value as a real and treating it as a double, and the
> compiler is doing things under the hood that cannot be detected that force
> the calculation to be done with reals. For instance:

As I said, if you want to use FPU hardware then the exact results will depend on exact instruction selection, which depends on inlining and register allocation. Rounding at every step is not possible in a performant way. So since we can't guarantee that two identical expressions get evaluated the same way at runtime, it doesn't make any sense to guarantee a maximum precision at compile time. E.g.:

----
double x = someDouble * someOtherDouble;
double y = x + something;
// this becomes y = mad(someDouble, someOtherDouble, something)

// ... some code that results in registers being spilled to the stack ...

double z = x + something;
// this doesn't get turned into mul+add, because the compiler
// decides it's better to just load x and add
----

Now y and z may have different values thanks to the increased internal precision of multiply-and-add.
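[Illustrative aside, not part of the original thread: the mul+add vs. mad() divergence described in comment #5 can be reproduced in any IEEE setting. The Python sketch below emulates a fused multiply-add with exact Fraction arithmetic; the values are a classic example chosen for illustration.]

```python
from fractions import Fraction

a = 1.0 + 2.0 ** -27     # exactly representable as a double
c = -(1.0 + 2.0 ** -26)  # exactly representable as a double

# Separate mul then add: the product a*a is rounded to a double
# (losing its low bits) before the addition, which then cancels to 0.0.
separate = a * a + c

# Fused multiply-add: the full-precision product feeds the addition,
# with a single rounding at the end (emulated here with Fraction).
fused = float(Fraction(a) * Fraction(a) + Fraction(c))

print(separate == 0.0, fused == 2.0 ** -54)  # True True
```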
Comment #6 by schveiguy — 2015-08-27T12:30:55Z
I understand that fp can vary based on the compiler's selection of instructions, and this is one of those instances. However, the type system doesn't reflect reality here. Even this prints 11:

----
writeln(cast(ulong)(cast(double)(x * 10.0)));
----

The cast to double is ignored because typeof(x * 10.0) *is* already double. But it's not actually a double in the generated code. That's where I see the issue: only when you actually assign it to a double does it become concretely a double.

The workaround (using to!ulong) may also cease to work at some point, because inlining could remove the store to a double.
Comment #7 by ag0aep6g — 2015-08-27T12:37:11Z
(In reply to Steven Schveighoffer from comment #6)
> I think that's where I see there is an issue. Only when you actually assign
> it to a double does it become concretely double.

With -O that doesn't cut it either.

----
void main()
{
    import std.stdio;
    double x = 1.2;
    double d = x * 10.0;
    writeln(cast(ulong) d);
}
----

`dmd test.d && ./test` -> 12
`dmd -O test.d && ./test` -> 11
Comment #8 by yebblies — 2015-08-27T16:53:52Z
There is an old open bug somewhere about adding a way to force a dump of excess precision.