Evolving from issue 5628:
---
import std.math;
import std.stdio;

void main()
{
    immutable real x = 46;
    immutable float xf = x;
    immutable double xd = x;
    immutable short neg2 = -2;
    immutable long neg8 = -8;
    writefln!"%.70f"(pow(xd, neg2));
    writefln!"%.70f"(cast(double) (1 / (x * x)));
    writefln!"%.70f"(pow(xf, neg8));
    writefln!"%.70f"(cast(float) (1 / ((x * x) * (x * x) * (x * x) * (x * x))));
    assert(pow(xd, neg2) == cast(double) (1 / (x * x))); // line 7902
    assert(pow(xf, neg8) == cast(float) (1 / ((x * x) * (x * x) * (x * x) * (x * x))));
}
---
I ran essentially this program in the testsuite and got:
0.0004725897920604915061933148923145608932827599346637725830078125000000
0.0004725897920604915061933148923145608932827599346637725830078125000000
0.0000000000000498812518796420273359260022516536992043256759643554687500
0.0000000000000498812518796420273359260022516536992043256759643554687500
****** FAIL release32 std.math
core.exception.AssertError@std/math.d(7902): unittest failure
This is the output on FreeBSD-32. On Win32 the result was essentially the same; all other platforms worked. If this program doesn't work (I have no direct access to either OS), go to std.math, look for the test containing "neg8", put the last six lines of the unittest there (and remove @safe etc. from the test), then let the testsuite run.
And I'm well aware that one should not compare floating point numbers with == due to rounding issues. But the output above shows that the numbers have identical bit patterns, and they are also of identical type. That should not fail even with floating point numbers.
Comment #1 by look.at.me.pee.please — 2019-12-15T19:16:01Z
Comparing floating point numbers with "==" is in itself an error. You shouldn't compare approximations of numbers to see if they are exactly the same.
Anyways this works:
import std.math;
import std.stdio;

extern(C) export void thethingy()
{
    immutable real x = 46;
    immutable float xf = x;
    immutable double xd = x;
    immutable short neg2 = -2;
    immutable long neg8 = -8;
    writefln!"%.70f"(pow(xd, neg2));
    writefln!"%.70f"(cast(double) (1 / (x * x)));
    assert(pow(xd, neg2) == cast(real) (1 / (x * x))); // cast(real) for same type
    // We can't just cast(double) here, because pow() returns its value on the FPU,
    // so the conversion would never happen otherwise.
    double a = pow(xd, neg2);
    assert(a == cast(double) (1 / (x * x)));
}
You are comparing different types: pow() returns real, and you were comparing it with a double.
The reason you get the same numbers when printed is that Windows doesn't actually support 80-bit floats; its 'long double' type is just an alias for double.
You guessed it: writefln() under the hood uses Windows' snprintf().
https://github.com/dlang/phobos/blob/v2.089.1/std/format.d#L2660
Which is incapable of printing "real" (aka long double, 80-bit floats).
Comment #2 by look.at.me.pee.please — 2019-12-15T19:39:44Z
As for why it works on 64-bit and on Linux: they don't use the FPU there, so essentially the pow() function is returning a double in an XMM register.
So if you run the code I posted above with -m64, it does the check using the FPU, which causes the assert to fail, because of course it is now comparing a double (from pow()) with a real.
The whole situation is really messed up, and it seems to flip-flop between double and real, because in new hardware nothing actually supports 80-bit floats; they're dead.
Really, real should mean a 128-bit float; that is the next logical step beyond 80-bit floats.
Comment #3 by bugzilla — 2019-12-15T20:13:54Z
(In reply to jacob from comment #1)
> Comparing floating point numbers with "==" is in itself an error.
I know. But the "==" was there first. I tried to fix the issue, and when I found out that the results were indeed correct and the (original) problem was that a double was compared with a real, and real simply produces better results, I added the cast. This worked for most machines...
> You shouldn't be comparing approximation of numbers to see if they are exactly
> the same.
I agree, but it's not an approximation in this case. (Although I have meanwhile solved the original issue by using approxEqual with a large constant, so it's actually comparing for identity.)
> assert(pow(xd, neg2) == cast(real) (1 / (x * x))); // cast(real) for
> same type.
That won't work on other computers.
> // can't just cast(double) here cause pow() returns value on the FPU
> // so the conversion never happens otherwise
> double a = pow(xd, neg2);
> assert(a == cast(double) (1 / (x * x)));
I don't understand that. xd is of type F=double, and the result of pow() is "Unqual!F", which is also double. FPU or not, the result should behave like a double. When casting another value to double and the results have equal bit patterns, they should be comparable with ==.
> You are comparing different types, pow() returns real
If it does, this is the bug.
> You guessed it writefln() under the hood uses Windows' snprintf().
But hopefully not for much longer - see my PR: https://github.com/dlang/phobos/pull/7264
(In reply to jacob from comment #2)
> Really real should mean 128-bit float, it is the next logical step over
> 80-bit floats.
I really, really would be happy if real were something fixed. real being something different on every other machine causes a lot of problems. I would even prefer it not to exist at all, rather than this chaos.
Comment #4 by look.at.me.pee.please — 2019-12-15T22:50:34Z
(In reply to berni44 from comment #3)
> (In reply to jacob from comment #1)
> > Comparing floating point numbers with "==" is in itself an error.
>
> I know. But the "==" was there first. I tried to fix the issue and when I
> found out, that the results where indeed the correct results and the
> (original) problem was because double was compared with real and real
> produces just better results, I added the cast. This worked for most
> machines...
>
> > You shouldn't be comparing approximation of numbers to see if they are exactly
> > the same.
>
> I agree, but it's not an approximation in this case. (Although I meanwhile
> solved the original stuff by using approxEqual with a large constant, so
> actually it's comparing for identity.)
It is an approximation. You are computing something using two different methods. The result at runtime also depends on hardware: change settings in SSE or the FPU and the same two inputs give different results. It's all an approximation if you are using floating point numbers. Even if math says otherwise, it's different when you have to do computations in the real world and can't use fractions to retain infinite precision.
> > // can't just cast(double) here cause pow() returns value on the FPU
> > // so the conversion never happens otherwise
> > double a = pow(xd, neg2);
> > assert(a == cast(double) (1 / (x * x)));
>
> I don't understand that. xd is of type F=double and the result of pow() is
> "Unqual!F" which is also double. FPU or not, the result should behave like a
> double. When casting another value to double and the results have equal bit
> patterns, they should be compareable with ==.
Non-working:
CALL __D3std4math__T3powTydTysZQlFNaNbNiNeydysZd
FLD qword ptr [DAT_004d2f88]
FUCOMPP
Working (cast(double)):
CALL __D3std4math__T3powTydTysZQlFNaNbNiNeydysZd
FSTP qword ptr [ESP + 0x8]
FLD qword ptr [ESP + 0x8]
FLD qword ptr [DAT_004d2f88]
FUCOMPP
Do you see the difference? It's a hardware problem, and it's the reason the FPU isn't used anymore. The D and C/C++ convention is to return floating point numbers on the FPU, which means the number stays in its 80-bit representation in the FPU. All the calculations were probably done on the FPU as well.
The reason the code I posted works is that we explicitly told it to store the value before comparing it. As you can see, it "FSTP"s (stores) the value into a qword (64 bits) before loading it and doing the comparison.
> > You are comparing different types, pow() returns real
>
> If it does, this is the bug.
It's not really a bug; it's how the FPU operates: all values on the FPU are stored as 80-bit floats. You get situations like these because of the difference in precision. SSE doesn't support 80-bit floats, and operations are done in the same precision they are given in. If you do 32-bit floating point operations, they stay in 32 bits (if that's how it was programmed). With the FPU, everything gets "upgraded" to 80 bits. You can't really avoid it unless you don't use the FPU, but I imagine the Windows/FreeBSD 32-bit ABI probably requires it. Though I guess it could be changed for D's ABI.
Comment #5 by bugzilla — 2019-12-16T06:29:14Z
(In reply to jacob from comment #4)
> It's not really a bug. It's how the FPU operatores, all values on the FPU
> are stored as 80-bit fp. You get situations like these cause of the
> difference in precision. The SSE doesn't support 80-bit floats, and
> operations are done in the same precision they are given in. If you do
> 32-bit floating point operations, they stay in 32-bit (if that's how it was
> programmed). With the FPU everything gets "upgraded" to 80-bits. You can't
> really avoid it unless you don't use it, but I imagine Windows/FreeBSD
> 32-bit ABI probably requires it. Though I guess it could be changed for D's
> ABI.
I understand. That is the viewpoint from the hardware perspective. But seen from the perspective of a programmer, it is a bug. The spec says that intermediate calculations may be done with higher precision (here 80-bit fp). But the moment a function returns a double, I expect it to be a double and not an 80-bit real. (Or would it be better to do an explicit cast on the last line of the pow function?)
So IMHO after
CALL __D3std4math__T3powTydTysZQlFNaNbNiNeydysZd
the compiler should insert some instructions to get rid of the extra precision of the real: either by directly manipulating the bit pattern of the real, or via the version you mentioned (FSTP / FLD), if that's easier to implement.
Comment #6 by look.at.me.pee.please — 2019-12-16T07:40:42Z
(In reply to berni44 from comment #5)
> (In reply to jacob from comment #4)
> > It's not really a bug. It's how the FPU operatores, all values on the FPU
> > are stored as 80-bit fp. You get situations like these cause of the
> > difference in precision. The SSE doesn't support 80-bit floats, and
> > operations are done in the same precision they are given in. If you do
> > 32-bit floating point operations, they stay in 32-bit (if that's how it was
> > programmed). With the FPU everything gets "upgraded" to 80-bits. You can't
> > really avoid it unless you don't use it, but I imagine Windows/FreeBSD
> > 32-bit ABI probably requires it. Though I guess it could be changed for D's
> > ABI.
>
> I understand. This is the viewpoint from hardware perspective. But seen from
> the perspective of a programmer it is a bug. The specs say, that
> intermediate calculations can be done with higher precision (here 80-bit
> fp). But the moment, a function is returning a double, I expect it to be a
> double and not an 80-bit real. (Or would it be better to do an explicit cast
> at the last line of the pow function?)
>
> So IMHO after
>
> CALL __D3std4math__T3powTydTysZQlFNaNbNiNeydysZd
>
> the compiler should fill in some instructions to get rid of the extra
> precision of the reals. Either by directly manipulating the bit pattern of
> the real or by the version you mentioned (FSTP / FLD), if that's easier to
> implement.
The spec says a lot of stuff; it is generally outdated or not followed at all.
In this regard I'd say it probably follows the C convention. Otherwise, what you are asking for is to enforce precision with the FPU, which just isn't going to happen. You could change the ABI, but it's not worth it for 32-bit Windows/FreeBSD; those probably aren't targeted very often. It would also be a breaking change.
Don't do exact comparisons (which you shouldn't be doing in the first place) and you probably won't notice this anyway.
Comment #7 by bugzilla — 2019-12-16T07:55:14Z
(In reply to jacob from comment #6)
> The Spec says a lot of stuff, it is generally outdated or not followed at
> all.
In that case, this should be a bug report for the specs... ;-)
Comment #8 by bugzilla — 2019-12-16T10:11:14Z
You sound like you consider this a WONTFIX. Feel free to change the status accordingly.
Comment #9 by dlang-bot — 2020-01-24T12:43:08Z
@berni44 updated dlang/phobos pull request #7363 "Fix Issue 20508 - std.math.pow(-infinity, y) does not return NaN for imaginary or complex results" mentioning this issue:
- Remove std.math from checking for public unittests. The reason for
this is the documentation for pow() with two floating point arguments,
where the public unittest does not work on Win32 and FreeBSD due to
issue 20451. So this test is commented out for these two OSs, but now
the style checker does not recognize anymore, that there is indeed a
public example.
https://github.com/dlang/phobos/pull/7363
Comment #10 by bugzilla — 2021-02-20T18:45:22Z
A simpler example, probably the same "bug":
unittest
{
    import std.stdio : writefln;

    double a = 0.00009999995;
    double b = 0.00000000005;
    writefln!"%.80f"(a);
    writefln!"%.80f"(0.0001 - b);
    assert(!(a < 0.0001 - b));
}
Comment #11 by dlang-bot — 2021-02-20T18:48:10Z
@berni44 updated dlang/phobos pull request #7804 "[WIP] std.format: format floats and doubles with %g / %G / %s" mentioning this issue:
- Add casts due to issue 20451.
https://github.com/dlang/phobos/pull/7804
Comment #12 by dlang-bot — 2021-03-08T11:00:13Z
@berni44 updated dlang/phobos pull request #7823 "Fix Issue 16200 - Faster pow implementation for integral exponents" mentioning this issue:
- Commented one unittest out, due to issue 20451.
https://github.com/dlang/phobos/pull/7823
Comment #13 by robert.schadek — 2024-12-13T19:06:27Z