Comment #0 by john.loughran.colvin — 2023-07-07T20:02:51Z
I am surprised to see the different behaviour from dmd below:
```
dev@dev:~$ cat testfloat.d
import std.stdio;
static import core.math;
static import core.stdc.math;

static ulong mantissa = 0x8000000000000001UL;

void main() {
    float a = core.math.ldexp(float(mantissa), -213);
    float b = core.stdc.math.ldexpf(float(mantissa), -213);
    writeln(*cast(uint*)&a);
    writeln(*cast(uint*)&b);
}
dev@dev:~$ dmd -run testfloat.d
1
0
dev@dev:~$ ldmd2 -run testfloat.d
0
0
dev@dev:~$ cat test.cpp
#include <cmath>
#include <iostream>

int main() {
    float p = ldexp(0x8000000000000001UL, -213);
    float q = ldexpf(0x8000000000000001UL, -213);
    std::cout << *((unsigned int*)&p) << std::endl;
    std::cout << *((unsigned int*)&q) << std::endl;
}
dev@dev:~$ g++ test.cpp -o test_cpp
dev@dev:~$ ./test_cpp
0
0
```
Comment #1 by ibuclaw — 2023-07-08T13:25:54Z
Arguably introduced by https://github.com/dlang/dmd/pull/7995
Corresponding druntime PR https://github.com/dlang/druntime/pull/2135
Since druntime/2135, there are now float and double overloads of all core.math intrinsics.
```
float ldexp(float n, int exp);
double ldexp(double n, int exp);
real ldexp(real n, int exp);
```
DMD, however, only has proper support for the x87 (real) version; the float and double declarations are no more than aliases. So the above declarations might as well instead be
```
float ldexp(real n, int exp);
double ldexp(real n, int exp);
real ldexp(real n, int exp);
```
This affects the call to core.math.ldexp, as the first parameter is implicitly cast to real, i.e.:
```
core.math.ldexp(cast(real)float(mantissa), -213);
```
As per the spec: https://dlang.org/spec/float.html#fp_intermediate_values
> For floating-point operations and expression intermediate values, a greater
> precision can be used than the type of the expression. Only the minimum
> precision is set by the types of the operands, not the maximum.
> Implementation Note: On Intel x86 machines, for example, it is expected
> (but not required) that the intermediate calculations be done to the full
> 80 bits of precision implemented by the hardware.
So the compiler is behaving as expected when it discards the float cast/construction, as `real` has the greater precision over `float(mantissa)`.
If you want to force a maximum precision (float) then you need to use core.math.toPrec instead.
```
float a = core.math.toPrec!float(core.math.ldexp(real(mantissa), -213));
```
If dmd still doesn't do the right thing, then that is a problem that needs addressing: the compiler has to perform a store and load every time a variable or intermediate value is passed to toPrec.
---
But that's skirting around the sides somewhat. Really, there are lots of intrinsic declarations in core.math that dmd does not implement, and there's no expectation that it should.
DMD's idea of an intrinsic is that it should result in a single instruction. `real ldexp()` becomes `fscale`, but there is no equivalent instruction for double or float on x86. Rather, the closest approximation is:
```
fscale; // ldexp()
... // Add 4-6 more instructions here to pop+push the result using a
... // narrower precision (i.e: it should be equivalent to toPrec!T).
```
Instead of these false intrinsics, they should just be regular inline functions, and all special recognition of them removed from the DMD code generator/builtins module.
```
real ldexp(real n, int exp); /* intrinsic */
/// ditto
pragma(inline, true)
float ldexp(float n, int exp)
{
    return toPrec!float(ldexp(cast(real)n, exp));
}
/// ditto
pragma(inline, true)
double ldexp(double n, int exp)
{
    return toPrec!double(ldexp(cast(real)n, exp));
}
```
GDC and LDC can continue to map the float and double overloads to their respective back-end builtins (i.e. a libc call), ignoring the function bodies.
---
This could also have been worked around if you used scalbn() instead. ;-)
Comment #2 by ibuclaw — 2023-07-08T17:49:41Z
By the way, it has occurred to me that DMD's behaviour is essentially no different from what I'd expect gdc or ldc to emit when compiling with `-ffast-math`.
Giving it a try, I indeed saw this behaviour in both gdc and g++.
```
$ gdc test.d -o test_d -ffast-math -mfpmath=387 -fno-builtin
$ ./test_d
1
0
$ g++ test.cpp -o test_cpp -ffast-math -mfpmath=387
$ ./test_cpp
1
0
```
As soon as I turn on any optimization levels though, the static "mantissa" is constant propagated and `0 0` is printed at run-time.
(Your C++ test isn't faithful to the D version; I adjusted it as follows.)
```
#include <cmath>
#include <cstdint>
#include <iostream>

static uint64_t mantissa = 0x8000000000000001UL;

int main() {
    float p = __builtin_ldexpf((float)mantissa, -213);
    float q = ldexpf((float)mantissa, -213);
    std::cout << *((unsigned int*)&p) << std::endl;
    std::cout << *((unsigned int*)&q) << std::endl;
}
```
Comment #3 by robert.schadek — 2024-12-07T13:42:52Z