Filing these as one bug report since they probably have the same underlying cause. (I can't tell for sure, since CustomFloat relies on std.bitmanip's bitfields, and debugging string mixins that I didn't write myself is incredibly hard.)
import std.stdio, std.numeric;
alias CustomFloat!(1, 13, 2) F;
void main() {
    F num = F(0.314);
    writeln(num.get!float);
}
Compile with -O -inline -release. Result: compile-time error:
numeric.d(206): Error: variable result used before set
Compile with default compiler settings. Result: compiles, then fails an assert at runtime:
[email protected](115): Assertion failure
Compile with -release to disable asserts, but without -O so the compiler doesn't notice the variable being used before set. Result: runs, but produces an incorrect result. Prints:
1.74431e-39
Comment #1 by dsimcha — 2009-11-16T20:15:34Z
Okay, I've gotten a good start on debugging this by printing the generated bitfield code with pragma and pasting it in, to temporarily get rid of the mixin. The assert failure is in setting the exponent:
void exponent(uint v)
{
    assert(v >= exponent_min);
    assert(v <= exponent_max); // FAILS
    signfractionexponent = cast(typeof(signfractionexponent))
        ((signfractionexponent & ~49152U) |
         ((cast(typeof(signfractionexponent)) v << 14U) & 49152U));
}
The problem appears to be overflow in the following code in opAssign:
exponent = cast(typeof(exponent_max))
(value.exponent + (bias - value.bias));
If the result is negative, it wraps around when cast to the unsigned exponent field. Instead, this should probably produce a denormal or something.
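For concreteness, here's the arithmetic for the CustomFloat!(1, 13, 2) repro above. This is a sketch I worked out myself, not code from std.numeric; the constants assume an IEEE single-precision source value:

import std.stdio;

void main()
{
    // CustomFloat!(1, 13, 2) has 2 exponent bits, so bias = (1u << 1) - 1 = 1.
    enum int bias = 1;

    // 0.314f ~= 1.256 * 2^-2, so its biased single-precision exponent
    // field is -2 + 127 = 125, and value.bias = 127.
    enum int valueExponent = 125;
    enum int valueBias     = 127;

    // The expression from opAssign: value.exponent + (bias - value.bias)
    int result = valueExponent + (bias - valueBias); // 125 + (1 - 127) = -1
    writeln(result);

    // Negative, so the cast into the unsigned 2-bit exponent field wraps
    // to a huge value and trips assert(v <= exponent_max).
    assert(result < 0);
}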
Comment #2 by dsimcha — 2009-11-16T20:20:16Z
Oh yeah, while we're on the subject of overflow in CustomFloat and I'm reading the code anyway, is this right?
// denormalized source value
static if (flags & Flags.allowDenorm)
{
    exponent = 0;
    fraction = cast(typeof(fraction_max)) value.fraction;
}
Or should it be (from the normal value branch):
static if (fractionBits >= value.fractionBits)
{
    fraction = cast(typeof(fraction_max))
        (value.fraction << (fractionBits - value.fractionBits));
}
else
{
    fraction = cast(typeof(fraction_max))
        (value.fraction >> (value.fractionBits - fractionBits));
}
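To illustrate why the raw copy in the allowDenorm branch looks suspect, here's a small sketch of my own (the fraction values are made up for illustration; 23 source fraction bits as in float, 13 target bits as in the repro above):

import std.stdio;

void main()
{
    enum uint fractionBits      = 13; // target, CustomFloat!(1, 13, 2)
    enum uint valueFractionBits = 23; // source, IEEE single precision

    uint valueFraction = 0x20_0000; // hypothetical denormal fraction, MSB set

    // The normal-value branch keeps the most significant source bits:
    uint shifted = valueFraction >> (valueFractionBits - fractionBits); // 0x800

    // The allowDenorm branch just casts, i.e. keeps only the low 13 bits:
    uint truncated = valueFraction & ((1u << fractionBits) - 1); // 0

    writeln(shifted, " ", truncated);
    assert(shifted != truncated); // the two branches disagree
}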
Comment #3 by sandford — 2010-04-08T09:30:55Z
There seems to be an additional problem with CustomFloat: in DMD 2.039 and 2.042 it doesn't compile. I've developed a hack/patch to work around this issue.
For example:
CustomFloat!(1, 5, 10) temp;
Error: this for signfractionexponent needs to be type CustomFloat not type CustomFloat!(1,5,10)
Error: struct std.numeric.CustomFloat!(1,5,10).CustomFloat member signfractionexponent is not accessible
Error: template instance std.numeric.CustomFloat!(1,5,10) error instantiating
I've been able to reduce the code to a test case:
struct CustomFloat(
    bool signBit,
    uint fractionBits,
    uint exponentBits,
    // uint bias = (1u << (exponentBits - 1)) - 1 // This is the problem
    CustomFloatFlags flags = CustomFloatFlags.all
)
{
    float foo;
    float bar() { return foo; }
    F get(F)() { return 1; }
}
However, when I try this as an independent test case, the bug isn't reproduced.
As a quick patch, I've moved bias from a template parameter into the struct body as an enum:
struct CustomFloat(
    bool signBit,      // allocate a sign bit? (true for float)
    uint fractionBits, // fraction bits (23 for float)
    uint exponentBits,
    // uint bias = (1u << (exponentBits - 1)) - 1
    CustomFloatFlags flags = CustomFloatFlags.all)
{
    enum bias = (1u << (exponentBits - 1)) - 1;
    ...
Comment #4 by sandford — 2010-04-08T10:49:06Z
The bit layout of CustomFloat is not IEEE-compliant, so it can't be used to represent the half, float, or double types. Here's the correct layout:
private mixin(bitfields!(
    uint, "fraction", fractionBits,
    uint, "exponent", exponentBits,
    bool, "sign",     signBit));
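A quick sanity check of my own (not part of the patch) that this fraction-lowest ordering matches IEEE 754 half precision, decoding the known bit pattern of 1.0 as a half (0x3C00):

import std.stdio;

void main()
{
    // IEEE 754 half precision: 1 sign bit (MSB), 5 exponent bits,
    // 10 fraction bits (LSBs). 1.0 is the bit pattern 0x3C00.
    ushort bits = 0x3C00;

    uint fraction =  bits        & 0x3FF; // low 10 bits
    uint exponent = (bits >> 10) & 0x1F;  // next 5 bits
    bool sign     = (bits >> 15) != 0;    // top bit

    // Decodes as (-1)^sign * 2^(exponent - 15) * (1 + fraction/2^10) = 1.0,
    // i.e. sign = 0, exponent = 15 (the half-precision bias), fraction = 0.
    assert(!sign && exponent == 15 && fraction == 0);
    writeln(sign, " ", exponent, " ", fraction);
}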
Comment #5 by sandford — 2010-05-08T10:20:39Z
Another issue: float.infinity isn't converted properly.