Comment #0 by bearophile_hugs — 2011-12-13T01:34:31Z
This is part of std.numeric.gcd (DMD 2.057beta). It doesn't work with BigInts because they (correctly) don't define a "min" and (because of bug 4120) can't yet be used in a boolean context:
static if (T.min < 0) {
    enforce(a >= 0 && b >= 0);
}
while (b) {
Unfortunately std.traits.isSigned works with built-in types only, and std.BigInt is not regarded as a first-class citizen, so I have had to define an isSignedNumber (maybe there are ways to improve its code). Here is an improved gcd() that seems to work with BigInts too:
import std.traits: Unqual, isSigned;
import std.exception: enforce;
import std.bigint: BigInt; // needed by the unittest below

template isSignedNumber(T) {
    enum bool isSignedNumber = isSigned!T ||
        (__traits(compiles, {return T.init - 1 < 0;}) &&
         (T.init - 1) < 0);
}

/**
Computes the greatest common divisor of $(D a) and $(D b) by using
Euler's algorithm.
*/
T gcd(T)(T a, T b) {
    static if (is(T == const) || is(T == immutable)) {
        return gcd!(Unqual!T)(a, b);
    } else {
        static if (isSignedNumber!T) {
            enforce(a >= 0 && b >= 0);
        }
        while (b != 0) { // BUG 4120
            auto t = b;
            b = a % b;
            a = t;
        }
        return a;
    }
}

unittest {
    assert(gcd(2 * 5 * 7 * 7, 5 * 7 * 11) == 5 * 7);
    immutable int a = 5 * 13 * 23 * 23,
                  b = 13 * 59;
    assert(gcd(a, b) == 13);
    assert(gcd(BigInt("334158684640375"), BigInt("18505861418625")) ==
           BigInt("150454157875"));
}
See also bug 4125
Comment #1 by clugdbug — 2011-12-13T08:34:07Z
There are a few interesting issues here:
Firstly, GCD for bigints is an important primitive operation, but it's very complicated (the sub-quadratic algorithms are highly non-trivial). It's completely different to algorithms used for built-in types (where the cost of arithmetic is independent of the magnitude of the integer). So it can't sensibly be made generic.
Secondly, Euclid's algorithm is always inferior to the binary GCD algorithm, whenever the latter applies. The generic version should be using binary GCD.
Finally, it's Euclid's algorithm, not Euler's!
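[Editor's note: for readers unfamiliar with it, here is a minimal sketch of the binary (Stein) GCD for unsigned built-in integers. It assumes core.bitop.bsf's ulong overload for trailing-zero counts and is only an illustration, not the std.numeric implementation or the bigint variant Don describes.]

import core.bitop : bsf; // index of the lowest set bit; undefined for 0

ulong binaryGcd(ulong a, ulong b)
{
    if (a == 0) return b;
    if (b == 0) return a;

    // Factor out the power of two common to both operands.
    immutable shift = bsf(a | b);
    a >>= bsf(a); // make a odd

    do
    {
        b >>= bsf(b); // make b odd
        if (a > b)
        {
            immutable t = a; // ensure a <= b before subtracting
            a = b;
            b = t;
        }
        b -= a; // odd - odd is even, so the next bsf(b) makes progress
    } while (b != 0);

    return a << shift; // restore the common power of two
}

unittest
{
    assert(binaryGcd(2 * 5 * 7 * 7, 5 * 7 * 11) == 5 * 7);
    assert(binaryGcd(0, 12) == 12 && binaryGcd(12, 0) == 12);
}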
Comment #2 by bearophile_hugs — 2011-12-13T10:02:29Z
(In reply to comment #1)
> There are a few interesting issues here:
>
> Firstly, GCD for bigints is an important primitive operation, but it's very
> complicated (the sub-quadratic algorithms are highly non-trivial). It's
> completely different to algorithms used for built-in types (where the cost of
> arithmetic is independent of the magnitude of the integer). So it can't
> sensibly be made generic.
>
> Secondly, Euclid's algorithm is always inferior to the binary GCD algorithm,
> whenever the latter applies. The generic version should be using binary GCD.
>
> Finally, it's Euclid's algorithm, not Euler's!
So this code is useless... Thank you for your comments, Don.
Comment #3 by bearophile_hugs — 2012-04-06T05:11:00Z
I'd say any polylogarithmic GCD algorithm would be better than none. Euclid (current) and Stein (binary) versions would be fine.
From a user's perspective: when I need a casual GCD for BigInts (I just did) and I find std.numeric throwing weird complaints at me, I can, realistically:
* take the std.numeric version and hack the complaint away, or
* write my own version of GCD.
Each of these is error-prone. So, is the current situation really better than having a slow GCD right there in the library? If people then complain about speed, it can be improved; but a specialization error like the current one, seriously? "I have it, I know BigInt could be used with it, but I won't let you use it because - of all things - BigInt doesn't have a .min", that's just unfriendly. Having a base version that works seems a step forward to me.
Ivan Kazmenko.
Comment #5 by hsteoh — 2017-04-18T18:44:35Z
To make matters worse, std.numeric.gcd has no sig constraints, so once you import std.numeric, you cannot even define a BigInt specialization of gcd in your own code without causing a conflict in the overload set, unless you do a static import or use one of the other ways of working around the symbol conflict.
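[Editor's note: a minimal sketch of the static-import workaround hsteoh mentions. The local BigInt gcd overload below is hypothetical user code, not part of Phobos, and assumes nonnegative operands.]

static import std.numeric;   // std.numeric.gcd is reachable only fully qualified
import std.bigint : BigInt;

// Plain Euclidean gcd for BigInt, defined locally without clashing
// with std.numeric.gcd (assumes a >= 0 and b >= 0).
BigInt gcd(BigInt a, BigInt b)
{
    while (b != 0)
    {
        auto t = b;
        b = a % b;
        a = t;
    }
    return a;
}

unittest
{
    assert(gcd(BigInt("334158684640375"), BigInt("18505861418625")) ==
           BigInt("150454157875"));
    assert(std.numeric.gcd(24, 36) == 12); // built-in version still reachable
}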