Bug 14802 – Template argument deduction depends on order of arguments
Status
RESOLVED
Resolution
FIXED
Severity
normal
Priority
P1
Component
dmd
Product
D
Version
D2
Platform
All
OS
All
Creation time
2015-07-15T22:32:00Z
Last change time
2017-08-02T08:07:33Z
Keywords
pull, wrong-code
Assigned to
nobody
Creator
timokhin.iv
Comments
Comment #0 by timokhin.iv — 2015-07-15T22:32:11Z
Template argument deduction from several identical parameters may depend on the order of the arguments.
test.d
----
void f(T)(T x, T y)
{
    pragma(msg, T);
}

void main()
{
    f(1.0, 1.0f);
    f(1.0f, 1.0);
}
----
$ dmd test.d
float
double
Comment #1 by schveiguy — 2015-07-16T12:47:22Z
Template deduction depends on "common type" deduction.
For two types that implicitly convert to each other, it has to make an arbitrary decision. The same thing happens with 1, 1U.
However, when one type cannot be implicitly converted to the other, the decision is to use the type that both can be converted to. For example, 1, 1L yields the same type as 1L, 1.
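A minimal sketch of both cases (the function g is mine, not from the report; behavior assumed to follow the descriptions above):
----
void g(T)(T x, T y) { pragma(msg, T); }

void main()
{
    g(1, 1U);  // int and uint implicitly convert to each other,
               // so the deduced T may depend on argument order
    g(1, 1L);  // per comment #1, 1/1L and 1L/1 deduce the same T
    g(1L, 1);  // (the type both arguments can convert to)
}
----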
Comment #2 by timokhin.iv — 2015-07-16T14:22:33Z
(In reply to Steven Schveighoffer from comment #1)
> template deduction depends on "common type" deduction.
>
> For two types that implicitly cast to each other, it has to make an
> arbitrary decision. Same thing happens with 1, 1U.
>
> However, when it cannot implicitly convert one to the other, the decision is
> to use the one that both can accept. For example: 1, 1L yields the same type
> as 1L, 1.
Well, if this is expected and desired, then all right, but this is certainly weird.
For one, I don't think the compiler should make arbitrary decisions on the user's behalf. It might just as well pick an arbitrary function from an overload set when there are several equally good candidates, but it doesn't. Why does it do so for types?
Furthermore, in both cases there seems to be one option that is better than the other: for float and double the common type should definitely be double, as the more precise type; for int and uint, probably uint, because int is promoted to uint in arithmetic expressions, but not the other way around.
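A tiny check of the promotion claim above (a sketch; the static asserts are mine, assuming current dmd behavior):
----
// int is promoted to uint in arithmetic, float to double
static assert(is(typeof(1 + 1u) == uint));
static assert(is(typeof(1.0f + 1.0) == double));
----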
Comment #3 by k.hara.pg — 2015-07-16T14:36:56Z
Since 2.066, IFTI has been improved to support the following case.
void foo(T)(T, T[] a) { pragma(msg, T); }

void main()
{
    short[] a;
    foo(1, a); // prints 'short'
}
See also: https://issues.dlang.org/show_bug.cgi?id=12290
I think this issue falls under that implemented enhancement.
In any case, the order-dependent behavior is a bug in D; the result should be consistent.
Comment #4 by schveiguy — 2015-07-16T14:49:33Z
In the case of an integer literal plus a short array, this is a different story, because:
foo!int(1, a) and foo!short(1, a) do not both compile.
However, for double/float:
f!double(1.0, 1.0f) and f!float(1.0, 1.0f) both compile.
The decision is necessarily arbitrary because both compile. Making the decision "consistent" is not necessarily a bug fix; I can see how the compiler is free to arbitrarily decide which type to use.
However, I'll note that in other cases, the compiler prefers one over the other:
// both double[]
auto arr = [1.0, 1.0f];
auto arr2 = [1.0f, 1.0];
// both double
auto x = true ? 1.0 : 1.0f;
auto x2 = true ? 1.0f : 1.0;
The same holds for uint vs. int.
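A self-contained version of the snippets above (the pragma(msg) checks are mine), which I would expect to print double[], double[], double, double:
----
void main()
{
    // both double[]
    auto arr  = [1.0, 1.0f];
    auto arr2 = [1.0f, 1.0];
    pragma(msg, typeof(arr));
    pragma(msg, typeof(arr2));

    // both double
    auto x  = true ? 1.0 : 1.0f;
    auto x2 = true ? 1.0f : 1.0;
    pragma(msg, typeof(x));
    pragma(msg, typeof(x2));
}
----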
Comment #5 by timokhin.iv — 2015-07-16T15:18:17Z
(In reply to Steven Schveighoffer from comment #4)
> ...
>
> However, for double/float:
>
> f!double(1.0, 1.0f) and f!float(1.0, 1.0f) both compile.
>
> Decision is necessarily arbitrary because both can compile. To make the
> decision "consistent" is not necessarily a bug fix, I can see how the
> compiler is free to arbitrarily decide which type to use.
>
> ...
Or it can reject the call as ambiguous (or prefer double, as it does in other cases). As a matter of fact, that's what it does if f(float, float) and f(double, double) are defined explicitly instead of being generated by template instantiation.
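A sketch of the non-template overload case mentioned above (as I read the comment, this call is rejected as ambiguous):
----
void f(float x, float y)   {}
void f(double x, double y) {}

void main()
{
    f(1.0, 1.0f);  // reportedly an error: matches both overloads
}
----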
Comment #6 by k.hara.pg — 2015-07-17T05:11:14Z
(In reply to Steven Schveighoffer from comment #4)
> However, I'll note that in other cases, the compiler prefers one over the
> other:
>
> // both double[]
> auto arr = [1.0, 1.0f];
> auto arr2 = [1.0f, 1.0];
>
> // both double
> auto x = true ? 1.0 : 1.0f;
> auto x2 = true ? 1.0f : 1.0;
>
> same for uint vs. int
Yes, that's what I have in mind. The deduction of type T in:
void f(T)(T x, T y) {}
void main() { f(1.0, 1.0f); }
should be equivalent to:
alias T = typeof(true ? 1.0 : 1.0f);
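A sketch of how the proposed equivalence could be expressed (Common and the static assert are mine, not part of the report):
----
alias Common = typeof(true ? 1.0 : 1.0f);  // double
static assert(is(Common == double));

void f(T)(T x, T y)
{
    pragma(msg, T);  // under the proposed rule, both calls below deduce T == Common
}

void main()
{
    f(1.0, 1.0f);
    f(1.0f, 1.0);
}
----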