import std.conv, std.stdio;
void foo(uint ua) { writefln("foo(%s)", ua); }
void bar(int ia) { writefln("bar(%s)", ia); }
void main(string[] args)
{
    int sa = to!int(args[1]);
    uint ua = sa;                // implicit int -> uint conversion compiles silently
    foo(ua); bar(ua);            // uint argument accepted by both the uint and the int parameter
    foo(sa); bar(sa);            // int argument accepted by both; foo(sa) reinterprets negative values
    foo(int.min); bar(uint.max); // prints foo(2147483648) and bar(-1)
}
http://dpaste.dzfl.pl/04bbf332f26b
This behavior is explicitly documented at http://dlang.org/type.html under "Usual Arithmetic Conversions".
The page argues that those conversions are OK because the representation of signed and unsigned ints/longs is the same.
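For illustration, here is a minimal sketch of what the usual arithmetic conversions do in a mixed signed/unsigned expression (variable names are made up):
import std.stdio : writeln;
void main()
{
    int  si = -2;
    uint ui = 1;
    // si is converted to uint before the addition, so the result
    // wraps around instead of being -1.
    writeln(ui + si); // prints 4294967295
}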
Integer values cannot be implicitly converted to another type that cannot represent the integer bit pattern after integral promotion. For example:
ubyte u1 = cast(byte)-1; // error, -1 cannot be represented in a ubyte
ushort u2 = cast(short)-1; // error, -1 cannot be represented in a ushort
uint u3 = cast(int)-1; // ok, -1 can be represented in a uint
ulong u4 = cast(long)-1; // ok, -1 can be represented in a ulong
The representation is not a very compelling argument for allowing those conversions to be implicit, because signed/unsigned problems occur when interpreting the value.
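A small example of such an interpretation problem (again just a sketch, the variables are illustrative): comparing a negative int with a uint silently converts the int first, so the comparison gives the wrong answer.
import std.stdio : writeln;
void main()
{
    int  si = -1;
    uint ui = 1;
    // si is converted to uint (4294967295) before the comparison.
    writeln(si > ui); // prints true
}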
Problems with the signed/unsigned integer promotion rules for binary operators have been discussed in bug 259, and there is a sane proposal for safe conversions in bug 239 comment 39. Time to fix this.
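Until something along those lines lands, the conversion can at least be made explicit and range-checked with std.conv.to, which throws ConvOverflowException for values that do not fit. This is only a library-level workaround sketch, not the proposal from bug 239 comment 39:
import std.conv : to, ConvOverflowException;
import std.stdio : writeln;
void main()
{
    int sa = -1;
    // to!uint checks the value at run time instead of reinterpreting the bits.
    try
    {
        uint ua = to!uint(sa);
        writeln(ua);
    }
    catch (ConvOverflowException e)
    {
        writeln("negative value rejected: ", e.msg);
    }
}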
Comment #2 by john.michael.hall — 2022-05-24T21:12:52Z
The signed-to-unsigned conversion also occurs before function preconditions are run, so an in contract on a uint parameter cannot detect a negative argument.
import std.stdio: writeln;
void foo(uint x)
in (x >= 0) // always true: x is already a uint by the time the precondition runs
{
    writeln(x);
}
void main() {
    int x = -1;
    foo(x); // prints 4294967295
}
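One possible workaround sketch (fooChecked is a hypothetical name, not an existing API): take the argument in its original type and check it before converting, so the check actually sees the negative value.
import std.stdio : writeln;
import std.traits : isIntegral;
// The parameter keeps the caller's type, so the contract runs on the
// signed value before any conversion to uint happens.
void fooChecked(T)(T x) if (isIntegral!T)
in (x >= 0)
{
    writeln(cast(uint) x);
}
void main()
{
    int x = -1;
    fooChecked(x); // precondition fails at run time (in non-release builds)
}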
Comment #3 by robert.schadek — 2024-12-13T18:21:33Z