This code compiles & runs:
void main()
{
    uint u = ~0;
    dchar d = u; // implicit conversion
    assert(d > d.max);
}
The compiler should use value range propagation (VRP) to determine whether the value's possible range fits within dchar's range, and reject the implicit conversion when it does not.
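For comparison, VRP already governs implicit narrowing to the smaller integer types. A minimal sketch, assuming the same rule were extended to dchar:

void main()
{
    uint u = ~0;

    // Existing behavior: VRP rejects narrowing when the possible
    // range (0 .. uint.max) does not fit the target type...
    // ubyte b1 = u;        // Error: cannot implicitly convert
    // ...but accepts it once masking narrows the range:
    ubyte b2 = u & 0xFF;    // ok: VRP proves the range is 0 .. 255

    // Under the requested rule, dchar would behave the same way:
    // dchar d1 = u;        // would become an error
    dchar d2 = u & 0xFFFF;  // range 0 .. 0xFFFF fits in 0 .. 0x10FFFF
}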
Comment #1 by lio+bugzilla — 2015-05-28T20:06:59Z
There are two solutions to this:
1. Make dchar.max == uint.max
2. Make `dchar = uint` use range checks to determine whether the implicit conversion is safe
ad 1. This changes dchar.max, but there doesn't appear to be much code depending on it. In fact, Phobos contains a couple of `if (d > 0x10FFFF)` checks which don't make sense under the current definition of dchar.max.
ad 2. Generate a warning/deprecation when the value's range falls outside 0 .. 0x10FFFF. This breaks some code that assembles a dchar from wchar/char values, but such code can be fixed by adding explicit casts.
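For illustration, a hypothetical example of the kind of code that assembles a dchar this way (a hand-rolled UTF-16 surrogate-pair decode, not from any real codebase) and the cast that would silence the proposed warning:

void main()
{
    // Surrogate pair encoding U+1F600 in UTF-16:
    wchar hi = 0xD83D;
    wchar lo = 0xDE00;

    // The computed int's VRP range exceeds 0 .. 0x10FFFF, so under
    // solution 2 this assignment would start to warn/deprecate:
    dchar d = ((hi - 0xD800) << 10) + (lo - 0xDC00) + 0x10000;

    // The fix: an explicit cast acknowledging the narrowing.
    dchar d2 = cast(dchar)(((hi - 0xD800) << 10) + (lo - 0xDC00) + 0x10000);

    assert(d == 0x1F600 && d2 == 0x1F600);
}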
Comment #2 by thomas.bockman — 2015-10-24T16:13:54Z
The issue of dchar.max versus uint.max has been discussed on the forums:
http://forum.dlang.org/thread/[email protected]
It was decided that D compilers must support dchar values outside the range of valid Unicode code points; otherwise, handling encoding errors and the like becomes very awkward. Besides, enforcing a range smaller than what dchar's binary format can actually represent would be very difficult to do in a performant way.
In light of this, I think #1 (make dchar.max == uint.max) is the correct solution. Allowing dchar.max to be less than the type's true maximum has probably caused subtle bugs in some generic code.
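A sketch of the kind of generic code that can go subtly wrong today, using a hypothetical helper:

// Generic check that assumes T.max bounds every value a T can hold
// (hypothetical, for illustration only):
bool withinMax(T)(T value)
{
    return value <= T.max;
}

void main()
{
    // A dchar can physically hold values past dchar.max, e.g. when
    // propagated from corrupt input:
    dchar d = cast(dchar)0x110000;

    // True for every other integral type; false here, because
    // dchar.max (0x10FFFF) understates what the type can represent:
    assert(!withinMax(d));
}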
The value of the maximum code point (0x10FFFF) should be moved to a separate constant.
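A sketch of what that might look like; the constant name here is purely illustrative, and Phobos already exposes the validity test as std.utf.isValidDchar:

// Hypothetical: with dchar.max == uint.max, the Unicode ceiling
// would live in its own named constant instead.
enum dchar lastCodePoint = 0x10FFFF;

// Equivalent in spirit to Phobos's existing std.utf.isValidDchar:
bool isValidCodePoint(dchar c) pure nothrow @nogc @safe
{
    // Valid code points exclude the UTF-16 surrogate range.
    return c <= lastCodePoint && !(c >= 0xD800 && c <= 0xDFFF);
}

void main()
{
    assert(isValidCodePoint('A'));
    assert(isValidCodePoint(lastCodePoint));
    assert(!isValidCodePoint(cast(dchar)0xD800));   // surrogate
    assert(!isValidCodePoint(cast(dchar)0x110000)); // past the ceiling
}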
Comment #3 by robert.schadek — 2024-12-13T18:43:03Z