The following D code works fine:
enum X = "X";
import core.stdc.stdio;
void main(){
    puts(X);
}
However, if that enum X is the result of a #define in a C file, you get an error:
// x.c
#define X "X"
// D code
import x;
import core.stdc.stdio;
void main(){
    puts(X); // Error
}
// Error: function `core.stdc.stdio.puts(scope const(char*) s)` is not callable using argument types `(char[2])`
// cannot pass argument `"X"` of type `char[2]` to parameter `scope const(char*) s`
Comment #1 by bugzilla — 2023-10-29T08:14:35Z
That is probably because the C literal becomes a char[2] instead of a const(char)[2]. I'll look into that.
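To make the mismatch concrete, here is a minimal sketch assuming X ends up typed as char[2] (the function and variable names are illustrative, not from the report):

import core.stdc.stdio;

void demo()
{
    // A D string literal is null-terminated and implicitly converts
    // to const(char)*, so this compiles:
    puts("X");

    // A fixed-size char array does not implicitly convert to a pointer:
    char[2] c = ['X', '\0'];
    // puts(c);   // fails with the same error as in the report
    puts(c.ptr);  // explicit .ptr is required
}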
Comment #2 by bugzilla — 2023-11-19T08:01:11Z
The problem is at expressionsem.d(4188):
if (sc && sc.flags & SCOPE.Cfile)
    e.type = Type.tchar.sarrayOf(e.len + 1);
else
    e.type = Type.tchar.immutableOf().arrayOf();
I'm not sure what the solution is. I'm not sure this should even be fixed; after all, C semantics are different.
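To spell out the difference that branch encodes, a small sketch (the static assert reflects ordinary D typing):

// In an ordinary D module, a string literal is an immutable slice:
static assert(is(typeof("X") == string));

// In an ImportC module, the Cfile branch above instead types the same
// literal as char[2]: the character plus the terminating '\0'. That is
// where the char[2] in the error message comes from.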
Although, as a workaround,
puts(X.ptr);
will work.
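Applied to the reproduction at the top, a sketch of that workaround:

// D code, with the same x.c as above
import x;
import core.stdc.stdio;

void main()
{
    puts(X.ptr); // X is char[2] here, so .ptr yields a pointer puts accepts
}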
Comment #3 by dave287091 — 2023-11-19T13:38:33Z
One of the issues with the current state is that if you generate a .di file from the C file, you get a D enum with the expected behavior, but if you directly import the C file, you trigger the reported problem.
As these are enums collected from macros, they don't need to follow C semantics. The C code has already been preprocessor expanded and so does not use these defines directly. The collected enums are just for the convenience of the D user and can follow D semantics.
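A sketch of what the .di route gives you for the macro above (hypothetical file contents; the exact generated output is not shown in this report):

// x.di (hypothetical, generated from x.c)
enum X = "X"; // a D manifest constant typed as string, so puts(X) compiles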
I ran into this issue when using importC with SDL. Their docs and examples show code like:
SDL_bool ok = SDL_SetHint(SDL_HINT_NO_SIGNAL_HANDLERS, "1");
where SDL_HINT_NO_SIGNAL_HANDLERS is a #define. I was able to work around it with the .ptr workaround, but it seems like the more such examples “just work”, the better.
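For reference, the workaround applied to that SDL call looks like this (a sketch, assuming the SDL header is pulled in via ImportC):

SDL_bool ok = SDL_SetHint(SDL_HINT_NO_SIGNAL_HANDLERS.ptr, "1");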
Comment #4 by dave287091 — 2023-11-19T13:42:10Z
So the problem is that the string literal is being treated with C semantics in a D file, when you want it treated with D semantics as it is the result of a preprocessor-created enum.
Comment #5 by bugzilla — 2023-11-20T01:59:39Z
A .di file has D semantics, even if it was generated from a .c file. Hence, it will always be imperfect, subject to the impedance mismatches between D and C.
If the C semantics of a string literal were changed to match D, then they would no longer work for C files.
If a special case were added to the D implicit conversion rules so that a char[2] implicitly converted to const(char)*, then who knows what D code would break that relies on overloading differences between them.
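A sketch of the kind of overload-sensitive D code that relies on char[2] and const(char)* staying distinct (names are hypothetical):

void f(const(char)* p) { /* C-string path */ }
void f(ref char[2] a)  { /* fixed-size buffer path */ }

void g()
{
    char[2] buf = ['h', 'i'];
    f(buf); // today only the char[2] overload matches; adding an implicit
            // conversion would give this call a second candidate
}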
I cannot think of a solution that resolves this.
Comment #6 by bugzilla — 2023-11-20T02:01:07Z
> it seems like the more that examples “just work” the better.
I completely agree with that, but we also can't break things.
Comment #7 by dave287091 — 2023-11-20T05:29:13Z
I think I was a little unclear with my previous comments. What I meant is that these enums are inserted by the compiler into the C code, but they are really D enums that happen to come from a C file. They should follow D rules - be an immutable(char)[2] or whatever, not a char[2] - because they are not C enums; they are D enums that just happen to live in a module resulting from a C file. The C code can't access them; only D code can. The check `sc.flags & SCOPE.Cfile` should actually be false.
It might not be worth the effort to fix this, I don't know. It is a minor inconvenience to need to write .ptr.
Comment #8 by robert.schadek — 2024-12-13T19:30:54Z