Huh, I thought it was stated explicitly. I'll see if I can strengthen the wording here.
Yes, it's possible to do it correctly, but constructing syntax nodes programmatically is a rather painful process. String interpolation is so much better.
Sure, will do.
I agree, but I also have no idea how to do this well :(.
Yes, it will be extended in the future to provide more of this information. At a minimum, I'd like to have enough information to evaluate `#if` checks appropriately, because that's the kind of compilation-target checking you could do if you had a function that's built for the target.
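As a rough sketch of what that could look like, assuming the compiler hands the macro implementation a description of the target (all names here are hypothetical illustrations, not a proposed API):

```swift
// Hypothetical target description a macro implementation might receive.
// These names are assumptions for illustration, not part of any proposal.
struct TargetInfo {
    let os: String            // e.g. "macOS", "Linux"
    let arch: String          // e.g. "arm64", "x86_64"
    let intBitWidth: Int      // the target's Int width, e.g. 32 or 64
}

// Evaluate a few simple `#if`-style conditions against the *target*,
// not against the build machine the macro implementation runs on.
func evaluate(_ condition: String, for target: TargetInfo) -> Bool {
    switch condition {
    case "os(macOS)":   return target.os == "macOS"
    case "os(Linux)":   return target.os == "Linux"
    case "arch(arm64)": return target.arch == "arm64"
    default:            return false
    }
}

let target = TargetInfo(os: "Linux", arch: "arm64", intBitWidth: 64)
evaluate("os(Linux)", for: target)   // true
evaluate("os(macOS)", for: target)   // false
```

The point is simply that the macro can't ask the machine it runs on; it needs the target information handed to it.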
Doug
It returns `Int`, which will be 32 bits. Now, the actual implementation of the macro (the one that transforms syntax trees) will be built with a 64-bit `Int` for the build machine. If you try to do math on the integer literals (say, you want to implement your own kind of constant folding), you would need to be very careful to do so using the target's bit width.
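To make the hazard concrete, here is a hedged sketch of folding an integer-literal addition at the *target's* bit width, even though the folding code itself runs with a 64-bit host `Int`. The helper and its name are illustrative, not proposed API, and it uses wrapping semantics for simplicity (real `+` on the target would trap on overflow, so a real folder would diagnose instead):

```swift
// Fold `a + b` as the target would: do the addition with wrapping host
// arithmetic, then sign-extend from the target's bit width so a 32-bit
// target sees 32-bit results even though the macro implementation
// itself is built with a 64-bit Int.
func foldAdd(_ a: Int64, _ b: Int64, targetBitWidth: Int) -> Int64 {
    let sum = a &+ b                 // wrapping add on the host
    let shift = 64 - targetBitWidth
    return (sum << shift) >> shift   // arithmetic shift sign-extends
}

// On a 32-bit target, Int32.max + 1 wraps around to Int32.min:
foldAdd(0x7FFF_FFFF, 1, targetBitWidth: 32)  // -2147483648
// Naive host arithmetic would instead produce 2147483648.
```

Forgetting the `targetBitWidth` step is exactly the kind of subtle host/target mismatch being warned about here.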
No, we do not have a fallback interpreter. Our options in that case would be to build or load the macro implementation into the compiler (e.g., via the plugin mechanism we've been using for our prototypes) or pre-expand all of the macro uses in the code base.
FWIW, the need for type information of subexpressions was called out in the power assertions post.
This is very much the direction we'd like to go.
I can give a couple of examples where it would be useful to know what declarations are being referenced within an expression:
- You might want to transform `+` operations that refer to a specific implementation of `+` without affecting normal math.
- You might want to implement something like `#selector` or `#keyPath` as a macro, where you need to know something about the declarations referenced along the way.
- You might want to distinguish between references to local variables and references to global declarations, because you want to alter how local variable references are captured in a closure you generate.
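For the third example, here is a sketch of how that declaration information might be consumed, assuming a hypothetical classification of referenced declarations (these types are illustrative, not a real macro API):

```swift
// Hypothetical classification of the declarations an expression references.
// These types are illustrative assumptions, not part of any macro API.
enum ReferencedDecl: Equatable {
    case localVariable(name: String)
    case globalDeclaration(name: String)
}

// A macro generating a closure might capture local variables explicitly
// in a capture list, while global declarations need no capture at all.
func explicitCaptures(for refs: [ReferencedDecl]) -> [String] {
    refs.compactMap { ref in
        if case .localVariable(let name) = ref { return name }
        return nil   // globals need no capture-list entry
    }
}

let refs: [ReferencedDecl] = [
    .localVariable(name: "x"),
    .globalDeclaration(name: "print"),
    .localVariable(name: "count"),
]
explicitCaptures(for: refs)  // ["x", "count"]
```

Without knowing which references are local, the macro couldn't make this distinction at all.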
Doug