It's always been a goal of Swift to be a good language to write great APIs in. I see macros primarily as a power tool for API development, and they serve that role in two main ways:
- allowing API authors to better bridge the gap between generality (which often requires more abstraction, which often in turn adds circumlocution on the client side) and convenience; and
- allowing API authors to check preconditions that, for whatever reason, go beyond what can be expressed in the type system.
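To make the second point concrete, here is a minimal sketch of a hypothetical freestanding expression macro, `#validatedURL` (the name and the implementing module are invented; nothing like this ships with Swift today), whose implementation could reject malformed input at expansion time:

```swift
import Foundation

// Hypothetical declaration; no such macro ships with Swift today.
@freestanding(expression)
macro validatedURL(_ string: StaticString) -> URL = #externalMacro(
    module: "ExampleMacrosImpl", type: "ValidatedURLMacro")

// At the use site it reads like a literal, but the macro implementation could
// parse the string during expansion and emit a compile-time diagnostic for
// malformed input, which the type system alone cannot express.
let docs = #validatedURL("https://swift.org/documentation/")
```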
With that in mind, I think there are a lot of interesting potential interactions between macros and constant evaluation; but I do think we need to be a little more explicit about what we mean by constant evaluation.
Full constant evaluation means evaluating expressions all the way down to a normal form, which (glossing over some details) means a literal value of the expression's type. It requires all the values it sees to have this normal form, and it is blocked by any parts of the program that it doesn't understand. In Swift terms, the latter includes (at a minimum) calls to anything that isn't either `@inlinable` or non-resilient; if the constant evaluator sees such a call, it must fail.
This imposes some inherent limitations on what full constant evaluation can achieve. Expressions of resilient type, for example, cannot possibly be constant-evaluated (unless they throw), because they must ultimately produce a value by calling a non-delegating `init`, and the non-delegating `init`s of resilient types cannot be `@inlinable`. Expressions of optional resilient type can be constant-evaluated, but only if they produce `nil` (or throw). To make this concrete, we cannot fully constant-evaluate an expression of `URL` type unless it does not actually produce a `URL`.
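A minimal sketch of the situation, assuming a hypothetical `Thing` type vended by a library built with library evolution (and therefore resilient from the client's perspective):

```swift
import Foundation

// In a library built with library evolution, `Thing` is resilient to clients:
public struct Thing {
    private var storage: Int
    // A non-delegating init of a resilient type cannot be @inlinable,
    // so a constant evaluator in a client module never sees its body.
    public init(_ value: Int) { self.storage = value }
}

// In a client module:
let t = Thing(42)                        // cannot be fully constant-evaluated
let u = URL(string: "https://swift.org") // Optional<URL>: could only be folded
                                         // if the result were known to be nil
```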
Sometimes this strictness is desirable. If you need a hard guarantee that a particular expression can be emitted as a compile-time constant, you really do need full constant evaluation. Otherwise, you need some way to avoid being blocked by code you can't understand statically. There are two basic ideas for doing that:
- Work with parts of the computation abstractly, treating them as completely opaque values.
- Separate out the subset of the computation that can be reliably constant-evaluated, while the remainder stays abstract.
Macros can be a tool for achieving both of these, within limits. A macro that decides not to analyze and break apart a sub-expression is treating it as an opaque computation, and a macro could certainly restrict itself to doing things that are consistent with constant evaluation. For example, consider a macro that recognizes uses of `+` with string literals/interpolations and concatenates them. This is, effectively, treating the interpolation operands as opaque and doing an abstract constant evaluation of the concatenation operator. The main limitations are that macros must work with source programs, and so they cannot acquire information that isn't obvious in the source (e.g. understanding that a variable referenced from the macro operand is initialized to 1 and not re-assigned prior to the point where the macro is used) or produce results that cannot be expressed in source. Procedural macros also require writing code in terms of expressions instead of values, which can be a significant conceptual leap from other programming tasks.
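Returning to the string-concatenation example, here is a use-site sketch; the macro name `#foldConcat` and the surrounding identifiers are invented for illustration:

```swift
// Hypothetical expression macro that folds concatenations of string literals.
let greeting = #foldConcat("Hello, " + "constant " + "evaluation!")
// ...could expand directly to:
//     let greeting = "Hello, constant evaluation!"

// An operand the macro cannot analyze is simply treated as opaque:
let message = #foldConcat("Count: " + describe(count))
// ...expands to the original expression unchanged:
//     let message = "Count: " + describe(count)
```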
When a "constant" evaluator can work with opaque computations, that's usually called abstract interpretation. Abstract interpretation is able to work with opaque values, treat opaque calls as producing such values (and potentially leaving them in arbitrary memory), and so on. Unfortunately, it is inherently a best-effort analysis, because it is often very difficult for the interpreter to make basic decisions like whether to take a branch or not. (For example: suppose the program reads a stored property of an opaque value and compares it against a value previously read from that same property; when are these known to be the same?) Because of this, it is rarely (if ever) used in core language semantics; instead, it's mostly used in tools like static analysis engines, where gradual improvement of the tool over time is seen as a good thing.
Constant evaluation of subsets of computation is a more promising idea for cases where full constant evaluation is not possible but some kind of constant evaluation is still desired. A lot of these use cases boil down to doing some sort of precondition check statically, either purely for diagnostic purposes or as an optimization to avoid doing it at runtime. If the preconditions of a function can be identified statically, then in principle they can be constant-evaluated when the arguments are compile-time constants, and then the rest of the function can be executed normally. One advantage of this sort of design is that it can naturally degrade to a dynamic check in cases where the arguments aren't statically known; this can happen even with `init(integerLiteral:)` in several different situations.
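A sketch of the degradation described above, using an invented `EvenNumber` type whose precondition goes beyond what the type system can state:

```swift
struct EvenNumber {
    let value: Int
    init(_ value: Int) {
        // A precondition that goes beyond the type system: in principle it
        // could be evaluated at compile time whenever `value` is a constant.
        precondition(value.isMultiple(of: 2), "EvenNumber requires an even value")
        self.value = value
    }
}

let ok = EvenNumber(4)      // constant argument: the check could be discharged
                            // (and a failure diagnosed) at compile time

func make(_ n: Int) -> EvenNumber {
    EvenNumber(n)           // argument not statically known: the same check
}                           // naturally degrades to a runtime precondition
```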
One final, somewhat unrelated interaction between constant evaluation and macros that's worth calling out is that constant evaluation could conceivably be used from macros. If macros are integrated into the compiler, then in principle a macro could ask the compiler to try to constant-evaluate a particular expression, then do different things based on the result. For example, a macro could ask whether one of its argument expressions was the constant value `false`. This would require a lot of prerequisite work to enable, though.
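Purely as an illustration of the kind of interaction described here, a hypothetical `#traceIf` macro might expand differently depending on whether the compiler can fold its first argument; the declaration below uses real macro syntax, but the constant-evaluation query it would rely on has no supported API today:

```swift
// Hypothetical declaration; the interesting part (asking the compiler to
// constant-fold the condition during expansion) is not currently possible.
@freestanding(expression)
macro traceIf(_ condition: Bool, _ message: String) = #externalMacro(
    module: "ExampleMacrosImpl", type: "TraceIfMacro")

#traceIf(false, "never emitted")       // if the macro could learn this argument
                                       // is the constant `false`, it could
                                       // expand to nothing at all
#traceIf(isTracingEnabled, "emitted")  // not a known constant: the macro must
                                       // expand to a runtime check
```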