[Pitch] Compile-Time Constant Values

@inlinable doesn't make any such guarantees, unless the library offers ABI stability (i.e. system libraries). For your regular Swift package, cross-module inlining has basically no downside, except perhaps increased compile time.

Anyway, I feel the whole point about compiler-evaluable functions is diverging from the point of this pitch: that you sometimes need to ensure that a particular value or function parameter is known at compile time.

One example is memory ordering parameters used by swift-atomics. This currently requires special @_semantics attributes, and is another example of important libraries being released with underscored language features.

Perhaps @lorentey can correct me if I'm wrong, but I believe the feature pitched here would be sufficient for them to drop that particular attribute. That alone would be a big win IMO.


If a function is going to change in a breaking way and the library is a dynamic library, then inlining the function could break things when the app is used with a newer version of the library. (Dynamic linking allows the use of newer versions as long as the API stays the same, I'm pretty sure; that's one of the advantages of dynamic linking.)

And yes, I agree that the conversation is getting a bit far from the original pitch. Could you please elaborate on the Swift Atomics situation? Is it that it requires the ordering parameter to be known at compile time? If so, why?

I agree with this. I have used templates with value components in Metal Shading Language (very similar to C++) to enforce compile-time constants. The const keyword would provide an alternative version of such in Swift without corrupting the intention of generics.

Furthermore, this could be a solution to the use of values instead of types in C++ templates when doing C++ interop. Potentially, the value components of a C++ function could be const parameters, while the typename components map to a Swift generic argument.

You aren't allowed to explicitly write the angle brackets on a generic function call in Swift; you must incorporate the types in a way the compiler can infer. I have experienced a related constraint in Metal. Combined with const being used in the way described above, we could overcome an otherwise impossible barrier to C++ interop. It may not be idiomatic, but it's the only solution that would get the job done.

I am planning to explore a DirectX backend for either a Swift for TensorFlow resurrection or a spinoff project, and good C++ interop in Swift 5.6 would be very helpful. For more on the S4TF stuff, I have a post on it that was just freed from the spam filter: Swift for TensorFlow Resurrection: Differentiation running on iOS

Also, I have used Swift Atomics before while trying to parallelize MultiPendulum, and the @_semantics stuff indeed looks weird.


It isn't. There are two kinds of stability guarantee: source stability (so your code won't break if you compile with an updated library), and binary stability (so your code won't break even if it loads a newer/older library at runtime). The ability to distribute independent updates to a dynamic library requires that it offers binary stability, and that comes with so many costs that it's only really worth it for system libraries.

Within a single application, dynamic libraries might also make sense if you have several executables using common dependencies (e.g. XPC services). But those dependencies won't independently update.

Memory ordering is basically a compiler feature. It puts barriers in the code, across which the compiler may not re-order certain instructions.

Compilers otherwise assume code is single-threaded. They'd see you acquire a lock and then write to memory, and think that since no other code can possibly be running in parallel, nobody will notice if it reorders those operations. But that's bad - we definitely need the lock before we write to memory, and that write needs to happen before we release the lock.

That's why it needs to be known at compile time. It's not just an optimisation; memory ordering is part of the semantics of atomic operations.


Yup. All interesting discussion topics but could you break them out into a new discussion thread in the discussion section?

Could you break the possibility of this enabling templates in C++ interop into a new thread, linking my post above? @Ben_Cohen I'm not sure I'm the best person to do it, so could someone with more experience do it for me?

I think this realization will have a major impact on the course C++ interop takes, and I don't want it to be forgotten in this long comment history.


IMO, this actually is (or almost is) the right model. Already, with inline functions, the optimizer is likely to compute the result of the operation at compile-time. The difference with what people usually want in a compile-time evaluation model is:

  • making the computation result available to the front-end instead of the back-end (for other places that want constant values, such as enum raw values or places where it's relevant to the type system, which I think Swift doesn't have yet)
  • providing a guarantee that the function will be evaluated at compile-time

I think that this guarantee is fundamental. As Swift seeps deeper into software stacks, people start caring a lot more about its code generation promises. Aside from just whether Swift has reached the mythical "sufficiently smart compiler" status that allows it to always make the right decision, there's the potential for user error. For instance, if your lookup table is generated with a function that accepts an argument, if that argument is accidentally non-constant, Swift will fall back to runtime evaluation with you being none the wiser. C++ had that problem and came up with consteval to fix it. We can get it right on the first try.

It's a given at this point that the core requirement for compile-time evaluation is that the body of the function (and of all functions in its call graph) is available to the compiler, which presumably means that the minimum requirement for any constant-evaluatable function is @inlinable. I imagine that we would want a separate marker, like @evaluatable, that could imply @inlinable (similarly to how open implies public) and also make the function evaluatable at compile time. I think that there should be a constraint that the entire body of the function must be evaluatable at compile time in this case, instead of just letting the compiler fail if it runs into a branch it can't handle.

It's unclear to me how this should interact with protocol requirements, but that seems like a space that should be investigated.

It's also unclear to me how @evaluatable would work for functions that accept closures. (Must the closure be marked @evaluatable? In that case, does it still work if you pass a closure that isn't? @reevaluatable is almost certainly not the right solution.)

Aside from that, I'd propose const as an expression "marker" (like try and await, not sure what the terminology is) instead of a variable declaration modifier. This allows you to write foo(bar: const myFunction()), guaranteeing the constant evaluation of myFunction without having to assign it to a "const let" variable. Without const, the compiler will not evaluate the expression in the front-end, and the back-end decides if it cares to; this should satisfy the crowd that wants the compiler to decide. However, const could be implied in a number of places that must be constant, like enum raw values or the body of @evaluatable functions.


I'd say: treat @evaluatable as a permission from the library author to evaluate an inlinable function, and treat everything in the same module as evaluatable. Then just have the interpreter emit an error if it reaches something it can't evaluate.

So if map from the standard library is @evaluatable and the closure comes from the current module, there will be no issue:

let x = const [1,2].map { $0 * 2 }

Here, the interpreter has the code for map and for the closure and will thus never reach a code path it can't evaluate. (Assuming the relevant array functions are evaluatable too.)

Whereas if you do:

let x = const [1,2].map { Int.random(in: 0..<$0) }

Now you'll have a compile time error while evaluating random in the closure (random is not evaluatable). A compile-time stack trace of where evaluation stopped should be provided, the same stack trace you'd see if there was a run-time error in the closure.

We could introduce an effect system to determine in advance if all code paths are evaluatable and have the compiler guarantee that (and then you'd need something like @reevaluatable), but that sounds like a lot of trouble.

I’m confused why random wouldn’t be evaluatable? I’m not sure I understand why purity has anything to do with compile-time execution :thinking: same for inlinable, tbh.


Compile time evaluation is basically running some parts of the program before the program is fully built, and before it is actually running. So anything that depends on a mutating global state (like random) can't work because we don't know what the global state will be at run time.

I suppose there could be some utility in having a compile-time random giving a different value at every build, but if that's a thing I think it should be expressed differently from the normal random function. Perhaps #random (like other compile-time inserted values). That way const does not change its meaning.


I really like this approach (see the post I'm replying to). It feels very swift-like and it also allows developers to guarantee that a value is computed at compile time. I really like the semantics of marking expressions with const it feels natural and is relatively intuitive (without being overly verbose).

My one piece of feedback is that @evaluatable might not be the right word to use. Unlike @inlinable, it's not immediately clear what it would mean (because all functions are hopefully evaluatable). Maybe it'd be worth revising the notation to make it clear that it has to do with compile-time evaluation?

What about @compilerEvaluatable? I know it's a bit verbose but probably fine for a function annotation which you can put one line above the function. It also captures the meaning very well, IMHO.


Yeah, another option would be @pure which I think is what @compilerEvaluatable means (because a compile-time function should be pure). But more developers would probably understand compiler evaluatable than pure function. I don't really know how many developers know what pure functions are.

I don't think pure is the way to go. There is nothing in that term that suggests the compiler will evaluate it at compile time, if necessary. There's certainly nothing suggesting that the compiler must (be able to) do so.

In general, I am not sure that an attribute for functions is necessary, unless it is to make an API guarantee for clients.

If the function is invoked from a const declaration, the compiler has to verify its purity whether or not it is tagged as such. And if it is not used in a const declaration, there's no benefit to marking a function as pure.

Please note, I am using term pure to mean "evaluable at compile time". The discussion of the possible benefits to annotating function purity in general and at runtime are out of scope.

The catch is that @compilerEvaluatable won't necessarily evaluate it at compile time either. It's just saying that it's possible. Unlike @inlinable, it won't be able to be applied to every function. And the true underlying requirement is that the function must be pure.

I think that it's important to mark functions explicitly as @compilerEvaluatable so that some random developer making a contribution to a library, for example, won't mess up someone's use case by suddenly breaking the purity of the function. I think it's important for the intent to be clear.

To be clear, I'm not saying that pure is the better option. I was just giving a bit of further reasoning behind it. I think compiler evaluatable is probably still better.

To use pure as the annotation, you are fixing a definition of the term that there is much disagreement on. You can easily have compilerEvaluable and pure be non-identical though overlapping sets.


The current time, as denoted by the system clock, is an external data source. By some definitions, a function that returns the current time cannot be pure.

However, it is entirely reasonable to want to include the current time in a variable set at build-time, to be used for debugging.

I do not think it would be a good idea to limit this feature to functions we think are ideologically pure (aka "a good idea").


I was of the impression that external sources such as the system clock should not be allowed in compiler evaluatable functions. And that they should really be ideologically pure and reproducible. However I do agree that locking the definition to pure is not a good idea for future expansion, because the definition of compilerEvaluable can change but pure probably can't.


I think it should be #time then. This way it doesn't matter whether it's used inside or outside of a const expression, and the meaning of Date(), time(), etc. can stay the same (run-time time).

But if you really want your function to use #time inside const expressions and time() outside of it, then D has a nifty solution to offer for this (transliterated to Swift here):

if #ctfe {
   // branch to use for compile-time evaluation
} else {
   // branch to use at run time
}
The magic #ctfe boolean will be true when evaluating things at compile time, and false otherwise. This is generally useful if you need a different code path for compile time evaluation (perhaps you need to avoid a mutex, or avoid a non-evaluatable function in the Accelerate framework, etc.).

Getting the current time was the first thing I thought of that might not be considered pure by most purists, but which could conceivably be used at build time. Generalize to any similar example, and it is obviously infeasible to provide compile-time directives for all such imaginable (never mind unimagined) use-cases.

Another use-case I've thought of is creating a data table by reading data from a file. While the idea of disk I/O at compile time might itself be debated, I don't think there's anyone who would consider such a function to be pure.
