Compile-Time Constant Expressions for Swift

I have some implementation progress updates and other new thoughts!

Implementation progress

Our prototype implementation is progressing nicely. It can already execute a lot of the code in the standard library. If you're interested, you can follow along in the tensorflow branch of the apple/swift GitHub repository. Here are some files to look at --

@compilerEvaluable on internal functions

We wanted to remove the requirement that internal functions need @compilerEvaluable annotations, but then we noticed a problem with that. Consider:

/// Module A
@compilerEvaluable
public func foo(x: Int) -> Int {
  return bar(x: x + 1)
}

internal func bar(x: Int) -> Int {
  return x + 1
}

/// Module B
import A
#assert(foo(x: 0) == 2)

For the interpreter to evaluate the call in module B, it needs the SIL for foo and bar from module A. But the compiler does not make bar's SIL available in module A unless we tell it to.

The best way we can think of to solve this problem is to require @compilerEvaluable on internal functions, and to make @compilerEvaluable imply @inlinable for internal functions.

Therefore, we have switched back to thinking that we should require @compilerEvaluable on all functions that can execute at compile time.
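Concretely, under that rule the example above would annotate `bar` as well (a hypothetical sketch; `@compilerEvaluable` is a proposed attribute and does not compile today):

```swift
/// Module A
@compilerEvaluable
public func foo(x: Int) -> Int {
  return bar(x: x + 1)
}

// The annotation makes bar's SIL available outside the module
// (for internal functions it implies @inlinable), so the
// interpreter in module B can evaluate foo's call to bar.
@compilerEvaluable
internal func bar(x: Int) -> Int {
  return x + 1
}
```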

AST verifier for @compilerEvaluable?

Initially, we were planning to implement an AST verifier that checks whether @compilerEvaluable functions only do allowed things. This verifier would be completely separate from the SIL interpreter that actually evaluates the functions.

@Joe_Groff pointed out that the AST verifier and SIL interpreter would duplicate a lot of behavior, and that it could be onerous to keep them in sync.

Therefore, we're going to explore doing the verification on the SIL, and try to share as much between the verifier and the interpreter as possible.

6 Likes

This sounds like the same problem we solve with @inlinable by allowing @inlinable internal. Why can't we have @compilerEvaluable internal for functions we want to call from @compilerEvaluable public functions, and still infer the attribute for purely internal calls?

2 Likes

What if somebody wants to use an internal @compilerEvaluable function during compilation internally but does not want to make that function @inlinable? Is there some motivation for not supporting that use case? Is there any motivation for coupling these behaviors besides a desire to minimize the annotation burden?

Interesting, I hadn't thought of that use case. I think it's possible to make rules that allow that to happen. For example:

  1. Compile-time calls to external functions only work if the callee is @compilerEvaluable.
  2. A @compilerEvaluable function can only call other @compilerEvaluable functions, even if the caller and callee are internal.
  3. @compilerEvaluable always implies @inlinable.
  4. However, compile-time calls to same-module functions work on non-annotated functions. (e.g. a module can do #assert(foo()) when foo is defined in the same module, even if foo is not annotated).

(1), (2), and (3) guarantee that the transitive call graph of any compile-time call to an external function has SIL available. (4) supports the use-case that you mention.
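A hypothetical illustration of rule (4), with an invented function name (`#assert` and `@compilerEvaluable` are proposed features):

```swift
/// Module A
// Unannotated internal function.
internal func two() -> Int { return 1 + 1 }

// Same-module compile-time call: allowed under rule (4) even
// though `two` has no annotation, because its SIL is available
// within the module.
#assert(two() == 2)

// A cross-module compile-time call to `two` from another module
// would be rejected under rules (1)-(3), since `two` is not
// marked @compilerEvaluable.
```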

Do those rules sound reasonable, easy-to-understand, etc?

I think my only motivation was that I wasn't thinking about the purely-internal use case :) Thanks for pointing it out!

Are the rules that I wrote above along the lines of what you are thinking?

4 Likes

To clarify, I think it is perfectly implementable to allow arbitrary functions to be treated as "compiler evaluable" by the module they are defined in, without any annotation. We just need to figure out whether a public "compiler evaluable" function (which definitely should have an attribute, since it is a higher promise than just @inlinable) should be able to call non-public symbols that don't also have attributes. I think we need to have an attribute, but it could be either "compiler evaluable", or we could introduce a new thing - just like @inlinable has two attributes: @inlinable vs @usableFromInline.

-Chris

2 Likes

Can't we just export the SIL of the internal function, but only make it available for compile time evaluation? When it's not used at runtime, it does not cause any resilience issues.

I can imagine that this is not preferred, because it might lead to a lot of SIL being exported without the user knowing about or wanting it, but perhaps that's not a problem?

1 Like

Regarding referencing non-@compilerEvaluable functions from @compilerEvaluable functions (without calling them), I have noticed situations where it will be difficult for a "pre-execution" verifier to check whether the non-@compilerEvaluable function gets called:

@compilerEvaluable
func foo(x: Int) -> Int {
  bar(x, nonCompilerEvaluableFn)

  let y = baz(x)
  return y()
}

Without knowing what bar and baz do, a verifier can't tell whether nonCompilerEvaluableFn might get called, and it can't tell whether y is @compilerEvaluable. Some possible solutions:

  1. Make the verifier pessimistically assume that bar calls its argument, and that baz returns a non-@compilerEvaluable function.
  2. Make the verifier optimistically assume that bar does not call its argument, and that baz returns a @compilerEvaluable function. Let the interpreter emit errors if an actual call to foo ends up doing something forbidden.
  3. Forbid references to non-@compilerEvaluable functions.
  4. Make the verifier look into bar and baz to see if it can prove anything about their behavior.
  5. Add a "compiler evaluable function type", so that the type system can prove that non-@compilerEvaluable functions never get called.

Certainly, 5 is too much for an initial implementation. 1-3 all seem reasonably acceptable to me. 4 has the problem that changes to bar's and baz's implementations can break callers.
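To make the tradeoff between (1) and (2) concrete, here is a hypothetical example (all names invented) that a pessimistic verifier would reject but an optimistic one would accept:

```swift
@compilerEvaluable
func apply(_ f: () -> Int) -> Int {
  return f()
}

// Not @compilerEvaluable.
func notEvaluable() -> Int {
  return Int.random(in: 0...9)
}

@compilerEvaluable
func caller() -> Int {
  // Option (1): rejected up front, because the verifier must assume
  // `apply` may call its function argument.
  // Option (2): accepted by the verifier; the interpreter reports an
  // error only if evaluation actually reaches `notEvaluable`.
  return apply(notEvaluable)
}
```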

Some inline answers to your questions:

Yes. Though, if we forbid references to non-@compilerEvaluable functions, then the closure that largerFn returns will end up being a @compilerEvaluable function, and so it'll get evaluated at compile time.

If this is allowed, then yes that's what should happen. To keep things simple, we might initially forbid closures that capture mutable variables and then escape @compilerEvaluable functions.
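Such a restriction would rule out sketches like this one (hypothetical; the attribute is proposed):

```swift
@compilerEvaluable
func makeCounter() -> () -> Int {
  var count = 0
  // The closure captures the mutable `count` and escapes the
  // @compilerEvaluable function; an initial implementation might
  // simply reject this.
  return {
    count += 1
    return count
  }
}
```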

I'm not sure yet, for the reasons at the beginning of my post.

(Did you mean to mark makeHashInMysteryValueFn @compilerEvaluable?) If we allow references to non-@compilerEvaluable functions, then yes, this should be allowed.

1 Like

This whole discussion is just proving the point that @compilerEvaluable and its companion @isAvailableForCompilerEvaluable (which also seems necessary) will spread throughout the code. Which is far from desirable for a language that is meant to read well - you won't be able to see the code for all the annotations.

First two observations:

  1. The compiler can reason about whether it can evaluate a function at compile time or not; it doesn't need the annotations for the task of compile-time evaluation alone.
  2. The only code that gets dynamically linked is code that comes with the OS. The rest is linked statically. Therefore the only code that might need to make a 'promise' (via an annotation) about its future implementation is OS supplied code.

This suggests two alternatives:

  1. Only OS supplied code can be annotated.
  2. The code is relinked after an OS update, in which case it doesn't matter if it was compiler evaluable and now isn't or vice versa.

I would prefer the 2nd option above, but the 1st at least means only Apple (and someday other OS vendors) have to worry about the annotation.

This is very much not the case. Libraries you install via Homebrew are dynamically loaded for the most part. SwiftPM supports building dynamic libraries. Linux/Windows rely even more on dynamic linking than Apple platforms.

2 Likes

This is only true today because of Swift’s current limitations. We want more libraries to be dynamically linkable and mix-and-match. If Dropbox.framework was built against SwiftNIO.framework 2.1.6, and Crashlytics.framework was built against SwiftNIO.framework 2.2.1, we want both of them to be loadable into one process with one SwiftNIO that’s new enough for both. It’s a moot issue today because Swift requires that everything be built by the same compiler, but it won’t always be that way.

6 Likes

I think a proper module system is the way to go rather than ad-hoc annotations every time a new feature is introduced.

What is a proper module system and how would it solve resilience issues like this, which are essentially promises library authors make about the future?

I would definitely be happier if @constantEvaluable was only needed across resilient module boundaries, the same way that enum-nonexhaustiveness is only considered across those boundaries. That would continue to mean "only to the stdlib, Foundation, and OS libraries" until we formalize a real language model for third-party library stability.

Now, in the short term, the problem with that is that we don't serialize function bodies across any library boundary unless something is marked @inlinable. If @constantEvaluable implies @inlinable for ABI-exposed functions (which is debatable), then we're only increasing the annotation burden for library authors who didn't care about the inlinability of code from their library but do care about guaranteed constant-evaluation for some of it. Still, it would be really nice if this stopped being a concern that dogs every part of the language design that affects cross-module interactions.

9 Likes

Hi! I’m really excited to see Swift moving in this direction! I hope this is just the beginning and we end up getting more powerful metaprogramming capabilities. Some questions and comments ahead (be aware I’m not an expert on this topic, just a big fan of it ^^):

  • Usage of metaprogramming: One concern I have is the thinking that this is not gonna be widely used. Even if that may seem true now, I think that if the system is good enough it will open new possibilities for Swift developers. Examples of this are how Sourcery opened a new world of code generation in Swift, or, if you follow J. Blow's streams, how even his team is discovering new uses of metaprogramming in their projects thanks to Jai's capabilities. (thanks @Tino for the link with Jai information)

  • Does the current idea of the directive imply that the function is ONLY used at compile time? It may be really useful to allow the same code to run at compile time and/or run time.

  • The attribute: is there any reason to not flip the behaviour so the checks of compile time execution happen at the call site? Like Jai does with the #run directive. I agree that for cross module boundaries a directive is needed but I wonder if we can reduce the burden of annotating code in your same module.

  • Reading the last alternative (Fully compile the expressions and evaluate them on the machine) I understand why that exact alternative is not desirable but I don’t fully understand the current direction.
    Is the goal to have a full Swift interpreter? If that's not the case, I'm worried that using this feature is gonna be weird, with a strange derivative of Swift.
    How hard would it be to keep it in sync with future language evolution proposals? Is there anything stopping Swift from copying the mentioned feature from Jai and running any Swift code at compile time? (obviously in practice it may not be easy, just wondering if that’s the future plan)
    Wouldn’t a full Swift interpreter also help in other dynamic scenarios like JIT or just as a plain interpreter?

  • The proposal mentions a future direction with Values in generics and improvements in Static Strings. But what about other metaprogramming features like Rust-like macros, or directly being able to create/manipulate/generate code via AST nodes or textual strings? Are these features in any way related to this piece of work or completely unrelated, and are they even feasible in Swift?

Sorry for the long post, I've been thinking about this for a long time now (wow my first post is old! Run Swift code at compile time) and I'm really excited to see some movement around this ^^

@marcrasi @Chris_Lattner3 Just saw this. I have a question and apologies if this was discussed above (long thread). Today the constant propagation infrastructure is also used in the inliner so we constant propagate as we go. Has this been thought about in the design? The reason I am asking is I did a quick search in the thread for inliner and didn't find anything.

Would this machinery be able to fix our current issues with Integer/Double literals? They're currently Int64 or Float80 at best, which means that they're useless for initializing arbitrary-precision data types. With compiler-evaluable functions, would we be able to engineer a standard arbitrary-precision data type that can initialize ints and floats of any size?

This is unrelated. There's already machinery in the compiler to allow literals of up to 2048 bits. You can see that here. Shame that the DoubleWidth stuff had to be moved to prototypes.

Redesigning the literal protocols is a separable issue. It would certainly be more practical to define them in terms of a higher-level arbitrary-precision interface (say, an initializer that took an arbitrary-length Array or UnsafeBufferPointer of words) if it were possible to guarantee that the arbitrary-precision container got interpreted away entirely at compile time for types like Int and Double.
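A redesign along those lines might look roughly like this (the protocol and its requirement are invented for illustration; nothing with these names exists in the standard library):

```swift
// Hypothetical literal protocol: the compiler hands the literal's
// value to the type as an arbitrary-length buffer of words, so no
// precision is lost for big-integer types.
protocol ExpressibleByArbitraryPrecisionIntegerLiteral {
  // Words are least-significant first; the sign is carried separately.
  init(integerLiteralWords: [UInt], isNegative: Bool)
}

struct BigInt: ExpressibleByArbitraryPrecisionIntegerLiteral {
  var words: [UInt]
  var isNegative: Bool

  init(integerLiteralWords: [UInt], isNegative: Bool) {
    self.words = integerLiteralWords
    self.isNegative = isNegative
  }
}
```

For fixed-width types like Int and Double, guaranteed compile-time evaluation would let the word array be interpreted away entirely, so the higher-level interface would cost nothing at run time.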

2 Likes

Oh cool. What do you mean by "moved to prototypes"?

It used to be available in snapshots. But apparently that was causing issues: [stdlib] Move DoubleWidth to test/Prototypes by moiseev · Pull Request #15470 · apple/swift · GitHub