Compile-Time Constant Expressions for Swift

Does this have to be a human generated annotation?

That is, would it be possible to either:

  1. Have the compiler save the SIL for any functions that it determines to be evaluable at compile time, and then throw it away after the evaluation phase is over.

  2. Just automatically export the SIL of any internal functions that are called by @compilerEvaluable functions.

It seems to me there are two separate issues: a public API contract, and getting the data to the right place in the compiler. The second issue seems like a job for the compiler, as opposed to making humans mark things up (whereas the first is still a job for humans).


While it would be perfectly acceptable to make that transparent for applications, it can't be for a library that needs a stable ABI. The user has to know exactly what is part of the exported API and what is not.

If the compiler is allowed to silently export some internal functions just because they are called from an exported function, it would be very difficult for the developer to know what is part of the API and what is not.

If we make those silently exported functions available only for compile time evaluation, this may not be a problem.

To clarify, we would still require the annotation on public members. The compiler would just generate an internal annotation for internal/private functions that meet the requirements. If an annotated public function tries to call an internal/private function that isn't compile-time evaluable, it would be an error at the call site within that public function. Thus it will only compile when the public contract is met.

These would not be exported as part of the API, only for compile-time evaluation. You would not be able to call them from outside their module, but the SIL might be used at compile time.
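
To sketch how that rule would play out (the @compilerEvaluable spelling, the inference behavior, and the diagnostic text are all hypothetical, per the proposal under discussion):

```swift
// Hypothetical syntax from the proposal; not valid in today's Swift.

// Public API: the annotation is required and is part of the public contract.
@compilerEvaluable
public func checksum(_ x: Int) -> Int {
    return helper(x) &+ 1   // OK: helper is inferred to be compile-time evaluable
}

// Internal: no annotation needed. The compiler infers evaluability and
// keeps the SIL around for compile-time use only; helper is still not
// callable from outside the module.
func helper(_ x: Int) -> Int {
    return x &* 31
}

// If helper instead did something non-evaluable (e.g. I/O), the call in
// checksum would be diagnosed at the call site, roughly:
//   error: '@compilerEvaluable' function 'checksum' calls
//          non-compile-time-evaluable function 'helper'
```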

Not part of the ABI, since these are compile-time functions; but if client code requires compile-time functions to be present in a library, removing them will break that code, and IMHO they should be considered part of the API.

Right, but the client code can only call the public functions, and those are the ones we would require to be annotated.


Sorry for the delayed response:

This proposal doesn't include any metaprogramming features; it is just mentioned as a related area.

No, functions can be called normally as well.

I'm not sure what you mean. The idea of the attribute is 1) to make it clear that a public API author is committing to keeping a function as a constexpr, and 2) to allow modular checking.

The compiler implementation already checks for constexpr-ness in the absence of the attribute (e.g. non-public functions shouldn't need it), and specifically takes the call-site constraints into account.

The goal is to keep compile times fast. The proposal basically is a Swift interpreter; there is no goal here to avoid that. We just don't want to literally compile the code to (e.g.) x86 machine code, dlopen a library, and run it.

It's hard to say, but maybe. They'd have to be designed and scoped. I am personally in favor of adding a macro system at some point in the future.

-Chris


I haven't considered that, but I don't see a great application there. The inliner has concrete constants and can fold immediately. The constexpr stuff has to keep track of lots of intermediate results.

This is all to say that it is possible the infra could be reused, but I haven't thought about it :-) and it seems like a potentially different domain.

-Chris

I haven't thought about it either that much TBH.

I am just saying that the inliner relies on the same constant propagation infrastructure used by diagnostic (and performance) constant propagation. So if this proposal subsumes constant propagation, it is reasonable to ask what will happen to this inliner code. I guess it would rely on the old constant propagation infrastructure? Just trying to understand the overall impact of the proposal on the codebase.

This is strictly an additive change. Upon recommendation from @Joe_Groff and others, my plan to get this into master is to split it into three patches:

  1. Basic constant representation, SILConstants.h/.cpp:
    swift/SILConstants.h at tensorflow · apple/swift · GitHub
    https://github.com/apple/swift/blob/tensorflow/lib/SIL/SILConstants.cpp

  2. The interpreter itself:
    https://github.com/apple/swift/blob/tensorflow/lib/SILOptimizer/Mandatory/TFConstExpr.h
    https://github.com/apple/swift/blob/tensorflow/lib/SILOptimizer/Mandatory/TFConstExpr.cpp

  3. The reimplementation of the "ConstantPropagation" SIL function pass. This doesn't change all of the constant propagation infra in Swift; it just subsumes the existing pass that tries to diagnose overflows.

The first is very close (I just need time to make the PR). The second will need some refactoring to de-TF'ify it. The third hasn't been implemented yet.

After these three are done, we can come back to staticAssert. It sounds like the core team doesn't find staticAssert to be super motivating, but is inclined to take the underlying infrastructure anyway because it is a big cleanup and more principled infra for the diagnostics pass, and can provide guaranteed constant folding even at -O0, which is nice for performance stability in some cases.
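
For context, a staticAssert built on this infrastructure might look roughly like the following (hypothetical spelling; the exact name and form were not settled in this thread):

```swift
// Hypothetical: staticAssert would evaluate its condition with the SIL
// interpreter at compile time, and a false condition would be a compile error.
@compilerEvaluable
func factorial(_ n: Int) -> Int {
    return n <= 1 ? 1 : n &* factorial(n - 1)
}

staticAssert(factorial(5) == 120, "factorial is broken")
// Compilation fails here if the interpreter evaluates the condition to false.
```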

-Chris


Just throwing this idea out there: would it maybe be an alternative (I just found this through google, so no idea how appropriate it would be for this use case) to compile to LLVM bitcode and then use lli to run it?

That approach has a lot of overhead, both in generating the LLVM bitcode and because the LLVM interpreter is not particularly fast. A SIL interpreter has the potential to be less fine-grained, so have less overall interpreter overhead, and shortens the pipeline length before reaching the interpreter. (I say this having tried the exact thing with compile-time evaluation for the language I worked on before Swift and finding it made compilation unacceptably slow compared to a hand-rolled evaluator.)


Joe is right. Also, an important aspect of a SIL interpreter running during the mandatory optimization sequence is that it can "know" about important standard library entrypoints, e.g. for array allocation. This allows it to significantly shortcut certain operations.


Can someone tell me what the difference is between @pure functions and @compilerEvaluable, other than the proposed use case? What is an example of a function that could be @pure but cannot be @compilerEvaluable?


@compilerEvaluable functions must use only the subset of Swift features which are supported by the interpreter, and must run in a limited number of operations. @pure functions can use any Swift feature as long as they cause no visible side effects and can take as long to run as they'd like.
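
A sketch of that distinction (both attribute spellings are hypothetical, from the proposal; the feature limits of the interpreter are assumed for illustration):

```swift
// Could be both @pure and @compilerEvaluable: no side effects, uses only
// simple integer operations, and terminates in a bounded number of steps.
@compilerEvaluable
func gcd(_ a: Int, _ b: Int) -> Int {
    return b == 0 ? a : gcd(b, a % b)
}

// Could be @pure but not @compilerEvaluable: it has no visible side
// effects, but its running time has no known bound (termination of the
// Collatz iteration is conjectural), so the interpreter's operation
// limit could be exceeded for some inputs.
@pure
func collatzSteps(_ n: Int) -> Int {
    var n = n, steps = 0
    while n != 1 {
        n = n % 2 == 0 ? n / 2 : 3 * n + 1
        steps += 1
    }
    return steps
}
```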


In the ideal fullness of time, "pure" and "compiler evaluable" could asymptotically approach being the same thing, since in principle any computation that's pure and only relies on compile-time-known inputs ought to be a candidate for compile-time folding. There are some practical barriers in the meantime, first of all the expressivity of the compile-time interpreter itself, and more generally the ability to invoke pure functions that are implemented in binary frameworks. (Maybe one day we could load said binary frameworks in a sandbox and try to call into them from the compiler, but that'd be a ways away.)

More generally, there is some parallel language infrastructure both of these concepts, as well as others, could build on, since they're both "effects" attributes that propagate transitively through the call graph.


I don't think that's true. You don't need to be able to see the body of a @pure function, but you need to be able to for it to be compiler evaluable. They are definitely related, but I don't see them converging because of that difference.


Right, that's what I was trying to get at. If the compiler's interpreter were 100% capable of executing the language, @compilerEvaluable becomes at its limit a synonym for pure (no side effects) + @inlinable (clients are allowed to hardcode the behavior of the current implementation of this function).

No, they aren't. The former allows only concrete types that conform to the protocol; the latter also allows existentials, which is only P itself, I think, though theoretically it could include any subtype of P as well.
The former implies type substitution at compile time; with the latter it usually happens at runtime (boxing), though that can be optimized away in certain cases.
But in general, I like the idea.

Classes can have shared mutable state, and likewise for existentials.
Classes, like existentials, are mostly reference types rather than value types because their value size is unbounded.

Exactly, this is indeed a problem, which may be alleviated by modifying the linker with a re-inlining capability.

I like D, but string mixins are the worst part of D, ever.
You get no syntax highlighting and no partial semantic checking of mixins, and you can't operate on code in AST form, only as a flat string; so to achieve code transformations you need to tokenize and parse D code strings, then deparse and detokenize them manually.

If you really want D meta programming capabilities in Swift, please add a proper macro system. :wink:

The net effect is that either function accepts any value of a type that conforms to P.

These are implementation details. Swift generics do not require compile-time substitution, and Swift can choose to compile these function signatures identically, since existentials can be unboxed and passed as generics at function boundaries.
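
A minimal illustration of the two signatures in question (plain Swift; `P` and `S` are stand-in names):

```swift
protocol P { func describe() -> String }
struct S: P { func describe() -> String { "S" } }

// Generic: T is a type parameter constrained to conform to P.
func f<T: P>(_ x: T) -> String { return x.describe() }

// Existential: x is a value of protocol type (spelled `any P` in Swift 5.6+).
func g(_ x: P) -> String { return x.describe() }

// The net effect is the same: both accept any value of a conforming type.
print(f(S()))  // "S"
print(g(S()))  // "S"
```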

This is conflating implementation details and semantics again. Semantically, Swift presents existentials with the same semantics as the underlying concrete value. There is no way to introduce shared mutable state through existentials, since each existential value uniquely owns its contained value. Saying that existentials box is also not entirely accurate; in Swift, a value is stored inline in an existential value unless it exceeds a certain size.
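
The inline-storage point can be observed directly: a protocol existential is a fixed-size container (a small inline value buffer plus metadata and witness table pointers), and values too large for the buffer are stored out of line. The sizes in the comments are implementation details of current 64-bit platforms, not guarantees:

```swift
protocol P {}
struct Small: P { var x: Int }           // fits in the 3-word inline buffer
struct Large: P { var a, b, c, d: Int }  // too big: stored out of line

// The existential container itself is the same size either way;
// commonly 40 bytes on 64-bit: 24-byte buffer + type metadata pointer
// + one witness table pointer.
print(MemoryLayout<any P>.size)
```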
