I don’t have any objections to supporting it if you all can make it work. It seems like you all are trying to keep this initial proposal simple, though, and MemoryLayout support could throw a curveball in your initial implementation effort. If you’ve already got it working, great!
It seems like a strange bit of the core language model to leave out to me. In general, func foo<T: P>(x: T) and func foo(x: P) are isomorphic, and the latter is easier to write and understand for a lot of people. While existentials are limited now, we’ll fix that eventually, and I think we should avoid forcing people into writing the long generic form of things when they otherwise wouldn’t need to because of limitations in other language features.
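To make the isomorphism concrete, here's a small sketch (my own example, not from the proposal; `describe`, `fooGeneric`, and `fooExistential` are made-up names):

```swift
protocol P {
    func describe() -> String
}

// Generic form: T is statically bound at each call site.
func fooGeneric<T: P>(x: T) -> String {
    return x.describe()
}

// Existential form: the conformance is looked up dynamically,
// but both functions accept exactly the same set of arguments.
func fooExistential(x: P) -> String {
    return x.describe()
}
```

Both declarations can be called with any concrete type conforming to `P`, which is why forcing people into the generic spelling purely because of interpreter limitations seems unfortunate.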
I think the discussion about non-type generic parameters in the proposal is a bit of a red herring. Constant evaluation is likely to be neither necessary nor sufficient to support this. Because of Swift’s generics model, we can support dynamic non-type generic parameters, so constant evaluation isn’t strictly necessary. Making type equality judgments might be able to use constant evaluation in some situations, but it’s likely we’ll want to support more flexible bidirectional deduction of at least some things, like integers. I think it’s largely an independent feature.
Yes please. Even ignoring the TensorFlow use-case, Swift’s compile-time metaprogramming features (to the extent that it has them) leave a lot to be desired. Obviously this pitch doesn’t address code generation or reflection but I see this as an important step in the right direction.
One concern I have though is, should #if directives be made to operate on constexprs instead of magic compiler directives like os()? Taking the model presented here into account, it seems very intuitive to me to think of conditional compilation as nothing more than static if statements (...with #endifs instead of curly braces, I guess).
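Something like the following sketch is what I have in mind (entirely hypothetical syntax; `isSixtyFourBit` is a made-up predicate, and nothing in the pitch says `#if` would accept arbitrary calls like this):

```swift
// Hypothetical: a @compilerEvaluable predicate used directly by #if,
// replacing a magic compiler directive.
@compilerEvaluable
func isSixtyFourBit() -> Bool {
    return MemoryLayout<Int>.size == 8
}

#if isSixtyFourBit()
typealias Word = Int64
#else
typealias Word = Int32
#endif
```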
Bikeshedding: #staticAssert -> #assert. #directives already implies static behaviour, so I don’t think “static” is necessary. @compilerEvaluable -> @compiletime or @const. No justification other than that the former looks unwieldy to me.
Not sure that this, factorial and the like, is a real problem. If you were expecting the compiler to evaluate a factorial or something similar recursively for you, then you would have to document the limit you chose. I suspect that with a default of 512, like C++, anything that required a manually set recursion limit would be something very special, and that this should be well documented, e.g.:
/// - precondition: n < 20 otherwise Int will overflow.
/// - recursionLimit: Set to 20 because Int will overflow in any case.
@compilerEvaluable(recursionLimit: 20)
func factorial(_ n: Int) -> Int {
    return n == 0 ? 1 : n * factorial(n - 1)
}
Instead of a recursion limit or an operations limit you could have a time limit, since this is what we really want to protect against. E.g.:
/// - timeLimit: Even though `BigNum` could theoretically calculate any factorial, a time limit of 0.1 seconds is imposed to limit compilation time.
@compilerEvaluable(timeLimit: 0.1)
func factorial(_ n: BigNum) -> BigNum {
    return n == 0 ? 1 : n * factorial(n - 1)
}
People are doing things wrong all the time ;-) and, afaics, there is no obvious choice for the limit, so sooner or later someone will run into a situation where the input to a compile-time expression grows just a tiny bit too large...
What will people do if the compiler tells them their recursion level is 1048582, and 1048576 is the maximum that is allowed?
Unless the code is really simple, I bet they won't change it fundamentally, but rather look for cheap workarounds.
Making things configurable is often a dubious compromise, but afaics, any fixed value is even more dubious.
There is a very good reason this is not the design:
We want our code to compile consistently, especially with the same version of the compiler. Any duration-based limit would be subject to intermittent compiler errors and make code that even approaches the limit rather brittle.
I was thinking specifically about using compiler evaluable expressions as arguments to non-type generic parameters and using those values in subsequent compiler evaluable expressions in the generic context. It isn't clear to me after reading the document whether that is something that would eventually be supportable or not, especially if the type of the non-type generic parameter was itself generic, such as: struct FixedArray<Size: UnsignedInteger, size: Size> {}.
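Roughly what I have in mind is something like this (hypothetical syntax throughout, since Swift has no non-type generic parameters today; `doubled` and `Small` are made-up names):

```swift
// Hypothetical: a compiler-evaluable expression as a non-type generic
// argument, with the result reused in a further compile-time computation.
struct FixedArray<Size: UnsignedInteger, size: Size> {}

@compilerEvaluable
func doubled(_ n: UInt) -> UInt { return n * 2 }

// Would the interpreter evaluate doubled(4) here, and could it do so
// even when Size itself is a generic parameter of the enclosing context?
typealias Small = FixedArray<UInt, doubled(4)>
```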
I was also thinking about limitations on compiler evaluable expressions in generic contexts more generally after reading the section Intentional limitation: Constant expressions of generic type. It seems likely to me that generic code is the context where I would most frequently want to take advantage of compile time evaluation. I hope there is a way to make the interpreter and the generic system work well together (in the fullness of time of course).
Got it, I'm happy for us to subset it out, thanks! I'm working on the implementation now, and no, this part isn't implemented yet.
I'm not opposed to supporting this in the future, but if we're going to allow dynamic constructs, then class instances and class methods should also be supported. The current design is intentionally keeping this to the static side of the language, which is where the strong use cases come from. I'd prefer to save the slippery slope arguments for future proposals, in an effort to reduce the controversy in this proposal.
I'm not sure about actual use cases of compile-time evaluation of differentiated functions, but if the input function is compiler-evaluable, its Jacobian products (both forward and reverse mode AD) should be compiler-evaluable as well.
About implementation details: Currently automatic differentiation is the last pass in the mandatory pipeline (when TensorFlow is not enabled). AD depends on the output of EmitDFDiagnostics, which happens long after DiagnosticConstantPropagation. Perhaps many things in constexpr folding can be implemented as a utility so that AD can call them to fold derivatives.
I am concerned about the number of annotations getting added to Swift; it seems to be part of all proposals!
Annotations everywhere go against the clean syntax of Swift; what’s the point of type inference if there are ten annotations? Surely explicit typing is an annotation too.
Annotations also have bad syntax, being leading rather than trailing and having a visually striking @.
Perhaps a better solution would be to have the compiler work out if it can evaluate the function or not and evaluate when it can. This works well in Java.
In addition, if the compiler just evaluated stuff, you wouldn’t need #if etc., since a plain if would be used instead.
It doesn't seem bad to me if they're for rare things (like compile time constant expressions) that most people will never need to think about (progressive disclosure, etc). Outside of the standard library, I don't see this being a commonly used attribute.
This proposal looks great, but I have a follow-up question after reading the "alternatives considered" about if the attribute is required or not.
I expect the compiler to always statically evaluate and simplify as much of my application logic as possible; does this interpreter do anything to improve constant-folding of regular @inlineable functions, or does it only operate on things which are explicitly marked @compilerEvaluable?
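To make the question concrete, a sketch (using the pitch's proposed `#assert` and `@compilerEvaluable` syntax; whether the second assertion folds is exactly what I'm asking, and `square`/`cube` are made-up names):

```swift
@compilerEvaluable
func square(_ n: Int) -> Int { return n * n }

@inlineable
func cube(_ n: Int) -> Int { return n * n * n }

#assert(square(4) == 16)  // guaranteed: explicitly marked @compilerEvaluable
#assert(cube(4) == 64)    // does the interpreter fold this too, or reject it?
```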
I think the attribute is useful as a developer-assist, but (and I hate to sound repetitive here), I think it would look nicer as a parameter: @inlineable(compilerEvaluable). That way your code will look a little more cohesive if you have types which expose a mixture of regular-inlineable and inlineable-and-statically-evaluatable functions.
Existentials and classes are dynamic in different ways, though. Classes have subclassing, but they also have shared mutable state, whereas existentials are still fundamentally value types. They're exactly as dynamic as generics, which you already have a plan to support.
(To be clear, I'm not saying they have to be supported in the initial proposal, only that I think they're less controversial and more straightforward an extension than, say, supporting classes would be.)