Automatic Requirement Satisfaction in plain Swift

Sure, but the problems being addressed by this proposal are here today, and we have every reason to expect they'll continue to grow. In contrast, nobody's even floated an idea for getting computation that the system is allowed to perform at run time (like all ordinary computation in Swift) back into the type system, and as I've mentioned, forcing any such computation to happen at compile time clashes mightily with Swift's fundamentals. We need to make progress based on what's viable (and implementable!) today, not some abstract idea of what might be possible someday if someone figures out how to do it.

Protocol conformance is a relatively well-factored part of the system: whether a type conforms to a protocol, and what that tells the rest of the type system about what you can do with the type, is orthogonal to how the members of the conformance get computed. If we were to introduce a Turing-complete decision process for picking witnesses, or for generating them via compile-time evaluation, that wouldn't affect the decidability of the rest of the type system, which only needs to know that the conformance exists.
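To make that factoring concrete, here's a small sketch in today's Swift (the `Greeter`/`Loud`/`Quiet` names are mine, just for illustration): generic code type-checks against the mere existence of a conformance, while the choice of witness — a type's own member versus a protocol extension default — is resolved per-conformance, invisibly to callers.

```swift
protocol Greeter {
    func greet() -> String
}

extension Greeter {
    // Default witness. Whether a conforming type uses this or its own
    // method is decided per-conformance, invisibly to generic callers.
    func greet() -> String { "hello" }
}

struct Loud: Greeter {
    func greet() -> String { "HELLO" }  // supplies its own witness
}

struct Quiet: Greeter {}  // picks up the default witness

// Generic code type-checks against the protocol alone; it only needs to
// know that the conformance exists, not which witness was selected.
func describe<T: Greeter>(_ value: T) -> String { value.greet() }

print(describe(Loud()))   // HELLO
print(describe(Quiet()))  // hello
```

However witness selection were made Turing-complete, `describe` would type-check the same way.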

I agree with that—it's one reason I'm not afraid of type-level metaprogramming—but don't understand why you're saying it.

That's not a problem if the conformance doesn't cross a resilience boundary, which is what the scoped conformances proposal is about.

That sounds interesting, but I don't really understand how it would play out. Can you give an example?

While freely admitting that I don't know much about this area, one issue that I've heard WRT letting ordinary Swift code participate in type-checking is that compile-time evaluation happens as an optimisation after SIL is generated, and that's too late in the pipeline to participate in semantic analysis.

So one thing I've been wondering is whether we could use an LTO-style summary file approach to do incremental type-checking. IIUC, that could allow things like the following (imagining that we have generic value parameters):

struct FixedSizeArray<Element, let Count: Int> { ... }

func supportsSpecialArrayOperations<Element>(
  _ elementType: Element.Type,
  count: Int
) -> Bool {
  return TypeInfo<Element>.isTrivial && count.isMultiple(of: 2) // && ...
}

extension FixedSizeArray where supportsSpecialArrayOperations(Element.self, count: Count) {

  func mySpecialOperation() -> Element { ... }
}

Then a call to myObject.mySpecialOperation() would initially be flagged as 'maybe okay' by semantic analysis; SIL would be generated and optimised, and we'd process the file again with those results, at which point we could statically determine whether that function exists or not. A similar principle could allow generated protocol conformances to have complex predicates which are also expressed as ordinary Swift code.
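Under the same hypothetical syntax as the example above, a predicate-gated conformance might read something like this (the `SpecialCollection` protocol name is invented for illustration):

```swift
// Hypothetical syntax: a conformance whose existence is decided by a
// compiler-evaluable predicate, resolved in the later, post-SIL pass.
extension FixedSizeArray: SpecialCollection
    where supportsSpecialArrayOperations(Element.self, count: Count) {
  func mySpecialOperation() -> Element { ... }
}
```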

That's the picture I've assembled from fragments here and there - is that an accurate statement of the problem, and could this kind of solution work in principle?

If we allowed type-level predicates like that, then the expressions would most likely be required to be compiler-evaluable (implying that they're inlinable, and thereby have their implementations exposed to the compiler in an interpretable form). In at least some situations, we can factor the evaluation in such a way that semantic analysis does not rely on its result to make name lookup or type inference decisions. Protocol conformance is one such potential situation, because type checking the conforming type and uses of the protocol only needs to know that the conformance exists; if resolving the members of that conformance used compile-time evaluation, it could conceivably happen after semantic analysis as a mandatory SIL pass. Similarly, if overload resolution cannot be directed by compile-time evaluation (say, if we treated definitions with a predicate like your example as the least specific overload), then we would not have to evaluate the predicate in order to do semantic analysis.
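Today's overload ranking already gives a feel for the "least specific overload" idea. A sketch in current Swift (the `summary`/`Opaque` names are mine): the unconstrained overload acts as a fallback that resolution can select without evaluating anything, while a more-constrained one wins whenever its constraints are known to hold at the call site. A predicate-gated overload could analogously sit at the bottom of this ranking.

```swift
extension Collection {
    // The unconstrained overload: the least specific candidate.
    func summary() -> String { "a collection" }
}

extension Collection where Element: Equatable {
    // A more-constrained overload; overload resolution prefers it
    // whenever the constraint is known to hold at the call site.
    func summary() -> String { "an equatable collection" }
}

struct Opaque {}  // deliberately not Equatable

print([1, 2, 3].summary())   // an equatable collection
print([Opaque()].summary())  // a collection
```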
