Automatic Requirement Satisfaction in plain Swift

The fact that it isn't possible today doesn't mean it can't be expressed some other way. Compile-time evaluation has the power to fail compilation too if the evaluation encounters an error, for instance.


If we can make the compile-time evaluation model work, it sounds to me like a more pleasant and flexible model to work in. It also channels energy in a direction that I suspect has much more general utility.

What known use cases exist for Structural beyond providing default conformances? I've been aware of datatype generic programming and shapeless for quite a while but never had time to look into what use cases may have been discovered beyond the obvious ones discussed in this proposal.


Hi Joe!

Thanks very much for chiming in on this thread. Much appreciated.

We've thought a bit about these sorts of potential points in the design space (and have prototyped a number of them too!). One concern we have tried to think through carefully is that we want to fit within Swift's current design (which, e.g., assumes separate compilation).

The combination of separate compilation (e.g. even separate files in a module) and optimization passes to de-virtualize + specialize complicates adding a feature for allowing [lack of] compile-time evaluation to fail compilation.

Regarding slow compile times & heavy memory usage from deeply nested type metadata: we've intentionally chosen a shallow representation to avoid slow compile times and complicated types. (And we've done some preliminary analysis of compile times for structs with many fields. This also happens to work well with separate compilation & resilience boundaries.)

Further, I suspect that providing good error messages is the same amount of work (and perhaps even less) for this approach than for arbitrary compile-time evaluation & specialization & KeyPaths & ... . (But I'm quite happy to debate the point & hear more!)

On a related note, I suspect that we can invent extra syntax & sugar if we wanted to. (We've shied away from that for now.) Additionally, as called out previously, this very much begs for variadic generics. :slight_smile:

@Joe_Groff: As part of writing this response, I feel like I'm making somewhat ungrounded speculations. What do you recommend as the way for us to become concrete about these concerns / hypotheticals so we can make progress towards a conclusion?


This seems like an area where we can take our time to figure out what the best overall design for reflection in the language is, instead of binding ourselves to the constraints of the current design. We've had success using compile-time evaluation to interface with Apple's os_log architecture, which is similar in that it requires a lot of nontrivial compile-time lowering to efficiently create format strings and data buffers to feed to the system log facility. On the other hand, it also seems like the proposed feature cuts against the grain of Swift's current design in some ways; as many people have noted already, it requires types that take advantage of the facility to expose conformance to a protocol that reveals the underlying structure of the type, in a way that would prevent the type layout from being changed without breaking ABI.

I don't think compile-time evaluation is at odds with separate compilation. You have to expose information to the compiler one way or the other, encoding it as types or as a small set of inlinable code.
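For instance, the "small set of inlinable code" route already exists today: marking a declaration `@inlinable` ships its body in the module's interface, so a client's compiler can see, evaluate, or specialize it even though the modules are compiled separately. A minimal sketch (the function itself is invented for illustration):

```swift
// Sketch: @inlinable exposes a body across a module boundary. A client's
// compiler can specialize or constant-fold calls to it despite separate
// compilation, because the body travels with the module interface.
@inlinable
public func isPowerOfTwo(_ n: Int) -> Bool {
    return n > 0 && n & (n - 1) == 0
}
```

The same mechanism is how the standard library exposes small generic algorithms to client modules for specialization.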

Well, using a cons list to encode a list of fields means you're going to nest types arbitrarily deep the larger they get. Raising custom errors from compile-time-evaluated code is a matter of fatalError-ing or asserting with a message; we've discussed having attributes that might be able to direct custom type error messages, but that's a more complicated problem without clear solutions.
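A sketch of the fatalError/assert route (ordinary Swift; under a compile-time evaluation model the same trap would surface as a compile error carrying the message, rather than firing at runtime — the function and limit here are invented for illustration):

```swift
// Sketch: if this function were evaluated by the compiler, the trap
// below would fail compilation with its message. Today it only fires
// at runtime. The 64-field limit is a made-up example.
func validatedFieldCount(_ count: Int) -> Int {
    precondition(count <= 64,
                 "structural conformance not supported beyond 64 fields")
    return count
}
```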

Yeah, there are many interrelated things here that I think need holistic consideration. Variadic generics, too, IMO call out for compile-time evaluation, to make working with them not require programming in a different sub-language. But they have the same challenge as conditional default conformances: you really want to feed type information back between the type and term levels while doing so.



FYI, shabalin will be discussing this proposal in the Swift for TensorFlow Open Design Review meeting tomorrow. Anyone is welcome to join.

The meeting will be Friday 06/05 at 9am pacific / 16:00 UTC.

Meeting coordinates:

OR join by phone: +1 475-558-0218, PIN: 624 329#


Hi Ewa, is this at 9AM Pacific time?


Yes, 9AM Pacific Time. Thanks for clarifying.

Great to see you folks @shabalin et al pushing on these topics in Swift :heart:

Just wanted to voice that some form of being able to pull off automatic conformances in libraries would be tremendously helpful. Today, when implementing libraries that need such things without being part of the compiler, one has to resort to source generation, which is painful and error-prone (and limited in how it can inspect the involved types...). Though for what I have in mind it would also be necessary to allow inspecting the functions declared on a type, I think...

Not really going to take sides on compile-time evaluation vs. this approach; @Joe_Groff knows way more about what's right, I suppose. But getting something like that would be tremendously helpful, also so other types can have "Codable-like" semantics, where a type gets the conformance if all the fields it contains conform, etc. (One example I have in mind is "Copyable" or something like that, though again that's yet another topic of where/how to implement it.)
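For comparison, today's conditional conformances already express the "all members conform" pattern when the members are type parameters; what's being asked for here is the analogous thing over a concrete type's stored fields. A sketch of the existing mechanism (the `Pair` type is invented for the example):

```swift
// What exists today: a generic type can gain a conformance conditional
// on its type parameters conforming, and the compiler even synthesizes
// the witness when the extension lives in the same file as the type.
struct Pair<A, B> {
    var first: A
    var second: B
}

extension Pair: Equatable where A: Equatable, B: Equatable {}

// What this thread asks for is the analogue driven by a concrete type's
// stored fields, without needing a generic parameter per field.
```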


I'm not sure compile-time evaluation matters to constrained extensions (such as for protocol default implementations), unless you're talking about making compiler-evaluable code part of the type system.

I mean - it would be really cool...

extension FixedSizeArray<Element, let Size: Int> where Size.isMultiple(of: 2) {
  // Dreaming that we had generic value parameters...
}

But I'm doubtful that that kind of thing is actually feasible. I remember asking if it would be possible to use it in conditional compilation:

So if I understand that right, the compile-time evaluation happens after type-checking, meaning everything before that (parsing, type-checking itself including constrained extensions) can't make use of the results.

We're speaking specifically about generated default implementations here, which in some ways makes the issue of compiler evaluation interacting with the type system less important. Generated default implementations are the last resort the compiler picks up when there are no other candidate witnesses available, so a compile-time evaluation approach could rely on failure during evaluation to constrain when the default implementation is applied. If the evaluation of the default implementation builder fails, there's no other way the type could have conformed to the protocol anyway.
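That last-resort behavior mirrors how protocol extension defaults already work today; a minimal sketch of the existing mechanism:

```swift
protocol Greeter {
    func greet() -> String
}

// A default implementation is only used when the conforming type
// supplies no witness of its own.
extension Greeter {
    func greet() -> String { "hello" }
}

struct Loud: Greeter {
    func greet() -> String { "HELLO" } // the type's own witness wins
}

struct Quiet: Greeter {} // falls back to the default
```

The proposal's generated defaults would slot into the same position: consulted only once no other candidate witness exists.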

I spent some time on this after @shabalin's presentation yesterday. (The presentation was clear and interesting, thanks Denys.)

I had to work through this concretely to understand it. I hope this helps somebody else understand it better, and also that it does not make somebody else misunderstand it. Please correct me on any errors.

Quick summary:

  1. The StructuralCons version has complicated syntax but works.

  2. I would like something easier to read and less nested.

  3. Alternatives:

    1. Create fixed-type-arity Structure types for the fields and properties of structures (and anything else where you need structure metadata). This works but multiplies entities a lot.

    2. Variadic generics. But if I try to analyze that out in any detail it gets complicated because variadic generics are complicated. I have not reached full understanding there though.

  4. Possible conclusions:

    1. Use the StructuralCons way but add some stuff to the compiler to manage the syntax. Maybe you could hide some of this from the programmer. (Hand-waving because I have not thought that through.)

    2. Have the compiler synthesize something like the fixed-arity Structure types to simplify the type structure and syntax.

    3. Just deal with the StructuralCons stuff and assume that you know how to fly when you start the propeller. If the nested types strain the compiler then work on the compiler to deal with it.


I am more comfortable with the StructuralCons approach than I was when I started, after I worked through a couple of alternatives in strawman code. Using the Cons-ed types you can recurse through arbitrary-sized structures.
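To make that recursion concrete, here is a minimal self-contained sketch of the cons-list idea (the names echo the proposal's, but the exact API here is invented):

```swift
// Minimal cons-list encoding of a sequence of fields. The real proposal's
// API differs; this just shows how recursion over the encoding works.
struct StructuralEmpty {}
struct StructuralCons<Value, Next> {
    var value: Value
    var next: Next
}

// Recursing: each node type conforms, and the generic constraint peels
// off one field per step, terminating at StructuralEmpty.
protocol StructuralHashable {
    func combine(into hasher: inout Hasher)
}

extension StructuralEmpty: StructuralHashable {
    func combine(into hasher: inout Hasher) {}
}

extension StructuralCons: StructuralHashable
    where Value: Hashable, Next: StructuralHashable {
    func combine(into hasher: inout Hasher) {
        hasher.combine(value)
        next.combine(into: &hasher)
    }
}
```

A two-field struct's representation is then a type like `StructuralCons<Int, StructuralCons<String, StructuralEmpty>>`, which is exactly the nesting being discussed.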

Structure1 ... StructureN

One alternative would be to create a family of Structure1 ... StructureN structs that took a known number of type parameters. You'd get something like

struct Structure3<Type_1, Type_2, Type_3> {
  let _1 : Type_1
  let _2 : Type_2
  let _3 : Type_3
}
(Assume that's in the middle of a family of structs with Structure2 before, Structure4 ... after.)

Then your .structure method would return one of those for the parts of your structure you wanted to return, matching the correct cardinality.
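A minimal sketch of what that could look like (`Structure2` and the `structure` property are hypothetical names for this alternative, not part of the proposal):

```swift
// Hypothetical fixed-arity metadata type, and a type exposing its fields
// through it. Note the arity is baked in: adding a field to Point means
// switching to Structure3 by hand.
struct Structure2<Type_1, Type_2> {
    let _1: Type_1
    let _2: Type_2
}

struct Point {
    var x: Int
    var y: Int
    var structure: Structure2<Int, Int> { Structure2(_1: x, _2: y) }
}
```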

I coded that up and it works (I can do the CustomHashable example at least.) It's a little easier to read than with the StructuralCons, but you need to use the right StructureN and change it when you add new fields or properties to the struct you're describing, etc. (I try not to use the words “static” and “dynamic” for anything but you see my point.)

There's a gist for that sample code here.

Variadic Generics

The strawman version uses some kind of variadic generic for the sequence of fields and properties in a structure.

If you have some syntax like:

struct Structured<[T]> { ... }

that expands to

struct Structured {
  let _1 : T[0]
  let _2 : T[1]
  // ... and so on for each T
}

in my strawman variadic generic syntax, then you can imagine how you could generate Cons-less structure components.

The problem with my strawman syntax for variadic generics (and possibly with other ways to do variadic generics) is that it's hard to figure out how to use the generated values in real code.

In the body of the struct you're defining, you can't write very strongly-typed code because you don't know the number and types of the created fields. PartialKeyPaths sort of get there, but the Any on the right-hand side really loosens the guarantee.

On the caller side, you know more (since you supply the types) but in the structure case you're supplying a variadically-generic struct out to some client, who does not directly know about the structure, unless there's some independent reflection like .allKeyPaths.

I went a bit down the path of analyzing variadic generics and things started to get weird. I have not followed that path all the way to its conclusion yet.

Conclusions (Partial conclusions)

I knew why there were StructuralConses before, but now I really know why.

Hope that all makes sense.


Great write-up!

The Cons-list is certainly an interesting workaround. Unfortunately I think it's a general problem that many of the interesting things we'd like to do with the language are limited by the current generics system. Especially when you get into the realm of plugging in to Swift at compile-time (e.g. code transforms like function builders, static reflection like for default implementations based on member information), or really any kind of boilerplate elimination. We need to deal with arbitrary sequences of heterogeneous types (possibly with constraints), and that's why variadic generics is so important - even if it sounds a bit niche and looks like alien hieroglyphics. Fortunately it's (loosely?) on the roadmap for Swift 6.

Let me sketch out one possible design using variadics, and without relying on compile-time evaluation.

Most things that use variadic generics will have a variable number of members (e.g. the common example of a ZipN wrapper needs to store the N sequences which make it up, each of which has a different type), so there will need to be some way to iterate over them. So you could imagine a ZipN iterator looking something like this:

struct ZipNSequence<(T: Sequence)...>: Sequence {
  // ...
  struct Iterator: IteratorProtocol {
    var iterators: (T.Iterator...)
    mutating func next() -> (T.Element...)? {
      // Here. Constructs a tuple by iterating over the variadic members,
      // each of which has a different type.
      return (iterators.next()...) // strawman: nil once any iterator is exhausted
    }
  }
}
There are lots of options for how we could design a Members<T...> structure, but the simplest is to allow extracting an object's members as a tuple by keypath application, just like the ZipNSequence does when calling .next().

struct TypeInfo<Root> {
  // The "magic" here is that the parameters depend on compile-time
  // knowledge about `Root`.
  // Think of it like every single type had a hidden extension with its own 
  // definition of this, each with the correct type.
  static var members: Members<...magic...>

  struct Members<T...> {
    static var keyPaths: (KeyPath<Root, T>...) { /* compiler magic */ }

    static func valuesAsTuple(from instance: Root) -> (T...) {
      // Or something like this. Analogous to calling ""
      // on every item. Alternatively, add a 
      // "KeyPath<Root, T>.get(from: Root) -> T" member function.
      return (instance[keyPath: keyPaths...]...)
    }
  }
}

Then you could compare the tuples, since those are now Equatable to any arity.

extension Equatable where TypeInfo<Self>.Members<(T: Equatable)...> {
  public static func ==(lhs: Self, rhs: Self) -> Bool {
    return TypeInfo.members.valuesAsTuple(from: lhs) == 
             TypeInfo.members.valuesAsTuple(from: rhs)
  }
}

If you had to do some more complex walking, like for Codable, you'd have to iterate, possibly with some kind of local generic binding (iteration over heterogeneous types isn't currently possible in Swift, which is one of the reasons we can't iterate over tuple components):

extension Encodable where TypeInfo<Self>.Members<(T: Encodable)...> {
  public func encode<T>(using encoder: T) where T: Encoder {
    for <ChildType: Encodable> child in TypeInfo.members.valuesAsTuple(from: self) {
      child.encode(using: encoder)
    }
  }
}

I think that also looks a lot simpler than writing a bunch of conformances for every node in the Cons chain.


Hello! Why is the structuralRepresentation property not static? I see that this provides access to the underlying values, but wouldn't this allow an implementation to vary the structure returned? Do you know how this affects the compiler's ability to optimise the conforming type's implementation?

I've been meaning to respond to this for some time now but have had my hands full with other things.

Wow, that's quite the scary tale! Over the years, especially as I've gained experience with the minimal metaprogramming already possible in Swift, I've become more and more convinced that what I think you're calling “type-level metaprogramming” is not the evil you seem to be saying it is.

Because Swift generics are type-checked at their point of definition, “type-level metaprograms” are just “programs,” and error messages are no worse than we get from any other Swift code. In fact, more “first class” metaprogramming systems such as quasi-quotes inevitably create the same kind of bad error messages you may be accustomed to seeing from C++ template metaprograms—those that point into the guts of the implementation—because they operate on AST fragments that are disconnected from the type system. This is just one advantage of keeping the type system actively engaged at all times.

I'm not sure what causes you have in mind when you cite “slow compile times,” but it's important to remember that when generics aren't separately compiled, every distinct invocation of a metafunction needs to be instantiated and type-checked. Swift doesn't have that problem.

Heavy memory usage due to nested metadata: if it's a runtime cost you're concerned with, except in atypical “infinite type” scenarios, non-public types and conformances don't have to incur any metadata cost in shipping code. I understand that it's more important to optimize that metadata cost away on mobile platforms, but it's an opportunity that should be taken regardless of how deeply types are ultimately nested.

I don't understand the assertion that the type system should not be expressive enough for what we want to do in protocol implementations. Of course nobody is claiming that all computation should be possible or practical in the type system, but surely the type system needs to be expressive enough to support—along with the rest of the language—everything we need to do in Swift.

Frankly, given the fix under discussion for conditional conformances of generic types, the type system is just a small step away from being Turing-complete already. I happen to think it's inevitably going to get there, because, as I mentioned, type-level metaprograms are just programs, and there's no hard line you can point to and say “naughty coder, now you're writing a metaprogram, and you shouldn't use the type system for that.” People that view themselves as doing ordinary programming are going to keep demanding a more expressive and consistent type system.

That's really not true, unless there's some feature I don't know about for describing a new type at runtime and injecting it back into the type system. You can't generate the associated TangentVector type for a model of Differentiable using any form of runtime evaluation, so there's nothing to lift to compile-time. And even if we had such a runtime facility, it would have the same error message weaknesses as a quasi-quote-based macro system. This is another place where keeping type computation engaged with the type system has real benefits.

Lastly, regarding variadic generics: for the record, I want them as much as anybody, and I think they could potentially clean up some code related to this proposal. But even once we have them, I'm pretty sure we'll want to use them with nested type structures in order to express their full potential. Certainly all the actual implementations of variadic generics in other languages are used that way. I don't love looking at cons-lists in diagnostics, but you can solve that other ways, like making (A, B) an alias for Cons<A, Cons<B, Never>> or whatever. We seem to want tuples to have all the properties of nominal types anyhow… :wink:

/cc @Douglas_Gregor @Chris_Lattner3


Instead of thinking about things in terms of type system vs. compile-time evaluation, I think we should think about how these two mechanisms can work together and strengthen each other. I think you'd agree that one of the most frustrating things about C++'s evolution is how the different strands of template metaprogramming and constexpr have never quite managed to dovetail. I think we're in a good position to do better than that in Swift, since the generics system already has a compile-time/runtime duality, and compile-time evaluation gives us opportunities to give the rest of the language that same power to span compile time and runtime.

Getting to the matter at hand here, I'm concerned that we're growing a lot of different reflection-ish interfaces in different directions. What I'd like to figure out is what the best foundational interface for exploring type structures, and constructing new ones, would look like. I think it should be possible to have a mechanism that can use compile-time information when available, but also fall back to runtime reflection when spanning ABI boundaries (or just when opting out of specialization for any other reason). I also think that that will be necessary for success, because Swift's type system is never really fully "static" in the way C++'s is, once you have dynamic reflection or multiple ABI-independent libraries in play.


I would love to hear your ideas about that!

Actually I had to stop programming in C++—and, with gratitude, also got to stop worrying about its evolution—when I joined the Swift project in 2013. IIUC that was fairly early in the evolution of constexpr and all kinds of new TMP stuff has happened since, so I guess I never had the privilege of being frustrated by that problem. But I can imagine what it must be like for you :wink:

As far as I know, the rest of the language already has that power, at least in principle. There's no law in the non-existent language specification that prevents the compiler from deducing whatever it wants about what runtime code will do, and I/O aside, moving that computation to compile-time where possible.

But I'm really not focused on when computation happens. Nothing being proposed here—other than optimizability—depends on being able to do more at compile time than the type system already does in a debug build. We don't have a problem with crossing that boundary. There's a boundary with the type system that's more of a problem: once computation leaves the type system, you can't fully get it back in. We can construct different types depending on computation outside the type system (example), but you can never use computation outside the type system to decide which of these types will be bound to a typealias or associatedtype.
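A small self-contained illustration of that boundary (the types here are invented for the example): term-level computation can decide which value, and hence which dynamic type, to produce, but that decision can never flow back into a type position such as a typealias or associatedtype binding.

```swift
protocol Shape {}
struct Circle: Shape {}
struct Square: Shape {}

// Term-level computation can pick which concrete type to construct...
func makeShape(roundness: Double) -> Shape {
    if roundness > 0.5 { return Circle() }
    return Square()
}

// ...but nothing lets that result bind a type: there is no way to write
// anything like `typealias Chosen = (roundness > 0.5 ? Circle : Square)`.
```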

When trying to address the problem of metadata bloat, one approach we considered aside from scoped conformances was to think about ways a programmer could demand that certain things happen at compile-time. We decided not to propose those approaches because they clash with Swift's non-monomorphizable model, and so can't be composed with regular Swift code. The lesson for me in that is that it may be a red herring to focus on the compile-time/run-time distinction.

Doesn't what's being proposed here achieve exactly that? It works just fine with code composed across resilience boundaries.

You don't need anything nearly that esoteric for the problem to arise; the example I pointed at above is enough. It's why we decided not to pursue "compile-time features": the inability to monomorphize generics in the general case affects the type system pervasively.

We're talking about adding stuff to the language here, so we don't necessarily need to constrain ourselves to what's possible now. Protocol conformance is a relatively well-factored part of the system: whether a type conforms to a protocol, and what that tells the rest of the type system about what you can do with the type, is orthogonal to how the members of the conformance get computed. If we were to introduce a Turing-complete decision process for picking witnesses, or for generating them via compile-time evaluation, that wouldn't affect the decidability of the rest of the type system, which only needs to know that the conformance exists.

It kind of defeats the point of resilience, though, if it encodes the structure of a type as an associated type, because an associated type binding can't change. I think that we generally don't want these sorts of fundamental type traits to be tied to protocol conformances; if we do need a type-level representation of them, I think they should behave more like AnyObject, which can be used as constraints, but don't require conformance, because they're intrinsic to the type.
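For reference, the AnyObject behavior being pointed at: it constrains without any declared conformance, because being a class is intrinsic to the type. A minimal sketch:

```swift
// AnyObject can be used as a constraint even though no class ever
// declares conformance to it; it's intrinsic to class types.
final class Box {
    var value = 0
}

func identity<T: AnyObject>(of object: T) -> ObjectIdentifier {
    return ObjectIdentifier(object)
}
```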

Sure, but the problems being addressed by this proposal are here today and we have every reason to expect they'll continue to grow. In contrast, nobody's even floated an idea for a way to get computation that—like all ordinary computation in Swift, the system is allowed to perform at run-time—back into the type system, and as I've mentioned, forcing any such computation to happen at compile-time clashes mightily with Swift's fundamentals. We need to make progress based on what's viable (and implementable!) today, not some abstract idea of what might be possible someday if someone figures out how to do it.

Protocol conformance is a relatively well-factored part of the system: whether a type conforms to a protocol, and what that tells the rest of the type system about what you can do with the type, is orthogonal to how the members of the conformance get computed. If we were to introduce a Turing-complete decision process for picking witnesses, or for generating them via compile-time evaluation, that wouldn't affect the decidability of the rest of the type system, which only needs to know that the conformance exists.

I agree with that—it's one reason I'm not afraid of type-level metaprogramming—but don't understand why you're saying it.

That's not a problem if the conformance doesn't cross a resilience boundary, which is what the scoped conformances proposal is about.

That sounds interesting, but I don't really understand how it would play out. Can you give an example?

While freely admitting that I don't know much about this area, one issue that I've heard WRT letting ordinary Swift code participate in type-checking is that compile-time evaluation happens as an optimisation after SIL is generated, and that's too late in the pipeline to participate in semantic analysis.

So one thing I've been wondering is whether we could use an LTO-style summary file approach to do incremental type-checking. IIUC, that could allow things like the following (imagining that we have generic value parameters):

struct FixedSizeArray<Element, let Count: Int> { ... }

func supportsSpecialArrayOperations<Element>(
  _ elementType: Element.Type,
  count: Int
) -> Bool {
  return TypeInfo<Element>.isTrivial && count.isMultiple(of: 2) // && ...
}
extension FixedSizeArray where supportsSpecialArrayOperations(Element, Count) {

   func mySpecialOperation() -> Element { ... }
}
Then when calling myObject.mySpecialOperation(), that will initially be flagged as 'maybe okay' by semantic analysis, SIL will be generated and optimised, and we process the file again with those results and can now statically determine whether that function exists or not. And a similar sort of principle could allow generated protocol conformances to have complex predicates which are also expressed as ordinary Swift code.

That's the picture I've assembled from fragments here and there - is that an accurate statement of the problem, and could this kind of solution work in principle?

If we allowed type-level predicates like that, then the expressions would most likely be required to be compiler-evaluable (implying that they're inlinable, and thereby have their implementations exposed to the compiler in an interpretable form). In at least some situations, we can factor the evaluation in such a way that semantic analysis does not rely on the result of the evaluation to make name lookup or type inference decisions. Protocol conformance is one such potential situation, because type checking the conforming type and uses of the protocol only needs to know that the conformance exists; if resolving the members of that conformance used compile-time evaluation, it could conceivably happen after semantic analysis as a mandatory SIL pass. Similarly, if overload resolution cannot be directed by compile-time evaluation, such as if we treated definitions with a predicate like your example as the least specific overload, then we would not have to evaluate the predicate in order to do semantic analysis.