Automatic Requirement Satisfaction in plain Swift

I think that the pitch gives a valid approach to this kind of problem, but for very performance-sensitive protocols like Equatable and Comparable I don't think it will ever be viable to do what is essentially runtime introspection (equivalent to Mirror, but better defined) as opposed to doing the work directly.

Imho automatic requirements have to be defined statically.


Would this negate the need for SwiftUI to use metadata for managing a View’s state?

I'm not sure this is the best design for static reflection in Swift (which is what this is - using compiler code synthesis to build up some traversable metadata describing the value's members).

Just concentrating on the non-public conformance case for now, I think it should be possible to get equivalent performance to a hand-written implementation. I'm concerned that the long chains of generic types could lead to longer compile times (and worse -Onone/debug performance), but I guess function builders rely on the same thing.

That said, I'm not really sure I love the chain idea (with the StructuralEmpty terminator). It's painfully clear that we need variadic generics soon. It would be much nicer to express something like "a conformance for all Structurals when all their members are Equatable" like this:

extension Structural: Equatable where TypeInfo<Self>.Members<(T: Equatable)...> {
  // ...
}

Where TypeInfo<Root>.Members<T...> could be some magic static reflection type (like MemoryLayout<T>) with guaranteed optimisations.


Just to clarify the point about performance: it's not a matter of implementing completely new optimizations, but rather a matter of polishing what's already there.

As an example, a simpler structural encoding that only uses Cons and Empty is easier to optimize away and gets much closer to the performance of a custom hand-specialized implementation (raw numbers here):

| Benchmark | Baseline (1x) | Performance (more is better) |
| --- | --- | --- |
| CustomEquatable | Equatable | 0.8268825083 |
| CustomHashable | Hashable | 0.8791851976 |
| CustomComparable | Hand-specialized | 0.7609794796 |
| Additive | Hand-specialized | 0.8708537815 |
| InplaceAdd | Hand-specialized | 0.5392857712 |

We designed Structural in such a way that it's possible to statically remove the conversion to and from the encoded version. The current optimizer is already doing an extremely reasonable job here without any contributions on our side.
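A minimal sketch of such a Cons/Empty encoding (the names `Empty`, `Cons`, and `encode` here are illustrative, not the actual Structural API) shows how member-wise equality falls out of a conditional conformance:

```swift
// Illustrative encoding, not the actual Structural API.
struct Empty: Equatable {}

struct Cons<Value, Next> {
    var value: Value
    var next: Next
}

// The conformance exists only when every link in the chain is Equatable,
// and `==` is synthesized member-wise.
extension Cons: Equatable where Value: Equatable, Next: Equatable {}

struct Point {
    var x: Int
    var label: String
}

// A two-field struct encodes as Cons<Int, Cons<String, Empty>>.
func encode(_ p: Point) -> Cons<Int, Cons<String, Empty>> {
    Cons(value: p.x, next: Cons(value: p.label, next: Empty()))
}

print(encode(Point(x: 1, label: "a")) == encode(Point(x: 1, label: "a"))) // true
```

Because the encoding is just a chain of trivial value-type wrappers, the optimizer can specialize and inline the chained `==` down to direct field comparisons.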


The proposal outlined above doesn't introduce runtime reflection, but rather aims to be an equivalent of static or compile-time reflection. While we don't include any direct static-evaluation guarantees in the text, we believe this can be added in the future, either through improved optimizations at the SIL level and/or a more explicit addition of guaranteed compile-time evaluation to the Swift language.

As outlined in my previous comment, Swift is already doing a great job at optimizing a simpler version of the encoding, so there is some evidence suggesting that more work on optimizations at the SIL level could be sufficient to minimize (or even completely eliminate) the current performance gap.


Generalizing protocol conformance synthesis is an interesting problem, but I don't think type-level metaprogramming is the way to do it. It's a path of sadness that will lead only to bad compiler errors, slow compile times, heavy memory usage from deeply nested type metadata, and programmer frustration, because the type system is not, and really should not be, expressive enough for the sorts of things people will want to do in their protocol implementations. I also fear that we're working on too many different ways of expressing the same thing; there's already a well-developed proposal for iterating through the structure of a type with key paths, for instance. Type specialization is really just a special case of compile-time evaluation, and Swift is well set up to lift values between the type and value levels, and also to constant-fold and evaluate well-designed reflection APIs at compile time, so that default implementations can be written in terms of reflection and still generate optimized, specialized implementations. I think that's a far more promising direction.


Not for the implementations, but you at least need something at the type-system level to constrain conformances (e.g. for Equatable). By far the most common constraint is "all types conform to X", and I think any reasonable implementation of variadic generics ought to also support that. So it seems reasonable to model a type's properties as a variadic generic.

C++'s proposed static reflection doesn't make use of template metaprogramming, either (because of all the things you mentioned, particularly compile time). They have a magic type meta::info, which you can query or iterate using built-in expressions like get_data_members(info), and it's guaranteed to never escape to runtime (which we can be looser about).


I feel that compile-time evaluation will also be important to having a satisfying model for working with variadics, since it is natural to want to work with them as collections rather than use otherwise unidiomatic cons-walking techniques. That would also require figuring out the interactions between type-level and value-level programming necessary to make that work. But Swift already has semantically equivalent runtime and compile-time type system implementations, as well as operations like existential wrapping and opening that move information from type- to value-level and back, so I think we're in a good position to explore that direction.


Where do value type parameters and dependent typing fit into this?


The reason we went for a more type-centric approach here is that we wanted to express the fact that a derived conformance is available only when all of the struct/enum parts (stored properties and/or associated values nested within) conform to the same protocol. I believe it is not possible to express this statically with a less typed approach like KeyPathIterable. You could provide a "blanket" conformance that statically works for any type, but it could only fail at runtime if parts of the type are non-conforming (which is too late). Any form of optimization at compile-time will have to respect this failure to preserve the semantics.
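To illustrate the contrast, here is a hedged sketch of a less typed, KeyPathIterable-style approach (the `MemberIterable` protocol and `blanketEqual` function are hypothetical names, not an actual API): the blanket implementation type-checks for any type, so a non-conforming member can only be rejected at runtime.

```swift
// Hypothetical, less typed approach: every member is erased to Any.
protocol MemberIterable {
    var members: [Any] { get }
}

// This compiles for *any* MemberIterable, conforming members or not...
func blanketEqual<T: MemberIterable>(_ a: T, _ b: T) -> Bool {
    let (xs, ys) = (a.members, b.members)
    guard xs.count == ys.count else { return false }
    for (x, y) in zip(xs, ys) {
        // ...so a member that doesn't support equality can only be
        // detected here, at runtime, which is too late. (AnyHashable is
        // used as a stand-in for erased equality in this sketch.)
        guard let xh = x as? AnyHashable, let yh = y as? AnyHashable else {
            fatalError("member does not support equality")
        }
        if xh != yh { return false }
    }
    return true
}

struct Point: MemberIterable {
    var x: Int
    var y: Int
    var members: [Any] { [x, y] }
}

print(blanketEqual(Point(x: 1, y: 2), Point(x: 1, y: 2))) // true
```

The typed encoding moves the `fatalError` above into a compile-time constraint failure instead.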

The fact that it isn't possible today doesn't mean it can't be expressed some other way. Compile-time evaluation has the power to fail compilation too if the evaluation encounters an error, for instance.


If we can make the compile-time evaluation model work, it sounds to me like a more pleasant and flexible model to work in. It also channels energy in a direction that I suspect has much more general utility.

What known use cases exist for Structural beyond providing default conformances? I've been aware of datatype generic programming and shapeless for quite a while but never had time to look into what use cases may have been discovered beyond the obvious ones discussed in this proposal.


Hi Joe!

Thanks very much for chiming in on this thread. Much appreciated.

We've thought a bit about these sorts of potential points in the design space (and have prototyped a number of them, too!). One concern we have tried to think through carefully is that we want to fit within Swift's current design (e.g. one that assumes separate compilation).

The combination of separate compilation (e.g. even separate files in a module) and optimization passes to de-virtualize + specialize complicates adding a feature that allows a failure of compile-time evaluation to fail compilation.

Regarding slow compile times & heavy memory usage from deeply nested type metadata: we've intentionally chosen a shallow representation to avoid slow compile times and complicated types. (And we've done some preliminary analysis of compile times for structs with many fields. This also happens to work well with separate compilation & resilience boundaries.)

Further, I suspect that providing good error messages is the same amount of work (and perhaps even less work) for this approach than for arbitrary compile-time evaluation & specialization & KeyPaths & ... . (But quite happy to debate the point & hear more!)

On a related note, I suspect that we could invent extra syntax & sugar if we wanted to. (We've shied away from that for now.) Additionally, as called out previously, this very much begs for variadic generics. :slight_smile:

@Joe_Groff: As part of writing this response, I feel like I'm making somewhat ungrounded speculations. What do you recommend as the way for us to become concrete about these concerns / hypotheticals so we can make progress towards a conclusion?


This seems like something where we can take our time to figure out the best overall design for reflection in the language, instead of binding ourselves to the constraints of the current design. We've had success using compile-time evaluation to interface with Apple's os_log architecture, which is similar in that it requires a lot of nontrivial compile-time lowering to efficiently create format strings and data buffers to feed to the system log facility. On the other hand, it also seems like the proposed feature cuts against the grain of Swift's current design in some ways; as many people noted already, it requires types that take advantage of the facility to expose conformance to a protocol that exposes the underlying structure of the type, in a way that would prevent the type layout from being changed without breaking ABI.

I don't think compile-time evaluation is at odds with separate compilation. You have to expose information to the compiler one way or the other, whether by encoding it as types or as a small set of inlinable code.

Well, using a cons list to encode a list of fields means you're going to nest types arbitrarily deep the larger they get. Raising custom errors from compile-time-evaluated code is a matter of fatalError-ing or asserting with a message; we've discussed having attributes that might be able to direct custom type error messages, but that's a more complicated problem without clear solutions.
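To make the nesting concrete, a hedged sketch (again with illustrative `Cons`/`Empty` names, not the proposal's actual types): each additional stored property adds another level of generic nesting, so even a four-field struct already produces a type four levels deep.

```swift
// Illustrative cons-style encoding.
struct Empty {}
struct Cons<Value, Next> {
    var value: Value
    var next: Next
}

// A four-field struct...
struct FourFields {
    var a: Int
    var b: Double
    var c: String
    var d: Bool
}

// ...encodes as a type nested four generic levels deep; the depth of
// the metadata grows linearly with the number of stored properties.
typealias Encoded = Cons<Int, Cons<Double, Cons<String, Cons<Bool, Empty>>>>

print(String(describing: Encoded.self))
```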

Yeah, there are many interrelated things here I think need holistic consideration. Variadic generics too IMO also call out for compile-time evaluation, to make working with them not require programming in a different sub-language. But they too have the same challenge you have with conditional default conformances that you really want to feed type information back between type- and term-level while doing so.



FYI, shabalin will be discussing this proposal in the Swift for TensorFlow Open Design Review meeting tomorrow. Anyone is welcome to join.

The meeting will be Friday 06/05 at 9am pacific / 16:00 UTC.

Meeting coordinates:

OR join by phone: ‪+1 475-558-0218‬ PIN: ‪624 329‬#


Hi Ewa, is this at 9AM Pacific time?


Yes, 9AM Pacific Time. Thanks for clarifying.

Great to see you folks @shabalin et al pushing on these topics in Swift :heart:

Just wanted to voice that some form of being able to pull off automatic conformances in libraries would be tremendously helpful. Today, when implementing libraries that end up needing such things without being part of the compiler, one has to resort to source generation, which is painful and error-prone (and limited in how it can inspect the involved types...). Though for what I have in mind, it'd also be necessary to allow inspecting functions declared on a type, I think...

Not really going to take sides on the compile-time-evaluation vs. this approach, @Joe_Groff knows way more about what's right, I suppose. But getting something like that would be tremendously helpful, also so other types can get "Codable-like" semantics, where a type gets a conformance if all the fields it contains conform, etc. (One example I have in mind is "Copyable" or something like that, though again that's yet another topic of where/how to implement it.)


I'm not sure compile-time evaluation matters to constrained extensions (such as for protocol default implementations), unless you're talking about making compiler-evaluable code part of the type system.

I mean - it would be really cool...

extension FixedSizeArray<Element, let Size: Int> where Size.isMultiple(of: 2) {
  // Dreaming that we had generic value parameters...
}

But I'm doubtful that kind of thing is actually feasible. I remember asking if it would be possible to use it in conditional compilation:

So if I understand that right, the compile-time evaluation happens after type-checking, meaning everything before that (parsing, type-checking itself including constrained extensions) can't make use of the results.

We're speaking specifically about generated default implementations here, which in some ways makes the issue of compiler evaluation interacting with the type system less important. Generated default implementations are the last resort the compiler picks up when there are no other candidate witnesses available, so a compile-time evaluation approach could rely on failure during evaluation to constrain when the default implementation is applied. If the evaluation of the default implementation builder fails, there's no other way the type could have conformed to the protocol anyway.