The "Requirement Machine", a new generics implementation based on term rewriting

Hi all,

I just flipped a feature flag enabling a new generic signature minimization algorithm for protocol signatures. Since I've only posted a little bit about my work in this area here, I thought I'd give a more complete update.

A year and a half ago I wrote how Swift type checking is undecidable, then last year I sketched out a new way of formalizing Swift generics as a term rewrite system. Since then I've been working on implementing this idea with the goal of replacing the GenericSignatureBuilder.


The GenericSignatureBuilder is used for two distinct tasks in the compiler:

  • Computing a generic signature from user-written requirements in where clauses. A generic signature contains a "minimal, canonical" set of requirements from which the properties of all type parameters (generic parameters and their nested types) can be proven.
  • Answering queries about type parameters in a previously-built generic signature.
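
To make the two tasks concrete, here is a small sketch (the `sum` function is made up for illustration): minimization turns the user-written where clause into a canonical signature, and queries against that signature are what let the body type check.

```swift
// Signature written by the user: <T where T : Collection, T.Element == Int>.
// Queries against the minimized signature answer questions like
// "what is the type of xs[i]?" or "does T.Indices.Element equal T.Index?".
func sum<T>(_ xs: T) -> Int where T: Collection, T.Element == Int {
    var total = 0
    for i in xs.indices {   // i : T.Index, proven from the signature
        total += xs[i]      // xs[i] : Int, from T.Element == Int
    }
    return total
}
```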

The GenericSignatureBuilder has several long-standing problems that are quite hard to fix in an incremental way.

It is slow, with exponential time and memory usage when a protocol hierarchy has high "fan-out" (if protocol P1 has 4 associated types that conform to P2, and P2 has 4 associated types that conform to P3, and so on, it constructs 4*4*4*... rather large objects, one to represent each equivalence class, for example).

It uses a lot of memory, because answering generic signature queries requires keeping these rather large lazily-built GenericSignatureBuilder instances around for each generic signature the type checker encounters. In extreme examples you can make it allocate gigabytes of memory and end up with an effectively non-terminating compilation.

It suffers from various correctness issues, mostly in the minimization algorithm. For example, consider this protocol from SR-12120:

protocol Swappable where
    Swapped: Swappable,
    Swapped.Swapped == Self,
    Swapped.A == B, Swapped.B == A {
  associatedtype A
  associatedtype B
  associatedtype Swapped
}

The two requirements Swapped.A == B and Swapped.B == A imply each other, so one or the other can be dropped from the minimal generic signature for the protocol Swappable. However, the GenericSignatureBuilder would drop both, resulting in an incorrect protocol that did not satisfy the properties the user expected.
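
For illustration, here is a conforming pair I made up (IntString/StringInt are not from the original post), repeating the protocol for completeness. It shows the semantics the user expects: swapping twice gets you back to the original type, with A and B exchanged along the way.

```swift
protocol Swappable where
    Swapped: Swappable,
    Swapped.Swapped == Self,
    Swapped.A == B, Swapped.B == A {
  associatedtype A
  associatedtype B
  associatedtype Swapped
}

// A conforming pair: each type's Swapped is the other, with A and B exchanged.
struct IntString: Swappable {
  typealias A = Int
  typealias B = String
  typealias Swapped = StringInt
}
struct StringInt: Swappable {
  typealias A = String
  typealias B = Int
  typealias Swapped = IntString
}
```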

Another example concerns superclass requirements. Suppose you have two classes, and a Holder<T, U> type where T is required to be some subclass of Base<U>:

class Base<U> {}
class Derived : Base<Int> {}
struct Holder<T : Base<U>, U> {}

Now I write an extension of Holder for the case where T is further constrained to be a subclass of Derived:

extension Holder where T : Derived {}

The requirement T : Derived in the extension's where clause supersedes the requirement T : Base<U> in the definition of struct Holder, so the GenericSignatureBuilder would drop the latter requirement. However, that was not enough, because T : Base<U> together with T : Derived also implies that U == Int. The Requirement Machine correctly handles this:

extension Holder where T : Derived {
  func foo(_ u: U) -> Int {
    // fails to type check with GenericSignatureBuilder; RequirementMachine
    // correctly infers that "U == Int", therefore "u" can be returned here
    return u
  }
}
One last example from SR-14485 where the GenericSignatureBuilder completely mangles the generic signature:

func truncate<Input: FixedWidthInteger & UnsignedInteger & BinaryInteger,
              Output: FixedWidthInteger & SignedInteger & BinaryInteger>(_ input: Input, outputType: Output.Type = Output.self) -> Output 
              where Output.Magnitude == Input {

The above truncate() function would receive the generic signature <Input, Output where Input == Input.Magnitude, Output : FixedWidthInteger, Output : SignedInteger, Input.Magnitude == Input.Magnitude.Magnitude.Magnitude>. This has the nonsense requirement Input.Magnitude == Input.Magnitude.Magnitude.Magnitude, and Input itself loses its conformance requirement, so it doesn't even have a nested type named Magnitude at all.
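
For reference, here is a plausible body for truncate() (the implementation is my guess; the original only shows the signature):

```swift
func truncate<Input: FixedWidthInteger & UnsignedInteger & BinaryInteger,
              Output: FixedWidthInteger & SignedInteger & BinaryInteger>(
    _ input: Input, outputType: Output.Type = Output.self
) -> Output where Output.Magnitude == Input {
    // Reinterpret the unsigned bit pattern as the signed type of the
    // same width; Output.Magnitude == Input guarantees the widths match.
    Output(truncatingIfNeeded: input)
}
```

With the correct signature, `truncate(UInt8(200)) as Int8` is the two's-complement reinterpretation, -56.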

Finally, uncomputable generic signatures exist, and the GenericSignatureBuilder makes no attempt to define which signatures it supports, so the underlying approach is fundamentally unsound. A valid Swift compiler cannot claim to accept all "syntactically valid" generic signatures.

Current state

The new implementation is called the "Requirement Machine". The source code is in the lib/AST/RequirementMachine directory of the source tree. Unlike the GenericSignatureBuilder, which is one big file, I tried to split the new code up into smaller files with lots of comments.

In Swift 5.6, the Requirement Machine is enabled for generic signature queries. The GenericSignatureBuilder is still used for constructing new canonical minimal signatures. This setup doesn't address most of the bugs, but it reduces memory usage, since the Requirement Machine's rewrite system implementation is inherently more compact than the GenericSignatureBuilder's "expanded" graph of type parameters.

Swift 5.6 still allows falling back to the GenericSignatureBuilder for generic signature queries by passing the -requirement-machine=off frontend flag. This flag is gone on the main branch, where the old implementation of queries has been removed. We now always build a rewrite system for generic signatures encountered by the type checker.

On the main branch, I've been working on an implementation of generic signature minimization using the Requirement Machine. This is controlled with three frontend flags:

  • -requirement-machine-protocol-signatures={off,on,verify}
  • -requirement-machine-inferred-signatures={off,on,verify}
  • -requirement-machine-abstract-signatures={off,on,verify}

These flags enable the new code path for protocol signatures, for generic signatures of concrete types and functions, and, for the last one, for some generic signatures the compiler constructs internally as part of conformance checking and other tasks. The new algorithm is mostly implemented, except that emission of requirement redundancy and conflict diagnostics is still in progress.

For each flag, the "off" state runs the old GenericSignatureBuilder logic as before, and the "on" state runs the new logic. "verify" mode runs both and compares the results, triggering an assertion if they mismatch. The purpose of verify mode is to emit diagnostics (since so far only the GenericSignatureBuilder can do that) and to test the new code: on the generic signatures that the GenericSignatureBuilder could support, we expect the two results to match, to avoid breaking the ABI (the calling convention and function mangling).

Today I landed a PR to enable the -requirement-machine-protocol-signatures flag, setting it to "verify" by default. This is a temporary state of affairs; once we do some more qualification and finish diagnostics, I'll set it to "on" by default. What this means is that if you try an example that caused problems for the GenericSignatureBuilder, such as the Swappable protocol above, you'll get an assertion. If you see assertions on valid code, please let me know.

Once all three flags are set to on by default, the GenericSignatureBuilder will not run at all. At some unspecified future point when everyone is happy with the new state of things, I will remove the GenericSignatureBuilder entirely.


I've measured a significant memory usage reduction when the Requirement Machine was enabled for queries in Swift 5.6. I've also seen performance gains with the new minimization algorithm on main. I have not optimized the minimization algorithm much yet, so there is a lot of room for improvement, but it's dramatically faster on pathological examples, with more modest gains on codebases like the standard library. I have yet to see any projects where it is slower, despite the unoptimized state.


There are various frontend flags that produce interesting output if you're curious:

  • -dump-requirement-machine -- dumps all constructed rewrite systems
  • -debug-requirement-machine=... -- specify a comma-separated combination of debugging flags
  • -analyze-requirement-machine -- dumps various histograms I used for performance tuning


For solving generic signature queries and computing canonical types, the techniques are pretty standard, such as the Knuth-Bendix completion algorithm and so on. Incidentally, this is where the undecidable generic signatures are rejected; completion is a semi-decision procedure that either terminates with a correct solution or runs forever. We run it up to a limit set by command line flags, which you should hopefully never need to change:

  • -requirement-machine-max-rule-count
  • -requirement-machine-max-term-length
  • -requirement-machine-max-concrete-nesting
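
To give a feel for the term rewriting approach, here is a toy sketch (my own illustration, not the compiler's actual term representation): rules are applied repeatedly until a normal form, the "canonical type", is reached. Completion is what guarantees the rule set is confluent, i.e. that the normal form doesn't depend on the order in which rules are applied.

```swift
import Foundation  // for String.range(of:)

// Repeatedly apply string rewrite rules until no rule matches.
// With a confluent, terminating rule set, the result is a unique normal form.
func normalize(_ term: String, rules: [(lhs: String, rhs: String)]) -> String {
    var t = term
    var changed = true
    while changed {
        changed = false
        for rule in rules {
            if let range = t.range(of: rule.lhs) {
                t.replaceSubrange(range, with: rule.rhs)
                changed = true
            }
        }
    }
    return t
}
```

For example, the single rule "ba" -> "ab" sorts a string of a's and b's: `normalize("bba", rules: [("ba", "ab")])` returns "abb".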

The problem of coming up with a general algorithm to find a minimal set of rules that generate a rewrite system, and of picking a "canonical" minimal set among many possibilities, was surprisingly difficult to solve in a general way (where "general" means fixing the cases that the GenericSignatureBuilder failed to find a correct answer for, while producing identical results on formerly-valid code). I came up with an algorithm based on ideas from three papers.


I started writing a paper about the Requirement Machine. I'm not going to post a PDF yet because it is still a work in progress. In particular, the minimization algorithm is not documented, and some parts are out of date since it represents the state of the design from June last year. If you're curious, install a TeX distribution and run "make" in the docs/RequirementMachine directory of the source tree.


Slava, I've been idly following your work on the Requirement Machine since your undecidability post a year and a half (!) ago, and just want to commend you on such a monumental effort! Really exciting to see this finally enabled by default, and thrilled to see that we're likely to get some performance wins in addition to a more sound foundation to Swift generics. Congratulations! :slightly_smiling_face:



A hopefully quick question. In the original post about the type system being undecidable, you wrote that:

What we need to do is come up with an appropriate restriction to the combination of SE-0142 and SE-0157. If a protocol obeys the restriction, the algorithm should always terminate with the correct result. If a protocol does not obey the restriction, we want to be able to produce a diagnostic instead of crashing or miscompiling.

What restrictions came out of that work?


Glad you asked -- I probably should have talked about this in my post.

So to begin with, every generic signature is, in mathematical terms, finitely presented. This means the original number of generic requirements written by the user is necessarily finite.

Starting from a finite set of requirements, we build a confluent rewrite system, that is, one where a canonical type can be computed in a finite number of steps by applying rules in any order.

The process of building a confluent rewrite system from a finite set of initial requirements is not guaranteed to terminate, so it runs up to a specified number of iterations, failing with an error if the iteration limit is exceeded. If the process succeeds, let's call the original set of requirements convergent. So we have this set inclusion to begin with:

  • Convergent generic signatures ⊂ Finitely-presented generic signatures

So let's try the undecidable example from my original post:

protocol Undecidable
    where A.C == C.A,
          A.D == D.A,
          B.C == C.B,
          B.D == D.B,
          C.E == E.C.A,
          D.E == E.D.B,
          C.C.A == C.C.A.E {
  associatedtype A : Undecidable
  associatedtype B : Undecidable
  associatedtype C : Undecidable
  associatedtype D : Undecidable
  associatedtype E : Undecidable
}

This is rejected with a somewhat cryptic error message:

undecidable.swift:1:10: error: cannot build rewrite system for protocol; rule length limit exceeded
protocol Undecidable
undecidable.swift:1:10: note: failed rewrite rule is [Undecidable:A].[Undecidable:B].[Undecidable:D].[Undecidable:C].[Undecidable:C].[Undecidable:C].[Undecidable:E].[Undecidable:B].[Undecidable:A].[Undecidable:A].[Undecidable:E].[Undecidable:C].[Undecidable:E] => [Undecidable:A].[Undecidable:B].[Undecidable:D].[Undecidable:C].[Undecidable:C].[Undecidable:C].[Undecidable:E].[Undecidable:B].[Undecidable:A].[Undecidable:A].[Undecidable:E].[Undecidable:C]
protocol Undecidable

So we know there are undecidable generic signatures that we simply cannot reason about, and we're limited to decidable generic signatures. This gives us another set inclusion in the middle:

  • Convergent generic signatures ⊂ Decidable generic signatures ⊂ Finitely-presented generic signatures

However, the undecidable cases are arguably not interesting, since they're extremely hard to construct and would never arise in practice.

The question is, are there decidable generic signatures that are not convergent? The answer is unfortunately "yes". Indeed, there is no single algorithm that can solve the word problem (in our lingo, compute canonical types) for every decidable monoid (generic signature). Everything we do is necessarily an approximation.

I'll show some examples of those shortly, but let's try to refine the set inclusions from the other end first.

First of all, let's say a generic signature is finite if there are only finitely many unique type parameters. (This is a stricter condition than there only being finitely many requirements!) A finite generic signature is always convergent:

  • Finite generic signatures ⊂ Convergent generic signatures ⊂ Decidable generic signatures ⊂ Finitely-presented generic signatures

Any generic signature that does not involve a protocol with a recursive conformance is always finite.

A signature with a recursive conformance is finite if the recursive conformance is "tied off" with a same-type requirement. For example,

protocol Z5 {
  associatedtype T : Z5 where T.T.T.T.T == Self
}

The above protocol only has 5 distinct type parameters -- Self, Self.T, Self.T.T, Self.T.T.T, Self.T.T.T.T -- so any signature with a conformance to this protocol (e.g., <T where T : Z5>) is finite.

(Incidentally, the GenericSignatureBuilder had trouble with this protocol, showing that its view of the world was incorrect even for finite generic signatures. For reasons I never understood and no longer have any reason to investigate further, the "Z3" example works, but "Z4" and "Z5" do not.)
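
To see the finiteness concretely, here is a sketch of a 5-cycle of concrete types conforming to Z5 (the types C1 through C5 are my own illustration):

```swift
protocol Z5 {
  associatedtype T : Z5 where T.T.T.T.T == Self
}

// A cycle whose length divides 5 satisfies T.T.T.T.T == Self;
// a self-loop with T == Self would be the degenerate case.
struct C1: Z5 { typealias T = C2 }
struct C2: Z5 { typealias T = C3 }
struct C3: Z5 { typealias T = C4 }
struct C4: Z5 { typealias T = C5 }
struct C5: Z5 { typealias T = C1 }
```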

Generalizing further, a recursive conformance that is completely unconstrained also gives rise to a convergent rewrite system, despite there being infinitely many type parameters now. The SwiftUI View protocol is one example:

protocol View {
  associatedtype Body : View
}

If I have a function generic over Views with the signature <T where T : View>, then I have infinitely many type parameters: T, T.Body, T.Body.Body, ... etc. However a confluent rewrite system can still be constructed.

Another example is the Collection protocol's Indices type. There are some requirements placed on it, for example Indices.Element == Index, but the recursive expansion of Indices itself goes on forever, so if you have <T where T : Collection>, then T.Indices, T.Indices.Indices and so on are all distinct types. (Arguably, this was an oversight in the design; I highly doubt any concrete conforming type actually has unique instances for each of those nested types in practice). While there are infinitely many unique type parameters here, there is a confluent rewrite system with finitely-many rewrite rules.
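
A quick check against the standard library's actual typealiases shows the recursion collapsing immediately for concrete types: Array's Indices is Range<Int>, and Range<Int> is its own Indices.

```swift
// The infinite tower T.Indices, T.Indices.Indices, ... exists only at
// the level of type parameters; Array ties it off at the first step.
assert([Int].Indices.self == Range<Int>.self)
assert(Range<Int>.Indices.self == Range<Int>.self)
```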

So this means that in order to break the confluent rewrite system formalism, you need a generic signature that simultaneously:

  • involves a protocol with at least two recursive associated types, and same-type requirements between them
  • despite the same-type requirements, still has infinitely many unique type parameters

(Note that these conditions are necessary, but not sufficient. There are plenty of examples that satisfy both but still work.)

The simplest example I've seen is the following:

protocol XYX where X.Y == X.Y.X {
  associatedtype X : XYX
  associatedtype Y : XYX
}

With this protocol, completion will construct an infinite sequence of rules:

Self.X.Y.X => Self.X.Y
Self.X.Y.Y.X => Self.X.Y.Y
Self.X.Y.Y.Y.X => Self.X.Y.Y.Y

If you try to compile this protocol in Swift 5.5, it appears to work! So how much expressivity did we lose when the Requirement Machine was introduced? Well, in reality, not much, because if you play around with it a bit, you'll see the GenericSignatureBuilder doesn't actually reason about it correctly, failing to discover the longer equivalences:

protocol XYX where X.Y == X.Y.X {
  associatedtype X : XYX
  associatedtype Y : XYX
}

func sameType<T>(_: T.Type, _: T.Type) {}

func foo<T : XYX>(_: T) {
  // The GSB allows this
  sameType(T.X.Y.Y.Y.X.self, T.X.Y.Y.Y.self)
  // The GSB does not allow this, even though it should work
  sameType(T.X.Y.Y.Y.Y.X.self, T.X.Y.Y.Y.Y.self)
}

While this protocol looks simple, I don't believe it really is: you have two recursive conformances on the associated types X and Y, and a non-trivial same-type requirement between nested types of length 2 and 3. If someone can think of a practical use-case for this, involving a conforming concrete type, I'd be curious to see it.

However, this protocol certainly does have a decidable word problem (algorithm for finding a canonical type). You look for occurrences of "X" that appear after "X.Y" followed by zero or more "Y"'s, and delete the final "X".
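
That decision procedure can be sketched over plain strings (my own toy encoding of type parameters as sequences of "X"/"Y" segments, not the compiler's term representation):

```swift
// Delete any "X" that is preceded by an "X", a "Y", and then zero or more
// further "Y"s -- i.e. apply the rule family X.Y.Y...Y.X => X.Y.Y...Y
// until no occurrence remains.
func canonicalXYX(_ s: String) -> String {
    var t = Array(s)
    while true {
        var deleted = false
        for i in t.indices where t[i] == "X" {
            var j = i - 1
            var ys = 0
            while j >= 0, t[j] == "Y" { j -= 1; ys += 1 }
            if ys >= 1, j >= 0, t[j] == "X" {
                t.remove(at: i)   // delete the final "X" of the pattern
                deleted = true
                break
            }
        }
        if !deleted { return String(t) }
    }
}
```

For example, `canonicalXYX("XYX")` gives "XY", matching the first rule in the infinite sequence above.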

Another interesting thing is that this protocol shows that the choice of the initial requirements can affect whether completion succeeds or not. Let's try adding a third associated type W that is equivalent to X.Y:

protocol XYX where X.Y == X.Y.X, X.Y == W {
  associatedtype W : XYX
  associatedtype X : XYX
  associatedtype Y : XYX
}

Incidentally, this also shows that the word problem here is decidable; by fiddling with the initial set of requirements, we've constructed an isomorphic signature that has a convergent rewrite system.

Amusingly, if you run this new version with -requirement-machine-protocol-signatures=verify, it hits the verify assertion because the GSB produces a ridiculous "minimal" signature:

RequirementMachine protocol signature minimization is broken:
Protocol: XYX
RequirementMachine says:      <Self where Self.[XYX]W == Self.[XYX]W.[XYX]X, Self.[XYX]X : XYX, Self.[XYX]Y : XYX, Self.[XYX]W.[XYX]X == Self.[XYX]X.[XYX]Y>
GenericSignatureBuilder says: <Self where Self.[XYX]W == Self.[XYX]W.[XYX]X, Self.[XYX]X : XYX, Self.[XYX]Y : XYX, Self.[XYX]W.[XYX]W == Self.[XYX]W.[XYX]W.[XYX]X, Self.[XYX]W.[XYX]X == Self.[XYX]X.[XYX]Y, Self.[XYX]X.[XYX]W == Self.[XYX]X.[XYX]W.[XYX]X, Self.[XYX]Y.[XYX]W == Self.[XYX]Y.[XYX]W.[XYX]X, Self.[XYX]W.[XYX]W.[XYX]W == Self.[XYX]W.[XYX]W.[XYX]Y.[XYX]X, Self.[XYX]W.[XYX]W.[XYX]X == Self.[XYX]W.[XYX]Y.[XYX]X, Self.[XYX]W.[XYX]Y.[XYX]X == Self.[XYX]W.[XYX]Y.[XYX]X.[XYX]X, Self.[XYX]X.[XYX]W.[XYX]W == Self.[XYX]X.[XYX]W.[XYX]W.[XYX]X, Self.[XYX]Y.[XYX]W.[XYX]W == Self.[XYX]Y.[XYX]W.[XYX]W.[XYX]X, Self.[XYX]Y.[XYX]X.[XYX]W == Self.[XYX]Y.[XYX]X.[XYX]W.[XYX]X, Self.[XYX]W.[XYX]W.[XYX]Y.[XYX]X == Self.[XYX]W.[XYX]Y.[XYX]W.[XYX]X, Self.[XYX]W.[XYX]Y.[XYX]X.[XYX]X == Self.[XYX]X.[XYX]Y.[XYX]W.[XYX]X, Self.[XYX]X.[XYX]W.[XYX]W.[XYX]X == Self.[XYX]X.[XYX]W.[XYX]Y.[XYX]X, Self.[XYX]X.[XYX]W.[XYX]Y.[XYX]W == Self.[XYX]X.[XYX]W.[XYX]Y.[XYX]W.[XYX]X, Self.[XYX]Y.[XYX]X.[XYX]W.[XYX]W == Self.[XYX]Y.[XYX]X.[XYX]W.[XYX]W.[XYX]X, Self.[XYX]X.[XYX]W.[XYX]Y.[XYX]W.[XYX]X == Self.[XYX]X.[XYX]W.[XYX]Y.[XYX]Y.[XYX]X, Self.[XYX]Y.[XYX]X.[XYX]W.[XYX]W.[XYX]X == Self.[XYX]Y.[XYX]X.[XYX]W.[XYX]Y.[XYX]X>
Please submit a bug report and include the project and the crash backtrace.

Of course in this case, it's not the Requirement Machine that's broken, it's the GSB. If you pass -requirement-machine-protocol-signatures=on, the GSB is bypassed, and the code compiles and both calls to sameType() in the body of foo() type check as expected.

Now, a more realistic example of a non-convergent rewrite system occurs when you have two infinite recursive conformances that are "smashed together". For example,

protocol P1 { // this is totally fine
  associatedtype A : P1
}

protocol P2 { // this is totally fine
  associatedtype A : P2
}

protocol P3 { // this will not work
  associatedtype X : P1 & P2
}

Unfortunately, this is actually the only example I've seen where the GSB handles things correctly but the Requirement Machine's formalism fails. What happens is that when building the rewrite system for <T where T : P3>, the completion procedure tries to introduce an infinite series of same-type requirements, which notionally would be written as follows if we had a syntax to disambiguate associated types with the same name from different protocols:

T.[P1]A == T.[P2]A
T.[P1]A.[P1]A == T.[P1]A.[P2]A
T.[P1]A.[P1]A.[P1]A == T.[P1]A.[P1]A.[P2]A

To resolve this, you either need to define a third protocol that inherits from both P1 and P2 and imposes the additional requirement on the inherited associated type:

protocol P1and2 : P1, P2 where A : P1and2 {}

protocol P3 { // this is fine
  associatedtype X : P1and2
}

Or, you tie off the infinite recursion in some spot, either P1, P2 or P3:

protocol P1 {
  associatedtype A : P1
}

protocol P2 {
  associatedtype A : P2
}

protocol P3 {
  associatedtype X : P1 & P2 where X.A.A.A == X // or whatever
}
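
For illustration, here is the tied-off version together with a conforming type (a sketch; the names Loop and Box are mine, and a self-loop satisfies X.A.A.A == X trivially):

```swift
protocol P1 {
  associatedtype A : P1
}
protocol P2 {
  associatedtype A : P2
}
protocol P3 {
  associatedtype X : P1 & P2 where X.A.A.A == X
}

// A self-loop: Loop.A == Loop at every level of the recursion.
struct Loop: P1, P2 { typealias A = Loop }
struct Box: P3 { typealias X = Loop }
```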

This is unfortunate but I haven't been able to come up with a good solution that preserves the properties we want. Swift 5.6 included a hack that made this work in the simplest examples (such as the P3 written as above) but the hack did not generalize to the minimization algorithm, and it also failed in a real-world setup that was slightly more complex than P3.

Here is the one real-world example where I've seen this come up, in RealmSwift -- [SR-15792] error: Segmentation fault: 11 on compile of project - Swift. (The bug report mentions a segfault, which was an unrelated problem in the GSB, still used in Swift 5.6 for signature minimization; once that crash was fixed, the failure to construct a confluent rewrite system was revealed as soon as the type checker went on to check the body of a generic function, performing a generic signature query against the protocol in question.)

In RealmSwift's case, the desired design was that the recursion reaches a fixed point; ironically, one of the maintainers mentioned they were unable to declare this requirement in previous versions of Swift because of problems with the GSB. It seems that now they can add this requirement as intended.

If this comes up in other projects I'll need to come up with a more general solution, but for now, I believe the Requirement Machine handles the overwhelming majority of realistic generic signatures that previously worked with the GSB, while also supporting many more complex examples that did not work before.

For writing new code, if you follow these rules of thumb you'll be unlikely to hit the limits of the formalism:

  • Prefer to define your protocols so that there are only finitely-many type parameters
  • Just one recursive associated type, whether it is ultimately finite or infinite, is also fine
  • If you need more than one infinite recursive conformance, avoid complex same-type requirements that relate deeply-nested types of those recursive associated types
  • Avoid "horizontal" composition that forces the Requirement Machine to introduce implied same-type requirements between associated types with the same name in unrelated protocols

I tried some of these examples in Rust. I don't claim to be knowledgeable in Rust, so take this with a grain of salt.

Here is my first attempt at writing an equivalent of the 'XYX' protocol:

trait XYX {
    type X : XYX<Y = <<Self::X as XYX>::Y as XYX>::X>;
    type Y : XYX;
}

Just like the Swift version, this fails. The Rust compiler reports a similar-sounding error message:

error[E0275]: overflow evaluating the requirement `<<Self as XYX>::X as XYX>::Y == _`
 --> src/
6 |     type X : XYX<Y = <<Self::X as XYX>::Y as XYX>::X>;
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The second version with the new associated type W compiles just as in Swift:

trait XYX2 {
    type W : XYX2<X = <Self::X as XYX2>::Y>;
    type X : XYX2<Y = Self::W>;
    type Y : XYX2;
}

Here is an example that works in Swift but doesn't in Rust. First the Swift version:

protocol P1 {
  associatedtype T : P1
}

protocol P2 : P1 where T : P2 {}

Here is the Rust equivalent:


trait P1 {
    type T : P1;
}

trait P2 : P1<T : P2> {}

Which fails with:

error[E0391]: cycle detected when computing the super predicates of `P2`
  --> src/
32 | trait P2 : P1<T : P2> {}
   | ^^^^^^^^^^^^^^^^^^^^^
note: ...which requires computing the super traits of `P2`...
  --> src/
32 | trait P2 : P1<T : P2> {}
   |                   ^^
   = note: ...which again requires computing the super predicates of `P2`, completing the cycle

As for the "horizontal composition" example, I don't know how to express it in Rust; the traits system does not introduce implicit equivalences between associated types with the same name in unrelated traits (which is arguably better than the situation in Swift, for various reasons):

protocol P1 {
  associatedtype A : P1
}

protocol P2 {
  associatedtype A : P2
}

protocol P3 {
  associatedtype X : P1 & P2
}

Nice! Thanks for the thorough reply.

How hard would it be for the compiler to do the above transform whenever a protocol composition is encountered? Basically, type check a protocol composition as if it were a nominal protocol (with an implied name, perhaps with the protocols sorted alphabetically to canonicalize it), i.e., automatically applying the fix from above:

protocol P1and2 : P1, P2 where A : P1and2 {}

protocol P3 { // this is fine
  associatedtype X : P1and2
}

I guess that’s a very simplified description of my “merged associated types” idea that I gave up on: RequirementMachine: Remove merged associated types by slavapestov · Pull Request #41256 · apple/swift · GitHub. It’s described in my paper in the “merged associated types” section.


Gave up on it? That's too bad. Is the idea fundamentally problematic? Or is backwards compatibility too hard?

I don't want to rule out somehow supporting this particular pattern some day, I just think my particular implementation of this idea was flawed. It didn't work in more complex examples, increased the size of the rewrite system by up to 3x even when it wasn't needed, broke some invariants I was hoping to preserve that will allow more optimizations in the future, and I couldn't reconcile it with the minimization algorithm.

Preserving compatibility is always hard when replacing a key component of the compiler like this, especially since generics didn't really have anything like a formal specification and the test coverage was uneven. I spent a lot of time writing new tests and trying to reverse-engineer and understand existing behaviors. I think I've mostly succeeded in this regard, and now that we have a better-defined core we can expand what is supported over time.

For this particular problem, a few of us on the team feel that a better direction for future evolution of the language is to try to eliminate the implied same-type requirements between unrelated associated types of the same name altogether. I know @Joe_Groff is not a fan of this because it can cause coherence problems if unrelated protocols evolve over time.


My intuition also favors dropping the implied same-type requirement but only for consistency with how nominal protocol inheritance works. I.e. I'd prefer a world where A & B is semantically the same as declaring a protocol that inherits from A and B. (Didn't it use to work this way a long time ago?)

I'd like to learn more about the coherence problem you allude to, if you or Joe have the time. Thanks!


The coherence problems come from changes that can lead to catastrophic failure (the program stops compiling), or to programs that subtly change their semantics, maybe without any compiler warning. All it takes is adding or renaming an associated type in a protocol so that its name matches an associated type in another protocol.

I can give one example (but I'm sure there are many other naughty scenarios). Say you're starting from two protocols (even maybe defined in different modules):

protocol P1 { associatedtype A }
protocol P2 { associatedtype B }

Now the user defines a type that conforms to both protocols (because users are wild):

struct T: P1 & P2 { 
  typealias A = String
  typealias B = Int
}

Now, for some reason, the protocol P1 eventually gets a second associated type that happens to be named B:

 protocol P1 {
   associatedtype A
+  associatedtype B
 }

Now the T type has to share its B type for both P1 and P2 protocols. Sometimes it is OK, but sometimes it is not OK at all. If P1 and P2 have conflicting requirements on B, the program stops compiling (I guess). In some cases, the program keeps on compiling, but there is no guarantee that the semantics and invariants of the new P1 protocol are honored. Considering the author of T is maybe not even aware... this is concerning.
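
A hedged sketch of the conflict (the FixedWidthInteger bound on the new P1.B is my invention, standing in for "conflicting requirements on B"): once P1 gains B, a single typealias in T must satisfy both protocols' notion of B at once.

```swift
protocol P1 {
  associatedtype A
  associatedtype B: FixedWidthInteger  // hypothetical constraint on the new B
}
protocol P2 { associatedtype B }

struct T: P1, P2 {
  typealias A = String
  typealias B = Int    // one typealias now witnesses both P1.B and P2.B;
                       // `typealias B = String` would stop compiling
}
```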


Thanks for the explanation, this is precisely the problem here.

An even more subtle issue is that adding a new associated type with a default witness is a binary compatible change. Normally this is fine, but consider what happens if the conforming type already defined a typealias with the same name to witness some other conformance.

An example:

// in ABI-stable library
protocol P1 {
  associatedtype A
}

// in client
protocol P2 {
  associatedtype B
}

struct S : P1, P2 {
  typealias A = Int
  typealias B = String
}

Then the library author adds B to P1 without recompiling the client:

protocol P1 {
  associatedtype A
  associatedtype B = Float
}

Now the client's S conforms to P1 with P1.B == Float and conforms to P2 with P2.B == String, but as soon as the client is recompiled, it will now conform to P1 with P1.B == String.


Wow. Thanks. What a mess. It's not too late to improve the situation though, right? We could deprecate using typealias for satisfying associated type requirements and use some new syntax that is explicit enough to avoid most if not all of the coherence problems, no? Hand waving:

struct S : P1, P2 {
  requiredtype(P1) A = Int
  requiredtype(P2) B = String
}

And should P1 add B as the example from earlier does, then dependent code would cease to compile until the user makes it clear that they want S.B to satisfy both P1 and P2:

struct S : P1, P2 {
  requiredtype(P1) A = Int
  requiredtype(P1, P2) B = String
}

Yeah, or even without introducing new syntax, prefer a typealias from the same extension when looking for a witness to an associated type requirement (this would also require name lookup to disambiguate same-named typealiases better).

I'm not convinced this is a major problem, since it hasn't really bitten anyone in 8 years of Swift. But it's worth keeping in mind if we, e.g., re-design associated type inference (we've talked about doing something similar to limit the scope of potential witnesses for inference to the extension where the conformance was defined, for example).


Not sure if this is related, but I’ve had problems with type aliases from different modules. A simplified example is this:

// Module A
protocol A {
  associatedtype V
  var value: V { get }
}

extension Optional: A {
  var value: Int { 1 }
}

// Module B
import ModuleA

protocol B {
  associatedtype V
  var value: V { get }
}

extension Optional: B {
  var value: String { "2" }
}

I’m on my phone ATM, so I haven’t tested this exact case, but I’ve faced such problems when working with frameworks that share API. They often cause nasty, imprecise errors or crashes. In the code I’ve faced problems with, witnesses for ‘value’ used opaque types, which may have made the compiler more buggy.
