Formalizing Swift generics as a term rewriting system

Previously I wrote about how the full generality of the Swift generics system is undecidable. The basic idea is that a "finitely-presented monoid" can be written as a Swift protocol, where the "word problem" on the monoid reduces to proving an equivalence between two types. Since the word problem is known to be undecidable, this shows that Swift generics (as currently defined) are undecidable.

In that post, we started with a finitely-presented monoid, and wrote it down as a protocol containing a series of associated types, which all conform to the protocol itself. The equivalence relation on a finitely-presented monoid is an example of a term rewriting system. In this post, I'll go in the other direction -- starting with an arbitrary set of protocol definitions, we're going to build a term rewriting system which can answer queries about the types that can appear in the protocol's signature.

This offers a new implementation strategy for the generics system, which will improve correctness and allow us to formally define the actual subset of supported generic signatures.

Quick overview of the generic signature builder

The compiler's GenericSignatureBuilder can be roughly understood as performing the following tasks:

  • answering queries about the types in a generic signature. For example, given the signature <T where T : Sequence>, we can ask what protocols T conforms to. This answer can be derived directly from the signature itself; it is just Sequence. On the other hand, asking what protocols the nested type T.Iterator conforms to requires looking inside the body of the Sequence protocol, where Iterator is defined to conform to IteratorProtocol.

    The full set of supported queries is the following:

    1. Checking if a type conforms to a protocol.
    2. Listing all protocols a type conforms to.
    3. Checking if two types are equivalent under the same-type requirements.
    4. Rewriting a type into a minimal canonical form using the same-type requirements.
    5. Obtaining a type's superclass constraint, if any.
    6. Obtaining a type's concrete type constraint, if any.
  • computing a minimal, canonical signature from a set of user-written requirements. For example, given a generic signature with the following where clause:

    <T where T : Sequence, T.Iterator : IteratorProtocol>

    The requirement T.Iterator : IteratorProtocol is redundant, because it follows from T : Sequence; the GenericSignatureBuilder will diagnose a warning and remove the requirement.

  • computing conformance access paths. This can be seen as a generalization of the "conforms to" query. Instead of producing a true/false result to indicate if the type conforms to the protocol or not, this produces a proof in the affirmative case.

    The proof begins with some explicit conformance requirement written by the user in the generic signature, and traverses a series of associated types and protocols to prove the conformance. For example, given the signature <T where T : Sequence>, the type T.Iterator, and the protocol IteratorProtocol, we can produce the following conformance access path:

    (T : Sequence)(T.Iterator : IteratorProtocol)
    

In this post I'm only going to look at the first of these three -- answering simple queries about an existing generic signature.

Subsetting the problem

I'm also going to only consider a subset of the language, by introducing three restrictions.

Eliminating protocol refinement

Imagine you have the following two protocol definitions, with the second one refining the first, and a generic function using the second protocol:

protocol Drawable {
  func draw()
}
	
protocol Shape : Drawable {
  var area: Int { get }
}
	
func addShape<T>(_: T) where T : Shape {}

Without loss of generality, we can refactor this code by removing the inheritance clause from the Shape protocol, and for each conformance requirement naming Shape, also adding another conformance requirement to Drawable:

protocol Drawable {
  func draw()
}
	
protocol Shape {
  var area: Int { get }
}
	
func addShape<T>(_: T) where T : Shape, T : Drawable {}

The only small complication is that the where clause of the derived protocol may reference associated types from the protocol it refines. For example,

protocol Sequence {
  associatedtype Element
}
	
protocol Collection : Sequence {
  associatedtype SubSequence
    where SubSequence : Collection,
          SubSequence.Element == Element
}

The correct desugaring in this case is that any inherited associated type is re-stated in the derived protocol:

protocol Sequence {
  associatedtype Element
}
	
protocol Collection {
  associatedtype Element
  associatedtype SubSequence
    where SubSequence : Collection,
          SubSequence : Sequence,
          SubSequence.Element == Element
}

Eliminating multiple generic parameters

Our term rewriting system is simpler to reason about if we only concern ourselves with protocols and their generic signatures.

In Swift, a generic declaration may have one or more generic parameters, with zero or more requirements in the where clause. A protocol P, on the other hand, always has the generic signature <Self where Self : P>, with a single generic parameter and a single conformance requirement.

It turns out that limiting ourselves to this case does not take away any expressivity from the language either. Consider a generic function with a non-trivial generic signature:

func compare<A, B>(_: A, _: B)
  where A : Sequence,
        B : Sequence,
        A.Element == B.Element {}

We can define a new protocol where the generic parameters A and B become associated types, and the requirements in the function's where clause are re-stated as protocol requirements on the associated types:

protocol CompareProtocol {
  associatedtype A
  associatedtype B
    where Self.A : Sequence,
          Self.B : Sequence,
          Self.A.Element == Self.B.Element
}

func compare<Self>(_: Self.A, _: Self.B)
  where Self : CompareProtocol {}

Our function now has a generic signature of the desired form, with a single generic parameter and a single conformance requirement: <Self where Self : CompareProtocol>.

Eliminating superclass and concrete type constraints (for now)

Unlike the first two transformations, which desugar the language but do not take away expressiveness, this last one is a real restriction. It's possible to formalize the term rewriting system while keeping superclass and concrete type constraints, but it's more complicated; I'll write it up later.

Tokens and terms

First, some definitions. A term is an (ordered) list of one or more tokens, with some restrictions. A token is one of the following:

  • a type token, corresponding to an associated type name.
  • a conformance token, naming a protocol.

In Swift, if a type parameter conforms to two protocols which both define an associated type with the same name, the two associated types induce the same nested type. For this reason, the associated type name in a type token is not fully qualified; two protocols with an associated type A are both referenced by a single type token "A".

The two syntactic restrictions on terms are the following:

  • The first token must always be a conformance token.
  • A conformance token cannot be followed by another conformance token.

I'm going to use the following shorthand notation for tokens:

  • Type tokens are written with a "." prefix, like .Iterator or .SubSequence.
  • Conformance tokens are written surrounded by "[." and "]", like [.Sequence] or [.Collection].

A term is written by concatenating its tokens. So the following are all syntactically valid terms:

[.Sequence].Iterator.Element
[.Sequence].Iterator[.IteratorProtocol].Element
[.Collection].SubSequence[.Collection]

The following terms are not syntactically valid, for violating the first and second restrictions, respectively:

.Iterator[.IteratorProtocol].Element
[.Collection].SubSequence[.Collection][.Collection].Element
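
To make these definitions concrete, here is a minimal sketch in Swift of how tokens and terms might be modeled, with the two syntactic restrictions checked in code. The names Token, Term and isValid are mine, not the compiler's:

enum Token: Hashable {
  case type(String)         // a type token, like .Iterator
  case conformance(String)  // a conformance token, like [.Sequence]
}

struct Term: Hashable {
  var tokens: [Token]

  // The two syntactic restrictions: the first token must be a conformance
  // token, and a conformance token cannot be followed by another one.
  var isValid: Bool {
    guard let first = tokens.first, case .conformance = first else { return false }
    for (prev, next) in zip(tokens, tokens.dropFirst()) {
      if case .conformance = prev, case .conformance = next { return false }
    }
    return true
  }
}

// [.Sequence].Iterator.Element is valid; .Iterator[.IteratorProtocol].Element is not.
let valid = Term(tokens: [.conformance("Sequence"), .type("Iterator"), .type("Element")])
let invalid = Term(tokens: [.type("Iterator"), .conformance("IteratorProtocol"), .type("Element")])
print(valid.isValid, invalid.isValid)  // true false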

Mapping types to terms

Consider a Swift generic signature of the form <Self where Self : Collection>, and the type Self.SubSequence.Element. We can map this type to a term as follows:

  • Replace the Self generic parameter with a conformance token for the base protocol, in our case, Collection.
  • Replace each subsequent identifier with a type token.

In our example, we obtain the term [.Collection].SubSequence.Element, consisting of a conformance token followed by two type tokens.
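
As a code sketch of this mapping (reusing the Term and Token types from the earlier sketch; the function name is hypothetical), the input is the protocol constraining Self together with the member names of the type:

// Self becomes a conformance token for the base protocol; each
// subsequent identifier becomes a type token.
func term(forSelfConformingTo proto: String, memberPath: [String]) -> Term {
  return Term(tokens: [.conformance(proto)] + memberPath.map { Token.type($0) })
}

// Self.SubSequence.Element in <Self where Self : Collection> becomes
// the term [.Collection].SubSequence.Element.
let subSequenceElement = term(forSelfConformingTo: "Collection",
                              memberPath: ["SubSequence", "Element"])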

Mapping terms to types

In order to compute canonical minimal types, terms have to be mapped back to types.

A term consisting of a single conformance token maps to the Self generic parameter type.

In Swift's generics implementation, we make a distinction between unresolved and resolved member types. A type like Self.Iterator.Element can either be thought of as a nested path of plain identifiers (the unresolved case), or a nested path of references to specific associated type declarations (the resolved case).

A term consisting of a single conformance token followed by one or more type tokens (e.g., [.P].X.Y) maps to an unresolved type (e.g., Self.X.Y).

A term consisting of a single conformance token followed by alternating type and conformance tokens maps to a resolved type, where each conformance/type token pair uniquely identifies an associated type declaration in a protocol. An example of a term representing a resolved type is [.Sequence].Iterator[.IteratorProtocol].Element.

With the exception of Self, a term where the final token is a conformance token does not correspond to a type.
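
Here is the reverse mapping as a sketch, again on top of the earlier Term type; the function name and the printed string are just for illustration:

// Maps a term back to a type, or returns nil if the term does not
// correspond to a type (it ends in a conformance token but isn't Self).
func typeDescription(of term: Term) -> String? {
  guard term.isValid, let last = term.tokens.last else { return nil }
  if case .conformance = last {
    return term.tokens.count == 1 ? "Self" : nil
  }
  // Conformance tokens mark member references as resolved; they don't
  // show up in the printed type.
  var result = "Self"
  for case .type(let name) in term.tokens {
    result += ".\(name)"
  }
  return result
}

// Both [.Sequence].Iterator[.IteratorProtocol].Element (resolved) and
// [.Sequence].Iterator.Element (unresolved) print as Self.Iterator.Element.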

Equivalences on terms

If we have two terms a and b, an equivalence is a pair a <--> b. We can use a set of equivalences to define an equivalence relation on terms. Each equivalence is a bidirectional rewrite rule allowing us to replace a subterm equal to a with b, or vice versa.

For example, consider the following equivalence:

[.Sequence].Iterator.Element <--> [.Sequence].Element

The following term contains a subterm [.Sequence].Element:

[.Collection].SubSequence[.Sequence].Element

Therefore, we can rewrite it to obtain the following, even longer term:

[.Collection].SubSequence[.Sequence].Iterator.Element

Under the equivalence relation defined by this single equivalence, the two terms [.Collection].SubSequence[.Sequence].Element and [.Collection].SubSequence[.Sequence].Iterator.Element are equivalent.
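
A single rewrite step can be sketched as a search for one side of an equivalence occurring as a contiguous subterm, splicing in the other side; applying it with the arguments swapped gives the opposite direction. The function name is mine:

// Returns the rewritten term, or nil if `from` does not occur in `term`.
func replacingFirstSubterm(of term: Term, _ from: Term, with to: Term) -> Term? {
  let tokens = term.tokens, pattern = from.tokens
  guard pattern.count <= tokens.count else { return nil }
  for start in 0...(tokens.count - pattern.count)
      where Array(tokens[start ..< start + pattern.count]) == pattern {
    return Term(tokens: Array(tokens[..<start]) + to.tokens
                      + Array(tokens[(start + pattern.count)...]))
  }
  return nil
}

// Replacing [.Sequence].Element with [.Sequence].Iterator.Element inside
// [.Collection].SubSequence[.Sequence].Element reproduces the example above.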

Mapping protocol requirements to equivalences

In our restricted model of the language, there are only two kinds of protocol requirements:

  • Conformance requirements, where some associated type of Self is declared to conform to a protocol.

  • Same-type requirements between two associated types, or Self and an associated type.

These requirements map to equivalences as follows:

  • A conformance requirement Self.X : Q defined in a protocol P becomes the equivalence:

    [.P].X[.Q] <--> [.P].X
    

    That is, roughly speaking, if a term contains a subterm which is known to conform to a protocol, we can introduce or eliminate the conformance token at the end of the subterm (except we cannot introduce it more than once, by the syntactic validity rule on terms).

  • A same-type requirement Self.X.Y == Self.A defined in a protocol P becomes the equivalence:

    [.P].X.Y <--> [.P].A
    

    Note that a same-type requirement involving Self ends up as an equivalence where one of the terms consists of a single conformance token only. For example, the requirement Self.X.Y == Self becomes the following equivalence:

    [.P].X.Y <--> [.P]
    

    In our formalism, we cannot use this to impose new conformance requirements on Self, because of the syntactic restriction that a term cannot contain two consecutive conformance tokens. Allowing this would be a "back door" to introducing protocol refinement, which we agreed would be desugared away.

    For example, suppose we could write down these two equivalences:

    [.P].X.Y <--> [.P]
    [.P].X.Y[.Q] <--> [.P].X.Y
    

    From this we would admit the following series of equivalences:

    [.P] -->
      [.P].X.Y -->
        [.P].X.Y[.Q] -->
          [.P][.Q]
    

    However, the final term [.P][.Q] is not valid.

    (What if it were valid? Everything would almost work, and it seems that this could give us a formal model of a generics system with protocol refinement, eliminating the desugaring step. It does complicate a few things later on in this document though, and I haven't thought through all the implications yet.)

An example

Consider the following definitions of the IteratorProtocol, Sequence and Collection protocols. We don't care about protocol requirements that are not associated types, so we can omit them here. Also, since we don't allow protocol refinement, Collection has to re-state some associated types from Sequence, and usages of Collection are rewritten to induce a conformance to Sequence as well:

protocol IteratorProtocol {
  associatedtype Element
}

protocol Sequence {
  associatedtype Iterator : IteratorProtocol
    where Iterator.Element == Element
}

protocol Collection {
  associatedtype Element
  
  associatedtype SubSequence
    where SubSequence : Sequence,
          SubSequence : Collection,
          SubSequence.Element == Element,
          SubSequence.SubSequence == SubSequence
  
  associatedtype Index
  
  associatedtype Indices
    where Indices : Sequence,
          Indices : Collection,
          Indices.Element == Index,
          Indices.Index == Index,
          SubSequence.Index == Index,
          Indices.SubSequence == Indices
}

Using our rules we can write down the equivalences induced by these protocol definitions:

[.Sequence].Iterator <--> [.Sequence].Iterator[.IteratorProtocol]
[.Sequence].Element <--> [.Sequence].Iterator.Element
[.Collection].SubSequence <--> [.Collection].SubSequence[.Sequence]
[.Collection].SubSequence <--> [.Collection].SubSequence[.Collection]
[.Collection].Element <--> [.Collection].SubSequence.Element
[.Collection].SubSequence <--> [.Collection].SubSequence.SubSequence
[.Collection].Indices <--> [.Collection].Indices[.Sequence]
[.Collection].Indices <--> [.Collection].Indices[.Collection]
[.Collection].Indices <--> [.Collection].Indices.SubSequence
[.Collection].Index <--> [.Collection].Indices.Element
[.Collection].Index <--> [.Collection].Indices.Index
[.Collection].Index <--> [.Collection].SubSequence.Index
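
As a sketch of how such a table could be generated mechanically (building on the term(forSelfConformingTo:memberPath:) helper from earlier; Requirement and Equivalence are names I made up), the two requirement kinds map directly to the two rules above:

struct Equivalence {
  var lhs: Term
  var rhs: Term
}

// The only two requirement kinds in our restricted model.
enum Requirement {
  case conformance(subject: [String], proto: String)  // Self.X : Q
  case sameType(subject: [String], object: [String])  // Self.X.Y == Self.A
}

func equivalence(for requirement: Requirement, inProtocol p: String) -> Equivalence {
  switch requirement {
  case .conformance(let subject, let proto):
    // Self.X : Q in P becomes [.P].X[.Q] <--> [.P].X
    var lhs = term(forSelfConformingTo: p, memberPath: subject)
    lhs.tokens.append(.conformance(proto))
    return Equivalence(lhs: lhs, rhs: term(forSelfConformingTo: p, memberPath: subject))
  case .sameType(let subject, let object):
    // Self.X.Y == Self.A in P becomes [.P].X.Y <--> [.P].A
    return Equivalence(lhs: term(forSelfConformingTo: p, memberPath: subject),
                       rhs: term(forSelfConformingTo: p, memberPath: object))
  }
}

// SubSequence.Element == Element in Collection yields
// [.Collection].SubSequence.Element <--> [.Collection].Element.
let eq = equivalence(for: .sameType(subject: ["SubSequence", "Element"],
                                    object: ["Element"]),
                     inProtocol: "Collection")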

Expressing generic signature queries on terms

With the ability to map protocol requirements to an equivalence relation on terms, we can express the GenericSignatureBuilder queries as equivalences on terms as well. This doesn't quite give us a strategy for computing them, but we're almost there.

Recall the full list of queries:

  1. Checking if a type conforms to a protocol.
  2. Listing all protocols a type conforms to.
  3. Checking if two types are equivalent under the same-type requirements.
  4. Rewriting a type into a minimal canonical form using the same-type requirements.
  5. Obtaining a type's superclass constraint, if any.
  6. Obtaining a type's concrete type constraint, if any.

Since we've subsetted out superclass and concrete type requirements, we only need to think about the first four in the list.

Checking if a type conforms to a protocol

A type T in the generic signature <Self where Self : P> conforms to some other protocol Q if and only if the term [.P].T is equivalent to the term [.P].T[.Q]. This follows directly from the mapping of a conformance requirement to a term equivalence.
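
As a sketch, the query reduces to a single term-equivalence check. This assumes the helpers from the earlier sketches plus some areEquivalent oracle for the equivalence relation, which the rest of this post is about computing:

// T conforms to Q iff [.P].T is equivalent to [.P].T[.Q].
func conforms(memberPath: [String], inSignatureOf p: String, to q: String,
              areEquivalent: (Term, Term) -> Bool) -> Bool {
  let t = term(forSelfConformingTo: p, memberPath: memberPath)
  var tq = t
  tq.tokens.append(.conformance(q))
  return areEquivalent(t, tq)
}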

Listing all protocols a type conforms to

Listing all the protocols a type conforms to is a bit funny. There's no direct way of doing it, but the user can only define finitely many protocols and conformance requirements in all. Starting from a single protocol definition, we can recursively walk the requirements of all referenced protocols to compute the transitive closure of all protocols that any type could conform to. Then, it is enough to check conformance against each candidate protocol to build the complete list.
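
A sketch of the closure computation; protocolsReferenced is a hypothetical lookup returning the protocols named in a given protocol's requirements:

// Standard worklist algorithm: visit each protocol once, enqueueing every
// protocol mentioned by its requirements.
func allCandidateProtocols(from base: String,
                           protocolsReferenced: (String) -> Set<String>) -> Set<String> {
  var visited: Set<String> = []
  var worklist = [base]
  while let proto = worklist.popLast() {
    guard visited.insert(proto).inserted else { continue }
    worklist.append(contentsOf: protocolsReferenced(proto))
  }
  return visited
}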

Checking if two types are equivalent

Two types are equivalent if their corresponding terms are equivalent under our equivalence relation.

Rewriting a type into a minimal canonical form

In the next section, we define a linear order < on terms. The minimal canonical form of a term t is the least term equivalent to t under this linear order.

Rewrite systems

So far, we've only talked about equivalence relations on terms, generated by a set of bidirectional equivalences. The problem is that this doesn't give us an evaluation strategy.

Suppose we want to prove that the two terms [.Collection].SubSequence.Iterator.Element and [.Collection].Element are equivalent. Recall the equivalences generated by our Collection protocol and related definitions:

[.Sequence].Iterator <--> [.Sequence].Iterator[.IteratorProtocol]      (0)
[.Sequence].Element <--> [.Sequence].Iterator.Element                  (1)
[.Collection].SubSequence <--> [.Collection].SubSequence[.Sequence]    (2)
[.Collection].SubSequence <--> [.Collection].SubSequence[.Collection]  (3)
[.Collection].Element <--> [.Collection].SubSequence.Element           (4)
[.Collection].SubSequence <--> [.Collection].SubSequence.SubSequence   (5)
[.Collection].Indices <--> [.Collection].Indices[.Sequence]            (6)
[.Collection].Indices <--> [.Collection].Indices[.Collection]          (7)
[.Collection].Indices <--> [.Collection].Indices.SubSequence           (8)
[.Collection].Index <--> [.Collection].Indices.Element                 (9)
[.Collection].Index <--> [.Collection].Indices.Index                   (10)
[.Collection].Index <--> [.Collection].SubSequence.Index               (11)

One possible sequence of rewrite steps proving this is the following:

[.Collection].SubSequence.Iterator.Element
             (^ apply 2, -->)
[.Collection].SubSequence[.Sequence].Iterator.Element
                                    (^ apply 1, <--)
[.Collection].SubSequence[.Sequence].Element
             (^ apply 2, <--)
[.Collection].SubSequence.Element
             (^ apply 4, <--)
[.Collection].Element

If I start applying equivalences at random to try to get from the first term to the second, I might get lucky and discover the above path. Or I might enter an infinite loop; for example, repeatedly applying rules (3) and (5) produces ever-longer terms:

[.Collection].SubSequence.SubSequence
[.Collection].SubSequence[.Collection]
[.Collection].SubSequence[.Collection].SubSequence.SubSequence
[.Collection].SubSequence[.Collection].SubSequence[.Collection].SubSequence
[.Collection].SubSequence[.Collection].SubSequence[.Collection].SubSequence.SubSequence
...

Reduction orders

Instead of applying equivalences in both directions, let's say that for a linear order < on terms, at each step we can only replace u with v if v < u. This prevents us from entering an infinite loop, because we can't make our terms "longer".

The first question is what reduction order to use here. A common reduction order is the lexshort order, defined for terms u and v as follows:

  • If u has fewer tokens than v, then u < v.
  • If u has more tokens than v, then v < u.
  • If u and v both have the same number of tokens, perform a lexicographic comparison on pairs of elements from u and v, from left to right.

Recall the earlier discussion about mapping terms back to types, and the distinction between resolved and unresolved types.

It turns out the lexshort order isn't quite what we want, because we need to ensure that resolved member types are ordered before unresolved member types. That is, we want [.P].X[.Q].Y < [.P].X.Y, even though the former term has 4 tokens, and the latter has only 3. So we use a slightly different order; we treat a conformance token followed by a type token as a single element for comparison purposes. For example, both [.P].X.Y and [.P].X[.Q].Y have two elements to compare.

Two elements a and b are ordered according to these rules:

  • If both a and b are conformance/type token pairs, we perform a lexicographic comparison on the type token name first, and if the two type token names are equal, we compare the names of the protocols.

  • If both a and b consist of a single token, we order conformance tokens before type tokens, or if both tokens have the same kind, perform a lexicographic comparison on their names.

  • If a is a conformance/type token pair, and b is a single token, then a < b if b is a type token; otherwise b < a.

An example:

[.P].X[.Q].Y < [.P].X.Y < [.P].X[.Q].Y[.R] < [.P].X.Y[.R] < [.P].X.Y.Z
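
Here is a sketch of this order in code, following the rules above; the OrderElement type and the segmentation helper are my own names for the "elements" just described:

enum OrderElement: Equatable {
  case pair(proto: String, type: String)  // a conformance/type token pair, like [.Q].Y
  case single(Token)                      // any other lone token
}

// Splits a term into elements: a conformance token immediately followed
// by a type token counts as a single element.
func elements(of term: Term) -> [OrderElement] {
  var result: [OrderElement] = []
  var i = 0
  while i < term.tokens.count {
    if case .conformance(let p) = term.tokens[i], i + 1 < term.tokens.count,
       case .type(let t) = term.tokens[i + 1] {
      result.append(.pair(proto: p, type: t))
      i += 2
    } else {
      result.append(.single(term.tokens[i]))
      i += 1
    }
  }
  return result
}

func isLess(_ a: OrderElement, _ b: OrderElement) -> Bool {
  switch (a, b) {
  case (.pair(let p1, let t1), .pair(let p2, let t2)):
    return (t1, p1) < (t2, p2)  // type name first, then protocol name
  case (.single(.conformance(let p)), .single(.conformance(let q))):
    return p < q
  case (.single(.type(let s)), .single(.type(let t))):
    return s < t
  case (.single(.conformance), .single(.type)):
    return true                 // conformance tokens order before type tokens
  case (.single(.type), .single(.conformance)):
    return false
  case (.pair, .single(let t)):
    if case .type = t { return true } else { return false }
  case (.single(let t), .pair):
    if case .type = t { return false } else { return true }
  }
}

// Fewer elements orders first; ties are broken element by element.
func isLess(_ u: Term, _ v: Term) -> Bool {
  let eu = elements(of: u), ev = elements(of: v)
  if eu.count != ev.count { return eu.count < ev.count }
  for (a, b) in zip(eu, ev) where a != b { return isLess(a, b) }
  return false
}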

Termination

Using this reduction order, we can now look at equivalences as unidirectional rewrite rules. The idea is that given two terms, we can try to determine if they're equivalent by attempting to rewrite each term until we reach a fixed point. If both terms rewrite to the same term, we know they are equivalent.

Note that unlike the formulation in terms of equivalences, the application of rewrite rules with a reduction order is guaranteed to terminate. This is because our reduction order has the property that for any given term t, there are only finitely many other terms u such that u < t. Each step either produces a term that is strictly smaller under the reduction order, or no more rules apply and we stop. Therefore the process must terminate after a finite number of rules have been applied.

However, there is a new problem. At each step of the way, we might have multiple possible rewrite rules that we can apply. It is possible that applying one rule produces a term that cannot be reduced further, while another rule allows us to make progress. This is a problem because it means that termination alone does not guarantee we've found the minimal term -- we may have just picked an unfortunate series of rules and gotten stuck.

To avoid getting "stuck", we need our rewrite system to satisfy another property in addition to termination: confluence.

Confluence

Suppose we start with a term t, and we reduce it to u using one sequence of rewrite steps, and reduce it to v using a different sequence of rewrite steps. We say that our rewrite system is confluent if there is always some other term w such that both u and v can be reduced to w. That is, we can get from t to w regardless of which "path" we take.

If a rewrite system is both terminating and confluent, then we can compute the canonical minimal form of a term by repeatedly applying rewriting rules until we reach a fixed point. Similarly, we can compare two terms for equivalence by reducing both to their canonical minimal forms, and checking the results for symbol equality.
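
Both operations can be sketched directly, assuming a terminating and confluent list of oriented rules (each rewriting lhs to the smaller rhs) and the replacingFirstSubterm helper from earlier:

struct Rule {
  var lhs: Term
  var rhs: Term  // rhs < lhs under the reduction order
}

// Applies rules until none match; termination and confluence guarantee
// the result is the canonical minimal form.
func canonicalForm(of term: Term, rules: [Rule]) -> Term {
  var current = term
  var changed = true
  while changed {
    changed = false
    for rule in rules {
      if let reduced = replacingFirstSubterm(of: current, rule.lhs, with: rule.rhs) {
        current = reduced
        changed = true
      }
    }
  }
  return current
}

// Two terms are equivalent iff they reduce to the same canonical form.
func areEquivalent(_ u: Term, _ v: Term, rules: [Rule]) -> Bool {
  return canonicalForm(of: u, rules: rules) == canonicalForm(of: v, rules: rules)
}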

The rewrite system obtained from a set of protocol requirements is not guaranteed to be confluent. However, algorithms exist to construct a confluent rewrite system from a non-confluent one by adding new rewrite rules to "repair" confluence violations; this is known as a completion procedure.

If a term t can be reduced to two distinct irreducible terms u and v, we call (u, v) a critical pair. The completion procedure adds a new rewrite rule for the critical pair: v --> u if u < v, or u --> v otherwise. Adding a new rule might in turn introduce more confluence violations, requiring the completion procedure to be iterated to a fixed point.

A completion procedure outputs a terminating, confluent rewrite system if it terminates; however, it is itself not guaranteed to terminate. Indeed, it is possible for the completion procedure to never reach a fixed point, repeatedly adding more and more rewrite rules to repair confluence violations, which introduce more violations, and so on.

In practice, one way to ensure the completion procedure terminates is to only execute a fixed maximum number of iterations. If the rewrite system specified by the protocol requirements is too complex, we can "give up".
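
To make the shape of such a procedure concrete, here is a heavily simplified sketch in the spirit of Knuth-Bendix completion, reusing the isLess and canonicalForm helpers from the earlier sketches. It only considers suffix/prefix overlaps between left-hand sides (a real implementation must also handle one left-hand side containing another), and the iteration cap stands in for the "give up" behavior:

// Returns a confluent rule set, or nil if completion does not converge
// within the iteration budget.
func complete(_ initial: [Rule], maxIterations: Int) -> [Rule]? {
  var rules = initial
  for _ in 0..<maxIterations {
    var newRules: [Rule] = []
    for r1 in rules {
      for r2 in rules {
        let a = r1.lhs.tokens, b = r2.lhs.tokens
        // An overlap is a nonempty proper suffix of r1.lhs that is also a
        // prefix of r2.lhs; the overlapped term then reduces in two ways.
        for k in 1..<min(a.count, b.count) where Array(a.suffix(k)) == Array(b.prefix(k)) {
          let u = canonicalForm(of: Term(tokens: r1.rhs.tokens + Array(b.dropFirst(k))),
                                rules: rules)
          let v = canonicalForm(of: Term(tokens: Array(a.dropLast(k)) + r2.rhs.tokens),
                                rules: rules)
          if u != v {
            // (u, v) is a critical pair: orient it into a new rule.
            newRules.append(isLess(u, v) ? Rule(lhs: v, rhs: u) : Rule(lhs: u, rhs: v))
          }
        }
      }
    }
    if newRules.isEmpty { return rules }  // no violations left: confluent
    rules += newRules
  }
  return nil  // "generic signature is too complex"
}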

Next steps

This gives us a roadmap for re-implementing the GenericSignatureBuilder on top of a sound theoretical foundation: we need to find a suitable completion procedure for our rewrite system and reduction order.

This will give us GenericSignatureBuilder query operations that are guaranteed to terminate and produce correct results (as long as the completion procedure succeeds). Of course, if the completion procedure does not terminate in the maximum number of steps, we have to diagnose a "generic signature is too complex" error. This is unfortunate, but it is preferable to the current situation, where the GenericSignatureBuilder can produce wrong results or crash.

At this point, the formal completion procedure in effect becomes part of the language specification, because the set of supported generic signatures becomes those where the completion procedure produces a confluent rewrite system.

But first, I also need to finish formalizing superclass and concrete-type requirements, as well as the other two major pieces of functionality in the GenericSignatureBuilder that I didn't talk about here -- minimizing signatures, and computing conformance access paths.

Thanks for the awesome write-up. I'm still digesting it, but can you say more about how often people might encounter "generic signature is too complex" and what kinds of scenarios a user might enter that could cause this error?

I don’t have a good answer to this yet with examples or anything, but I suspect that a generic signature that does not have a finite completion in this rewrite system would almost surely not work properly in the current implementation either.

Something I'm not quite sure I follow:

Given a finite number of rewrite rules and a terminating procedure to rewrite a term (since the term strictly decreases in size), why isn't the reduction complete?

But if a rule gets the reduction stuck, the procedure isn't finished. We simply try the next rule in the list until none apply, at which point the smallest term is our canonical one. Correct?

I believe this is a counterexample:

ab -> x
ba -> y
xa -> z

Consider the term aba. If we apply the first rule, we get xa. If we apply the second rule, we get ay. Now xa can be reduced to z by the third rule, but ay cannot be reduced further. This means that (ay, z) is a critical pair. If we introduce the rewrite rule ay -> z, then aba will reduce to z irrespective of whether we apply the first rule or the second rule initially.

Edit: I realized what you might be getting at is that if our rewriting system allows for backtracking, then whenever we get 'stuck' we can backtrack to a previous point where we had multiple applicable rules, and try the next one. This could probably work, but the algorithmic complexity is much worse than with the standard approach, because we might have to backtrack several times if an entire subtree of the search space leads to a dead end. The nice thing about confluence is that once a rewrite rule succeeds, we always know it is safe to continue.

Sorry for the poor phrasing; yes, this is what I meant. Given n rules and a term of length l, if each rule reduces the term by 1, there are l^n steps to scour the entire rewrite space, so the algorithmic complexity is pretty bad. But at least you will always terminate typechecking.

Between a maybe-terminating completion procedure and a very expensive but terminating typechecking, I don't know which is faster in practice. It could be that the first one is faster in most cases and the second one is necessary for more complex cases and can be hidden behind a compiler flag. Or it could be that the first one is actually slower because there is no fast-enough completion procedure.

The completion procedure only needs to run once for the entire rewrite system; once constructed, you can minimize terms efficiently.

I'm not that familiar with this kind of topic. As far as I can see, to build up this confluence graph, you still need to traverse the ruleset for each term, but for each input term you gain some reductions, allowing you to shorten the traversal of subsequent terms.

Sounds to me like building up some heuristics during term reduction, which ought to be more time-consuming, as additional lookups have to be conducted on top of the usual n available lookups in the hope of avoiding further branching.
Further, this approach increases the set of confluence pairs for each term, and the confluence graph seems to depend highly on the terms provided.

Maybe I'm just wrong, don't know, but I think it would be great to benchmark both ways to see the trade-off.

Yeah, it's definitely a problem that different ways of stating the same requirements may produce a rewrite system that fails to complete. I did get a toy implementation working and the generic signatures I threw at it seemed to work, except for one which failed in the current GSB as well for a totally unrelated reason. But once I have a "real" prototype implementation in C++ I'm curious to see how common this is in practice.

Can you explain further what you mean by building up heuristics during term reduction?

I'm loving your ongoing work, @Slava_Pestov, on finding a robust system for Swift generics. Sorry that I'm really just here for tourist-like observations.

You mentioned backtracking in the rewrite system in another comment. Reading this section, it looked to me like a textbook application of dynamic programming, which can make backtracking instant (although this would have a memory overhead based on the number of equivalences) and works best if you can set a max depth up front. However, it can manage bidirectional matching with backtracking and apply a "cost" to different configurations to choose the best result if simple reductions are ambiguous.

I’m kind of amazed that Swift doesn’t have a simple iterator for the protocols to which a type conforms. This sounds like a weirdly complicating factor.

2 Likes