Ergonomics: generic types conforming "in more than one way"

If we take the "multiple bodies with different where clauses" approach, it seems to me like we could bubble up the information in those where clauses to the conformance. We would then know statically that the conformance needs to take added conditional arguments in order to instantiate the witness table, and we could still pass down the Int: Equatable conformance (or lack thereof) at the point of instantiating the witness table instead of doing a global search.

Thanks for your post, Doug!

Would you be open to thinking differently about how this can work? I've always had in mind a system where these determinations can be made at compile time (while still allowing separate compilation of course). I've always thought viewing something that's semantically static through a dynamic lens has been limiting our implementation choices, and I think there are other possibilities.

Whether we can even begin that conversation depends on whether you agree that generics should/could be semantically static. If not, the options are indeed much more limited.

I don't think it's just about viewing the problem through a "dynamic lens". The important property I think we want to maintain in the generics system is modularity. That allows us to implement generics with semantically equivalent dynamic and static implementations, but that's not the only benefit; it also allows for separate compilation and analysis, avoids introducing fundamental compiler performance problems, and minimizes side effects from adding additional packages to a system.

Of course we want modularity, motherhood, :apple: :pie: and all other good things :wink:

I don't know why you think that's related to what I brought up. What I have in mind is modular and preserves all of the properties you listed except possibly the thing about semantically equivalent static and dynamic implementations—depending on what you mean by that—whose value I am inclined to question.

I think what I have in mind could even improve the modularity story, but it's hard to tell because you haven't been very precise about the properties you listed. It would be really helpful if you could unambiguously describe the semantics you think are crucial, ideally with examples that illustrate the differences between the results you want and those you would like to prevent, so we can evaluate both those judgements and any proposals that may arise from this thread.


In cases where we know it statically, yes, and that's a common case I bet we can optimize. In the general case, we won't know until runtime that the type will be Int, and someone will have to do the dynamic lookup.


I don't know what you have in mind, but I can spell out what I know is implementable (relatively) efficiently and where I see things as becoming problematic. Let's revisit your original example. The parts relevant to my examples are these:

// (A) Definition
protocol P {
    static var isEquatable: Bool { get }
}

// (B) Default implementation
extension P {
    static var isEquatable: Bool { false }
}

// (C) More-constrained default implementation
extension P where Self : Equatable {
    static var isEquatable: Bool { true }
}

// (D) Conformance
extension Array : P {}

What we get today is that, at the point (D) where Array is stated to conform to P, the compiler looks at both implementations (B) and (C). For this case, (C) is not applicable because the Array<Element> for an arbitrary Element is not guaranteed to be Equatable. Therefore, the compiler chooses (B) and we get the surprising behavior.
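To make the surprise concrete, here is a self-contained sketch of (A) through (D) you can run; because the unconstrained conformance (D) picked (B) once for all Element types, the answer never changes:

```swift
protocol P {
    static var isEquatable: Bool { get }          // (A)
}
extension P {
    static var isEquatable: Bool { false }        // (B)
}
extension P where Self: Equatable {
    static var isEquatable: Bool { true }         // (C)
}
extension Array: P {}                             // (D)

// (C) was not applicable when (D) was checked, so (B) is the witness:
print(Array<Int>.isEquatable)  // prints "false", even though [Int] is Equatable
```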

Now, we could imagine a different, conditional conformance of Array to P that only works for Element types that are themselves Equatable (and, thus, the Array is Equatable). Let's call it D' because I've always wanted to be a mathematician:

// (D') Alternative, conditional conformance
extension Array : P where Element: Equatable {}

Now, because we know that Array<Element> will be Equatable, both of the isEquatable implementations (B) and (C) are usable (their requirements are met), so we have to make a choice---should it be (B) or should it be (C)? Well, (C) is more specialized, i.e., it has additional requirements on top of (B), and therefore is assumed to be better (faster, more accurate, etc.), so we pick that one. That's what happens, as expected.
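As a runnable sketch of (D') selecting the more specialized witness (same declarations as the original example, with only the conformance changed):

```swift
protocol P {
    static var isEquatable: Bool { get }          // (A)
}
extension P {
    static var isEquatable: Bool { false }        // (B)
}
extension P where Self: Equatable {
    static var isEquatable: Bool { true }         // (C)
}
extension Array: P where Element: Equatable {}    // (D')

// Both (B) and (C) satisfy the requirement; (C) is more specialized and wins:
print(Array<Int>.isEquatable)  // prints "true"
```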

Now, what we want is to get (C) for when the Element type happens to be Equatable, and (B) when it is not Equatable. By looking at (B) and (C), we can encode a decision procedure here, and it's pretty easy. At the point where we know the Element type---whether it happens in the compiler's type checker, optimizer, or at runtime, we'd prefer that it not matter, as @Joe_Groff points out---we want to effectively do this check:

  if Element is Equatable {
    use implementation (C)
  } else {
    use implementation (B)
  }
I believe that's completely implementable. The compiler, seeing both (B) and (C), can emit that if/else block to determine which function to use for the witness of the conformance, so conformance (D) has the behavior we want.

But, there are limitations, so let's see where the model breaks.

First of all, there can be ambiguities. Let's extend the example with another protocol Q and another implementation of isEquatable:

protocol Q { }

// (E) Another constrained implementation
extension P where Self : Q {
    static var isEquatable: Bool { true }
}
// (F) Array conditionally conforms to Q
extension Array: Q where Element: Q { }

Now, conformance of Array to P has three potential implementations of isEquatable to consider: (B), (C), and (E). (C) is more specialized than (B), and (E) is more specialized than (B), but (C) and (E) can't really be ordered. So if we have a type like this that conforms to both Equatable and Q:

struct X: Equatable, Q { }

What happens for the conformance of Array<X> to P? All of (B), (C), and (E) apply, so I think this must be an error. If we model this case today by extending the conformance of Array to P with both conditions on its Element type, like this:

// (D'') conformance with an ambiguity
extension Array : P where Element: Equatable, Element: Q {}

we end up getting an ambiguity error:

t2.swift:2:16: note: multiple matching properties named 'isEquatable' with type 'Bool'
    static var isEquatable: Bool { get }
t2.swift:6:16: note: candidate exactly matches
    static var isEquatable: Bool { false }
t2.swift:10:16: note: candidate exactly matches
    static var isEquatable: Bool { true }
t2.swift:19:16: note: candidate exactly matches
    static var isEquatable: Bool { true }

I consider that to be the correct behavior here, because there is no "best" answer. I would be fine with the original conformance of Array to P (marked (D) at the beginning) becoming ambiguous and triggering an error at compile time. I think it would be a major problem with the design if this ambiguity could happen at run time.

To be clear, the fix here is not too complicated, and could possibly even be suggested by the compiler. You effectively need to provide another constrained implementation that is more specialized than both (C) and (E):

// (G) Yet another constrained implementation, better than the others!
extension P where Self : Equatable, Self: Q {
    static var isEquatable: Bool { true }
}

That fixes the problem, and we still have a (relatively simple) decision algorithm:

switch (Element is Equatable, Element is Q) {
  case (true, true): use implementation (G)
  case (true, false): use implementation (C)
  case (false, true): use implementation (E)
  case (false, false): use implementation (B)
}
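For completeness, a runnable version of the disambiguated setup: (B), (C), (E), and (G), plus the doubly-conditional conformance (D''). With (G) present there is a unique most-specialized candidate for Array&lt;X&gt;:

```swift
protocol P { static var isEquatable: Bool { get } }
protocol Q {}

extension P { static var isEquatable: Bool { false } }                       // (B)
extension P where Self: Equatable { static var isEquatable: Bool { true } }  // (C)
extension P where Self: Q { static var isEquatable: Bool { true } }          // (E)
extension P where Self: Equatable, Self: Q {                                 // (G)
    static var isEquatable: Bool { true }
}

extension Array: Q where Element: Q {}                      // (F)
extension Array: P where Element: Equatable, Element: Q {}  // (D'')

struct X: Equatable, Q {}

// (G) is more specialized than both (C) and (E), so it is the witness:
print(Array<X>.isEquatable)  // prints "true"
```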

This depends on having the complete set of potential witnesses visible at the point where the conformance is declared. If we were to move the definition of the protocol Q and the corresponding protocol P extension implementation (E) into another module, we'd lose the ability to select implementation (E) because it's not visible to us at build time. I consider that a good thing, but I could certainly imagine it causing some confusion when factoring code into a different module changes the overall behavior. And it's not something that I consider fixable, because even if you take the performance hit of doing a runtime lookup to find all of (B), (C), and (E), you've lost the ability to ensure (prior to runtime) that there can never be an ambiguity.



I appreciate the extensive reply, and I will dig into it in detail, but can we please first answer my earlier question of whether you consider it acceptable to say that all programs without existentials or non-final classes are semantically static? My definition of that term is simple: a semantically static program can be implemented without type-based dynamic dispatching provided all the sources are available to the compiler.

There's a whole class of backward-compatible ways you might extend Swift generics (or to "fix" things that are broken today, such as cross-module dispatching) that would make the semantics of some such programs dynamic. I want to know if you consider any of those points in the design space essential destinations for the language. The answer determines both how I understand the possible futures of Swift's generics system, and whether my ideas are worth discussing at all in this forum.


I think the important thing is not whether the type is Int, but whether the conformance's behavior is conditional on the availability of other conformances on the type or its associated types. Since we don't have any way currently of abstracting over constraints, the set of conditional arguments to a conformance seems to me like it should always be statically knowable.

I agree that it is statically knowable, but I have concerns with having the caller compute this information:

  • You would have to compute the union of all potential constraints to pass in (e.g., both Equatable and Q in my extended example), including evaluating all dependent constraints (e.g., you might need to evaluate both T: Collection and T.Element: Equatable for a different form of the example). Some of that computation will be redundant with the decision procedure.

  • This makes the set of potential witnesses ABI, which I don't think we want for libraries that build with library evolution.


Having the caller compute the information seems to me like the only way to maintain modularity, and thereby retain the ability to specialize in the optimizer and have any hope of rational behavior in the presence of retroactive conformances. It also seems to me that making the conditional witnesses part of the ABI is not that terrible a constraint: if you add new conditional requirements, there are reasonable retroactive defaults for them, either being "not present" in existing code, or falling back to dynamic lookup if that happens to work better in practice. If the conformance evolves in the other direction and no longer needs the conditional requirements, then the requirements become redundant ignored arguments, which is maybe inefficient but at least is not constraining.


As I read it, "semantically static" is equivalent to saying that one can monomorphize a Swift program without existentials or non-final classes, as in (e.g.) Rust's generics system, specializing your way into having no dynamic dispatch.

That is almost true, although the ability to construct infinite types at runtime means it's not completely true. For example, run this:

protocol InfiniteTypes {
  associatedtype Assoc: InfiniteTypes
  var assoc: Assoc { get }
}

struct ToInfinity<T>: InfiniteTypes {
  var assoc: ToInfinity<Self> { .init() }
}

func recursive<T: InfiniteTypes>(_ value: T) {
  if Int.random(in: 0..<100) != 42 {
    recursive(value.assoc)
  }
}

recursive(ToInfinity<Int>())


You can't monomorphize that, because each recursive step builds a new ToInfinity<...> type.



That's what I thought I meant…

That is almost true, although the ability to construct infinite types at runtime means it's not completely true.

No fair; you just broke my brain.

/me makes repairs

I guess I don't think of this program as requiring type-based dynamic dispatch, because calls to recursive always end up in the same function body, regardless of the type of the argument. I am talking about being able to decide the program's control flow without having to make decisions based on types at runtime.


I’m pretty sure you can use mutual recursion with @Douglas_Gregor’s scheme to produce Turing-complete nested generics.

It’s, uh…left as an exercise to the reader! :-)


Generally, polymorphic recursion is supported. You can try it out:

enum PolyList<A> {
    case empty
    indirect case nonempty(A, PolyList<[A]>) // Yes, [A], not A
}

protocol Monoid {
    static var empty: Self { get }
    func append(_ rhs: Self) -> Self
}

extension Array : Monoid where Element: Monoid {
    static var empty: Self { get { [] } }
    func append(_ rhs: Self) -> Self {
        return self + rhs
    }
}

func combine<A : Monoid>(_ list: PolyList<A>) -> A {
    switch list {
    case .empty: return A.empty
    case let .nonempty(x, list):
        return combine(list).reduce(x, { $0.append($1) }) // A gets instantiated to [A] here!
    }
}

As to why you'd want to do this, I asked a StackOverflow question which might be relevant: Applications of polymorphic recursion.
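A small usage sketch of the idea: the Int: Monoid conformance here is hypothetical, added only so there is something concrete to feed through combine, and the recursive enum case needs indirect to compile:

```swift
enum PolyList<A> {
    case empty
    indirect case nonempty(A, PolyList<[A]>)  // tail holds a *deeper* element type
}

protocol Monoid {
    static var empty: Self { get }
    func append(_ rhs: Self) -> Self
}

extension Array: Monoid where Element: Monoid {
    static var empty: Self { [] }
    func append(_ rhs: Self) -> Self { self + rhs }
}

func combine<A: Monoid>(_ list: PolyList<A>) -> A {
    switch list {
    case .empty:
        return A.empty
    case let .nonempty(x, rest):
        return combine(rest).reduce(x) { $0.append($1) }  // A := [A] here
    }
}

// Hypothetical conformance, for illustration only:
extension Int: Monoid {
    static var empty: Int { 0 }
    func append(_ rhs: Int) -> Int { self + rhs }
}

let list: PolyList<Int> = .nonempty(1, .nonempty([2, 3], .empty))
print(combine(list))  // 1 + (2 + 3) = 6
```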


Sorry, I don't know what you could possibly mean. Swift's generics are not turing complete at compile time, and it shouldn't surprise anyone that anything written in Swift might be turing complete at runtime.

I'm talking about whether, given full program sources, the system could determine which function bodies execute at any point without inspecting types at runtime. I'll give you a quick example of a program that's semantically dynamic to help illustrate:

protocol P { func f() }
struct X : P { func f() { print("X") } }
struct Y : P { func f() { print("Y") } }
let a = CommandLine.arguments.map { $0 == "X" ? X() as P : Y() as P }
for x in a { x.f() } // Which f() gets called?

The question of which f() gets called on each iteration of the loop can't be answered without inspecting the type of each element of the array. This program uses existentials to create dynamic semantics. As long as the program encodes the command line arguments into distinct types, we couldn't write an equivalent program using just generics, because (I hope!) today, generic dispatch is semantically static.

Yes, I'm aware. How is that related? Update: After looking at your StackOverflow post I understand why this would be relevant if we were thinking in terms of monomorphization, but that's not really the question I'm trying to address here.


Sorry, I misunderstood what you meant. Your follow up comment with the example makes things much clearer, thanks. Today you can write code like

func f<A>(_ _: A) {
  if (MemoryLayout<A>.size == 0) {
    // ...
  }
}
Is this program "semantically static"? It is certainly not parametric, that's for sure. Not sure if there's a difference between parametricity and the "semantically static" property that you're trying to describe.


Yes, that is semantically static. It doesn't do anything. It's just a little more interesting if you actually call that function, but it's still semantically static, because which function body runs for a given call in source code can be determined without inspecting types at runtime, if all the source is visible to the compiler.

Given that my definition of “semantically static” has been revised a few times here, I thought I should try to nail it down rigorously and formally. It's really hard, though, and that is a good indicator that the whole concept might (cough) be nonsense and I might be wasting everybody's time. So, to those of you who hadn't already figured it out, consider yourself warned. For anyone not yet deterred, I'm still going to take one last stab at defining what I meant.

Part of what makes it hard does indeed come from infinite types. Doug demonstrated that the set of concrete types in the program cannot be known at compile time, even if all sources are visible. Here's an even simpler demo:

struct X<T> {
  static func f(_ n: Int) {
    if n != 0 { X<Self>.f(n - 1) } // nesting level of X's depends on n
  }
}

Given the possibility of mutual recursion, the set of possible concrete types can be arbitrarily complex and not even easily characterized, much less enumerated.¹

Fortunately, accepting this fact led me to an approach that I think captures what I've been trying to say, and it's better formulated as a property of conformances than of programs, where by “conformance” I mean specifically the set of implementations used to satisfy any given protocol's requirements. I hope @Douglas_Gregor will confirm this to be part of the intended design:

In Swift, at the moment a concrete type is bound to a generic parameter, we can determine all of the type's conformances to any protocols.

I also hope that this is intended:

The determination of conformances can be done entirely based on declarations visible in the scope where the binding occurs.

Thanks for your indulgence, everybody! Now to dig into earlier posts from Doug and Joe…

¹ I can easily see how the possibility of arbitrary types at run time could cause ambiguities that can only be diagnosed at run time (which Doug mentioned), and wonder if that was considered before we made infinite types “a thing.”


Hey Doug, I finally dug.
(sorry everybody)

Since the semantics of the program depend on this check, I think the three of us would probably agree that it's more than just a preference: it's essential that it not matter, because those distinctions aren't present in the user model of the language. If that's what Joe meant by “semantically equivalent static and dynamic implementations,” I 100% agree to its importance.¹

[schnipp good schtuff]

This is where you started to lose me, so I dumped all of the code into a file that I could compile and analyze. Then I made a slide deck to see the relationships.

Sure, because neither (C) nor (E) is more specialized than the other. It's a little surprising that Array<X>() doesn't produce an ambiguity error in today's compiler.

If we model this case today by extending the conformance of Array to P with both conditions on its Element type… we end up getting an ambiguity error

I don't understand what you mean by “model this case today.” Are you just trying to trigger the same error that should have happened with Array<X>() above? Is this just a very complicated way of demonstrating what we can see by doing:

struct Y : Equatable, P, Q {} // same ambiguities

or is there something deeper that should be understood here?

Maybe the reason I am not surprised by any of this is that I'm really only thinking about semantics. I take it for granted that two types matching all the same conformances and where clauses should have their protocol requirements satisfied by the same extensions, even if one of the types is generic.

I consider that to be the correct behavior here, because there is no "best" answer.


I would be fine with the original conformance of Array to P (marked (D) at the beginning) becoming ambiguous and triggering an error at compile time.

With just (A), (B), (C), and (D) in the program? Can you explain what ambiguity you see there?

I think it would be a major problem with the design if this ambiguity could happen at run time.

Given that infinite types can cause arbitrarily complex generic types to be generated at run time, how would you go about proving that none of the possible combinations turn out to be like Array<X> in your example? Is there something about the limited expressivity of where clauses that makes it possible to make all those determinations at compile time?

You may remember, from our days long before Swift, that I'm not 100% sure this kind of ambiguity ought to be fatal. As you said, when one implementation of a requirement is more specialized than another, the first is “assumed to be better (faster, more accurate, etc.).” Why then should we not assume that when neither of two implementations is more specialized, it doesn't really matter which one is chosen (as long as it's deterministic)? If we can't diagnose all possible ambiguities at compile time, I think it would be very interesting to consider making an arbitrary choice and logging a warning.

To be clear, the fix here is not too complicated, and could possibly even be suggested by the compiler. You effectively need to provide another constrained implementation that is more specialized than both (C) and (E)

Sure. Presumably any number of sufficiently specialized extensions on Array could also be used to fix it?

Wow, after this you get into the cross-module stuff, and my brain is shutting down. I'll have to come back to this again later.

¹ I guess I misread what Joe wrote as “equivalent implementation of static and dynamic semantics.” Ah, semantics! Such a joy.


I agree that those properties are intended.

