Ergonomics: generic types conforming "in more than one way"

I agree that it is statically knowable, but I have concerns with having the caller compute this information:

  • You would have to compute the union of all potential constraints to pass in (e.g., both Equatable and Q in my extended example), including evaluating all dependent constraints (e.g., you might need to evaluate both T: Collection and T.Element: Equatable for a different form of the example). Some of that computation will be redundant, depending on the decision procedure.

  • This makes the set of potential witnesses ABI, which I don't think we want for libraries that build with library evolution.


Having the caller compute the information seems to me like the only way to maintain modularity, and thereby retain the ability to specialize in the optimizer and have any hope of rational behavior in the presence of retroactive conformances. It also seems to me that making conditional witnesses part of the ABI is not that terrible a constraint; if you add new conditional requirements, there are reasonable retroactive defaults for them: either being "not present" in existing code, or falling back to dynamic lookup if that happens to work better in practice. If the conformance evolves in the other direction and no longer needs the conditional requirements, then the requirements become redundant ignored arguments, which is maybe inefficient but at least is not constraining.


As I read it, "semantically static" is equivalent to saying that one can monomorphize a Swift program without existential or non-final generics, as in (e.g.) Rust's generics system, specializing your way into having no dynamic dispatch.

That is almost true, although the ability to construct infinite types at runtime means it's not completely true. For example, run this:

protocol InfiniteTypes {
  associatedtype Assoc: InfiniteTypes
  var assoc: Assoc { get }
}

struct ToInfinity<T>: InfiniteTypes {
  var assoc: ToInfinity<Self> { .init() }
}

func recursive<T: InfiniteTypes>(_ value: T) {
  if Int.random(in: 0..<100) != 42 {
    recursive(value.assoc)
  }
}

You can't monomorphize that, because each recursive step builds a new ToInfinity<...> type.
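The type growth can be observed directly. Here is a self-contained variant (declarations repeated so the snippet stands alone, and with a depth limit instead of randomness so it terminates) that prints each instantiation:

```swift
protocol InfiniteTypes {
  associatedtype Assoc: InfiniteTypes
  var assoc: Assoc { get }
}

struct ToInfinity<T>: InfiniteTypes {
  var assoc: ToInfinity<Self> { .init() }
}

// Bounded variant of `recursive`, so the growth is visible and finite.
func recursive<T: InfiniteTypes>(_ value: T, depth: Int) {
  print(T.self)
  if depth > 0 {
    recursive(value.assoc, depth: depth - 1)
  }
}

recursive(ToInfinity<Int>(), depth: 2)
// Each step instantiates a fresh, more deeply nested type:
//   ToInfinity<Int>
//   ToInfinity<ToInfinity<Int>>
//   ToInfinity<ToInfinity<ToInfinity<Int>>>
```

Since `depth` is supplied at runtime, no finite set of instantiations computed ahead of time can cover every execution.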



That's what I thought I meant…

That is almost true, although the ability to construct infinite types at runtime means it's not completely true.

No fair; you just broke my brain.

/me makes repairs

I guess I don't think of this program as requiring type-based dynamic dispatch, because calls to recursive always end up in the same function body, regardless of the type of the argument. I am talking about being able to decide the program's control flow without having to make decisions based on types at runtime.


I’m pretty sure you can use mutual recursion with @Douglas_Gregor’s scheme to produce Turing-complete nested generics.

It’s, uh…left as an exercise to the reader! :-)


Generally polymorphic recursion is supported. You can try it out

enum PolyList<A> {
    case empty
    indirect case nonempty(A, PolyList<[A]>) // Yes, [A] not A
}

protocol Monoid {
    static var empty: Self { get }
    func append(_ rhs: Self) -> Self
}

extension Array: Monoid where Element: Monoid {
    static var empty: Self { [] }
    func append(_ rhs: Self) -> Self {
        return self + rhs
    }
}

func combine<A: Monoid>(_ list: PolyList<A>) -> A {
    switch list {
    case .empty: return A.empty
    case let .nonempty(x, list):
        return combine(list).reduce(x, { $0.append($1) }) // A gets instantiated to [A] here!
    }
}

As to why you'd want to do this, I asked a StackOverflow question which might be relevant: Applications of polymorphic recursion.
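To see the instantiation chain concretely, here is a self-contained version of the example above, plus a demo-only sum-monoid conformance for Int (my addition, not part of the original):

```swift
protocol Monoid {
    static var empty: Self { get }
    func append(_ rhs: Self) -> Self
}

extension Array: Monoid where Element: Monoid {
    static var empty: Self { [] }
    func append(_ rhs: Self) -> Self { self + rhs }
}

// Demo-only conformance: the sum monoid on Int.
extension Int: Monoid {
    static var empty: Int { 0 }
    func append(_ rhs: Int) -> Int { self + rhs }
}

enum PolyList<A> {
    case empty
    indirect case nonempty(A, PolyList<[A]>) // note [A], not A
}

func combine<A: Monoid>(_ list: PolyList<A>) -> A {
    switch list {
    case .empty:
        return A.empty
    case let .nonempty(x, rest):
        // The recursive call instantiates combine at [A].
        return combine(rest).reduce(x) { $0.append($1) }
    }
}

// The second element already has type [Int], the third would be [[Int]], etc.
let nested: PolyList<Int> = .nonempty(1, .nonempty([2, 3], .empty))
print(combine(nested)) // 6
```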


Sorry, I don't know what you could possibly mean. Swift's generics are not Turing-complete at compile time, and it shouldn't surprise anyone that anything written in Swift might be Turing-complete at runtime.

I'm talking about whether, given full program sources, the system could determine which function bodies execute at any point without inspecting types at runtime. I'll give you a quick example of a program that's semantically dynamic to help illustrate:

protocol P { func f() }
struct X : P { func f() { print("X") } }
struct Y : P { func f() { print("Y") } }
let a: [P] = CommandLine.arguments.map { $0 == "X" ? X() as P : Y() as P }
for x in a { x.f() } // Which f() gets called?

The question of which f() gets called on each iteration of the loop can't be answered without inspecting the type of each element of the array. This program uses existentials to create dynamic semantics. As long as the program encodes the command line arguments into distinct types, we couldn't write an equivalent program using just generics, because (I hope!) today, generic dispatch is semantically static.
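For contrast, here's a sketch of the generics-only counterpart (declarations repeated so the snippet stands alone, with f() returning a String instead of printing so the result is checkable): each call site fixes the concrete type, so which f() runs is decidable from the source alone.

```swift
protocol P { func f() -> String }
struct X: P { func f() -> String { "X" } }
struct Y: P { func f() -> String { "Y" } }

// No existentials: T is fixed at each call site, so the body that
// x.f() dispatches to is knowable without runtime type inspection.
func g<T: P>(_ x: T) -> String { x.f() }

print(g(X())) // "X" -- statically X.f
print(g(Y())) // "Y" -- statically Y.f
```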

Yes, I'm aware. How is that related? Update: After looking at your StackOverflow post I understand why this would be relevant if we were thinking in terms of monomorphization, but that's not really the question I'm trying to address here.


Sorry, I misunderstood what you meant. Your follow up comment with the example makes things much clearer, thanks. Today you can write code like

func f<A>(_ _: A) { 
  if MemoryLayout<A>.size == 0 {
    // ...
  }
}
Is this program "semantically static"? It is certainly not parametric, that's for sure. Not sure if there's a difference between parametricity and the "semantically static" property that you're trying to describe.


Yes, that is semantically static. It doesn’t do anything. It’s just a little more interesting if you actually call that function, but it's still semantically static, because which function body runs for a given call in source code can be determined without inspecting types at runtime, if all the source is visible to the compiler.
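A sketch of what that looks like when the function is actually called (my variant returns a String rather than doing nothing, so the behavior is observable): MemoryLayout<A>.size is fixed the moment A is bound, so the branch taken is determined per instantiation, not per runtime value.

```swift
func f<A>(_ value: A) -> String {
  // MemoryLayout<A>.size is a constant for each instantiation of f,
  // so this branch is decided as soon as A is bound.
  if MemoryLayout<A>.size == 0 {
    return "zero-sized"
  } else {
    return "sized"
  }
}

print(f(()))  // "zero-sized" -- Void occupies no storage
print(f(1))   // "sized"
```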

Given that my definition of “semantically static” has been revised a few times here, I thought I should try to nail it down rigorously and formally. It's really hard, though, and that is a good indicator that the whole concept might (cough) be nonsense and I might be wasting everybody's time. So, to those of you who hadn't already figured it out, consider yourself warned. For anyone not yet deterred, I'm still going to take one last stab at defining what I meant.

Part of what makes it hard does indeed come from infinite types. Doug demonstrated that the set of concrete types in the program cannot be known at compile time, even if all sources are visible. Here's an even simpler demo:

struct X<T> {
  static func f(_ n: Int) { 
    if n != 0 { X<Self>.f(n - 1) } // nesting level of X's depends on n
  }
}

Given the possibility of mutual recursion, the set of possible concrete types can be arbitrarily complex and not even easily characterized, much less enumerated.¹

Fortunately, accepting this fact led me to an approach that I think captures what I've been trying to say, and it's better formulated as a property of conformances than of programs, where by “conformance” I mean specifically the set of implementations used to satisfy any given protocol's requirements. I hope @Douglas_Gregor will confirm this to be part of the intended design:

In Swift, at the moment a concrete type is bound to a generic parameter, we can determine all of the type's conformances to any protocols.

I also hope that this is intended:

The determination of conformances can be done entirely based on declarations visible in the scope where the binding occurs.

Thanks for your indulgence, everybody! Now to dig into earlier posts from Doug and Joe…

¹ I can easily see how the possibility of arbitrary types at run time could cause ambiguities that can only be diagnosed at run time (which Doug mentioned), and wonder if that was considered before we made infinite types “a thing.”


Hey Doug, I finally dug.
(sorry everybody)

Since the semantics of the program depend on this check, I think the three of us would probably agree that it's more than just a preference: it's essential that it not matter, because those distinctions aren't present in the user model of the language. If that's what Joe meant by “semantically equivalent static and dynamic implementations,” I 100% agree about its importance.¹

[schnipp good schtuff]

This is where you started to lose me, so I dumped all of the code into a file that I could compile and analyze. Then I made a slide deck to see the relationships.

Sure, because neither C nor E is more specialized. It's a little surprising that Array<X>() doesn't produce an ambiguity error in today's compiler.

If we model this case today by extending the conformance of Array to P with both conditions on its Element type… we end up getting an ambiguity error

I don't understand what you mean by “model this case today.” Are you just trying to trigger the same error that should have happened with Array<X>() above? Is this just a very complicated way of demonstrating what we can see by doing:

struct Y : Equatable, P, Q {} // same ambiguities

or is there something deeper that should be understood here?

Maybe the reason I am not surprised by any of this is that I'm really only thinking about semantics. I take it for granted that two types matching all the same conformances and where clauses should have their protocol requirements satisfied by the same extensions, even if one of the types is generic.

I consider that to be the correct behavior here, because there is no "best" answer.


I would be fine with the original conformance of Array to P (marked (D) at the beginning) becoming ambiguous and triggering an error at compile time.

With just (A), (B), (C), and (D) in the program? Can you explain what ambiguity you see there?

I think it would be a major problem with the design if this ambiguity could happen at run time.

Given that infinite types can cause arbitrarily complex generic types to be generated at run time, how would you go about proving that none of the possible combinations turn out to be like Array<X> in your example? Is there something about the limited expressivity of where clauses that makes it possible to make all those determinations at compile time?

You may remember, from our days long before Swift, that I'm not 100% sure this kind of ambiguity ought to be fatal. As you said, when one implementation of a requirement is more specialized than another, the first is “assumed to be better (faster, more accurate, etc.).” Why then should we not assume that when neither of two implementations is more specialized, it doesn't really matter which one is chosen (as long as it's deterministic)? If we can't diagnose all possible ambiguities at compile time, I think it would be very interesting to consider making an arbitrary choice and logging a warning.

To be clear, the fix here is not too complicated, and could possibly even be suggested by the compiler. You effectively need to provide another constrained implementation that is more specialized than both (C) and (E)

Sure. Presumably any number of sufficiently specialized extensions on Array could also be used to fix it?

Wow, after this you get into the cross-module stuff, and my brain is shutting down. I'll have to come back to this again later.

¹ I guess I misread what Joe wrote as “equivalent implementation of static and dynamic semantics.” Ah, semantics! Such a joy.


I agree that those properties are intended.



Hi Joe,

I think I'm very much in support of some more explicit way of grouping algorithm specializations, like your “multiple bodies with where clauses” approach (although I have other specific ideas about how to express it). However, this does seem in principle to be a mostly syntactic distinction that makes programs clearer and mistakes easier to diagnose, but doesn't fundamentally change what's possible semantically.

I say that because it seems like at least part of the compilation process will always need visibility into all files in any given module, so anything that “bubbles up” from a syntactically unified declaration could also be “collected” during that part of the compilation process. Am I mistaken?


P.S., having realized that infinite types potentially create combinations that are unknowable at compile-time, I think I understand why modularity might be a problem. But unless we're willing to have the compiler explore the space of combinations of generic types and concrete parameters to prove that none of them cause ambiguity errors (and somehow decide that the exploration can stop!), I think it might be inherent in the language design.

Well, my brain might be broken, but at least I find myself in good company :laughing:


Here's an example where the standard library gets this wrong. I do not yet understand why this works, but I've found I can make things work in similar situations by putting implementations of index(_:offsetBy:), index(_:offsetBy:limitedBy:), and distance(from:to:) in both my Collection and my conditional BidirectionalCollection conformances.

Good find.

The issue is similar to your previous mystery/challenge, whose behavior I already explained.

At the point where DefaultIndices conforms to Collection, the only available implementation of distance(from:to:) is the default forward-only version from Collection. So that becomes the witness for it.

Inside an extension of Collection, such as your generic_distance method, only the witnesses for Collection methods are available, and in the case of DefaultIndices that means the default version of distance(from:to:).

However, by adding a concrete implementation of that method within DefaultIndices itself, so that it is visible and available at the point where DefaultIndices conforms to Collection, then the concrete version becomes the best choice for the protocol witness.

In practice, that generally means the extension which conforms DefaultIndices to Collection should include an implementation of distance(from:to:), which simply forwards the call along to _elements.

I have put up a PR to add that (along with index(_:offsetBy:) and index(_:offsetBy:limitedBy:)) to DefaultIndices.
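Since _elements is internal to the standard library, here is a self-contained analog using a hypothetical wrapper type of my own that shows the shape of the fix: because the forwarding method is visible at the point where the Collection conformance is declared, it (rather than Collection's default O(n) walk) becomes the witness for distance(from:to:).

```swift
// Hypothetical stand-in for DefaultIndices: a wrapper exposing a
// base collection's indices as its own elements.
struct IndicesView<Base: Collection>: Collection {
  var base: Base

  typealias Index = Base.Index
  var startIndex: Index { base.startIndex }
  var endIndex: Index { base.endIndex }
  subscript(position: Index) -> Index { position }
  func index(after i: Index) -> Index { base.index(after: i) }

  // The fix: forward to the base collection. Because this declaration
  // is visible where IndicesView declares its Collection conformance,
  // it becomes the witness for Collection.distance(from:to:).
  func distance(from start: Index, to end: Index) -> Int {
    base.distance(from: start, to: end)
  }
}

let view = IndicesView(base: Array(0..<10))
print(view.distance(from: view.startIndex, to: view.endIndex)) // 10
```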

• • •

I agree with you that this is an unfortunate state of affairs. It is confusing to humans, and it requires boilerplate to make dynamic dispatch work as desired. I hope we can find some way to rectify the situation, so that default implementations in sub-protocols can play the same role through conditional conformances. But I’m not sure how to make that work.

The general rule is that, where there are multiple implementations of a method or property that could be applied in a given situation, the more specialized implementation will be used. @dabrahams and others have identified some exceptions to that rule.

If we want to eliminate those exceptions in the long run, then we need to declare and document the exceptional behaviors as bugs. In the interim, we need to explain how to work around the bugs so that, if and when the bugs are fixed, code will not be broken by the fix. We need to warn against writing code that depends upon the behavior provided by the bugs. That documentation needs to appear in The Swift Programming Language.

If we instead anticipate declaring this exceptional behavior as proper, then the behavior defines the language. In that case, we need to document the behavior so that users will understand how the language operates. That documentation needs to appear in The Swift Programming Language.

At the very end of the Protocol Extension section of the Protocols chapter of The Swift Programming Language, we have the following note that comes close to addressing the circumstances at issue:

If a conforming type satisfies the requirements for multiple constrained extensions that provide implementations for the same method or property, Swift uses the implementation corresponding to the most specialized constraints.

That note is limited to cases in which a conforming type satisfies "the requirements for multiple constrained extensions that provide implementations for the same method or property." [emphasis added] The issue, at hand, is slightly different.

The bug identified at SR-12881 involves a type that conforms to two different protocols. Analogous to the circumstances described in the note, one of the protocols inherits from the other protocol. In other words, the child protocol is a more specialized version of the parent protocol.

It appears that when a type conforms to both protocols in this way, and the parent protocol provides a method that itself calls into a second method implemented by both the parent and the child protocol, then Swift uses the parent protocol's implementation of that second method, not the more specialized implementation from the child protocol. Is that a correct statement of the situation?

It seems to me that the note at the end of the Protocols chapter and this situation and any similar situations all deserve to be gathered together into their own subsection at the end of the Protocol Extensions section of the Protocols chapter, under the subheading:

Special Behaviors Involving Multiple Implementations of a Method or Property

Or, is there a reason why this behavior should not be documented?


That general rule does not really apply to Swift, and trying to describe how the system works in terms of exceptions to that rule would be a lot like trying to describe the motion of the planets as if the Earth were the center of the universe. Swift does overload resolution to pick the most specialized implementation that applies at a use site. Once it picks one, it doesn't change that based on parameters to the use site. This is why, when you reference a name from a generic context, it won't pick overloads that only apply to specific types:

func foo(x: Int) { print("x") }
func foo<T>(x: T) { print("y") }

func bar<T>(x: T) { foo(x: x) } // always calls foo<T>

bar(x: 1) // prints "y"

For protocol conformances, the use site is the point at which the conformance is declared, on the type or extension declaration. The issue here is that, with conditional conformances, we may want to treat the conditional conformances as distinct use sites, so that the compiler and runtime can pick more specific implementations appropriate to when those conditional conformances are available.
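A minimal sketch of the same point for conformances (with hypothetical names of my own): the witness is fixed where the conformance is declared, so a more specific implementation added in a constrained extension is found by direct overload resolution at a call site, but not through the protocol.

```swift
protocol Describable {
  func describe() -> String
}

extension Describable {
  func describe() -> String { "default" }
}

struct Box<T> {}

// Witness chosen here: only the protocol-extension default is a
// candidate, because the constrained method below can't witness an
// unconditional conformance.
extension Box: Describable {}

extension Box where T == Int {
  func describe() -> String { "Int-specific" }
}

let box = Box<Int>()
print(box.describe())                  // "Int-specific" -- overload resolution at the call site
print((box as Describable).describe()) // "default" -- witness bound at the conformance declaration
```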


Hi @Nevin,

The mystery is why in this component all those implementations of the same methods must be repeated in the BidirectionalCollection conformance. What I've found is that if you don't include both sets of overloads, either the library traps because it incorrectly thinks you're trying to go backwards in a non-BidirectionalCollection, or my tests fail because we get O(N) implementations of the RandomAccessCollection requirements that are supposed to be O(1).

I haven't had time to fully analyze it yet, but it would be trivial to separate from Tensorflow if you want to have a look. The tests it needs to pass are here and you'll also need this support file.

I haven’t read through your full code sample, but it sounds like the type you use for Selection itself does not have concrete implementations of these methods (e.g., distance(from:to:)) visible at the place where it conforms to Collection, but it does where it conforms to BidirectionalCollection.

That would explain the behavior you describe: if Sampling only has the methods defined for its Collection conformance, then the only thing the compiler knows about Selection is that it is a Collection, so methods like Sampling.distance(from:to:) can only find the method on Selection that witnesses Collection.distance(from:to:).

However if Sampling also has the methods redefined for its BidirectionalCollection conformance, then the compiler also knows that Selection is bidirectional, so it can find the concrete implementation of distance(from:to:) defined for the conformance of Selection to BidirectionalCollection.

In other words, if you make sure that the types used for Selection have concrete implementations for methods like distance(from:to:) which are available at the point where they conform to Collection, then Sampling will only need one implementation for the same, visible at its point of conformance to Collection.

Sure, if you can separate the example into a piece of free-standing code that I can download and run, I’d be happy to try it out.