Amending SE-0143: conditional conformances don't imply other conformances

Currently SE-0143 is a little unclear on whether a conditional conformance implies conformances to any inherited protocols automatically. I’m proposing we clarify this as a “no”.

The proposal currently says:

With conditional conformances, the constraints for the conformance to the inherited protocol may not be clear, so the conformance to the inherited protocol will need to be stated explicitly. For example:

protocol P { }
protocol Q : P { }
protocol R : P { }

struct X<T> { }

extension X: Q where T: Q { }
extension X: R where T: R { }

Note that both of the constrained extensions could imply the conformance to P. However, because the two extensions have disjoint sets of constraints (one requires T: Q, the other T: R), it becomes unclear which constraints should apply to the conformance to P.

which sounds reasonable, but then it goes on to say:

In cases where the different sets of constraints used to describe the implied inherited conformances can be ordered, the least-specialized (i.e., most general) constraints will be used for the implied inherited conformance. For example:

protocol R: P { }
protocol S: R { }

struct Y<T> { }

extension Y: R where T: R { }
extension Y: S where T: S { }

The conformances of Y: R and Y: S both imply the conformance Y: P. […] Therefore, Y will conform to P when T: R.

I think that the unclear logic from the previous situation still applies: it’s likely that Y: P when T: P, not when T: R. Pretty much all Collection wrappers are of this form:

struct MyWrapper<Wrapped: Sequence> {}
extension MyWrapper: Sequence {} // Wrapped: Sequence
extension MyWrapper: Collection where Wrapped: Collection {} // *
extension MyWrapper: BidirectionalCollection where Wrapped: BidirectionalCollection {}

If the * line were omitted, there would be an implied conformance of the form:

extension MyWrapper: Collection where Wrapped: BidirectionalCollection {}

which is unnecessarily strong. And, noticing this later and changing it to Wrapped: Collection is actually a breaking change.

Thus, I think we should disallow the automatic inference and require an explicit conformance to be stated, which comes in two relatively simple forms:

  • adding it to one of the conditional extensions (easiest, and is the explicit form of the implied conformance), extension Y: R, P where T: R {} in the second example, or
  • creating a new extension (to allow having different bounds), extension Y: P where ... {} in the second.
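In code, the first of those forms applied to the second example might look like the following (a compilable sketch; the Int conformance and the requiresP helper are hypothetical, added just to exercise it):

```swift
protocol P { }
protocol R: P { }
protocol S: R { }

struct Y<T> { }

// Form 1: state P alongside the existing conditional conformance.
extension Y: R, P where T: R { }
extension Y: S where T: S { }

// Form 2 (mutually exclusive with the above, shown for comparison):
// extension Y: P where T: P { }

// Hypothetical helpers, not from the proposal, just to exercise the conformance:
extension Int: R { }
func requiresP<U: P>(_: U) -> Bool { true }
```

With this, requiresP(Y<Int>()) type-checks because Y<Int>: P follows from the explicitly stated conditional conformance and Int: R.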


A few other things to note:

  • we should be able to reverse this decision (i.e. allow conditional conformances to imply others) in a backwards-compatible way in the future, if we decide it would be useful

  • this doesn’t affect inference from non-conditional conformances, even if there are other conditional conformances that could theoretically be a source for the inference; e.g. this will still infer P from Q:

    protocol P { }
    protocol Q : P { }
    protocol R : P { }
    struct X<T> { }
    extension X: Q { }
    extension X: R where T: R { }
  • I don’t think this will break much code now, because this implication rarely actually worked due to some bugs. I have an implementation at, and it required no changes to the stdlib (source compatibility testing currently running).

  • The implementation currently includes a fixit for matching the implied behaviour (i.e. adding , P to the existing extension), but not creating a whole new extension.



To me it reads straightforwardly like a typo in the proposal, such that the intended result was that there’s an implied Y: P where T: P.

Is the upshot that implementing this implied conditional conformance correctly is proving difficult? If so, temporarily shipping without it seems like a pragmatic solution.

However, I don’t think (from a design perspective) it’s a good idea to have one set of rules for conditional conformances and another for unconditional ones here. There are an awful lot of bugs filed reporting that implied conditional conformances don’t work, suggesting that users actively expect them to work. Meeting that user expectation will make the language more intuitive and enjoyable.

For which case? I don't think this works, since the conditional requirements could be completely unrelated to T: P, like X: Q where T: SomeRandomProtocol and X: R where T == Int. Even if they're related, it's going to be hard or impossible to compute the intersections (for the first case) or to have good rules for relaxing requirements (for the second case), and then somehow synthesise witnesses for Y: P's requirements that only need T: P, despite all of the user's code using stricter requirements.

The patch I link above actually does most of the work required for this to work properly, so no, this isn't a problem in that respect.

I agree in theory that having inconsistent rules is suboptimal, but I think the benefits outweigh the costs here: the implied conformances are the compiler inserting invisible, often incorrect and backwards-incompatible-to-fix code.

One thing to note here is, at least in the standard library, the recent generics features like recursive constraints and conditional conformances allowed removing a lot of unnecessary protocol inheritance (pretty much all of the underscored protocols disappeared), so implying conformances is far less important because it's no longer hiding implementation details so heavily: most conformances are now interesting.

You are correct that they don't work, but I think the major problem is that they produce bad errors: users actively expect the compiler not to confuse them. The problem is that the compiler currently just happens to recurse incorrectly, which leads to explicit conformances not existing by the time any implied ones are constructed. Having a proper rule and proper handling of them means much better and clearer highlighting of what's gone wrong, plus fixits.

Hmm, you’re right there. I didn’t think this through. I was merely agreeing with your statement earlier…

…but that is not something that the compiler can infer in the general case.

Yes, you’re right that this would go a long way. You’ve convinced me that the proposed amendment is the way forward.

It looks like MyWrapper and Wrapped are a bit mixed up here:

I agree, this needs clarification, and we should go towards being unambiguous and explicit. When there are multiple possible matches for a base protocol through subprotocols, the compiler should ask for an explicit declaration of conformance to the base protocol instead of applying complex rules.

To reduce code duplication, we might consider some sort of syntax to disambiguate which subprotocol extension should be used for the conformance to the base protocol.


Can you say more about this? Why would adding Collection conformance to MyWrapper later be a breaking change? That change would make some instances that hadn’t previously conformed to Collection start conforming, but shouldn’t affect any that were already collections by virtue of wrapping bidirectional collections.

One drawback to removing implicit conformances is that it would potentially make inserting a protocol into a hierarchy a breaking change. For example, version 1 of a library defines a protocol Bar:

// Library:
protocol Bar {}
func doSomething<T: Bar>(_ x: T)

// User code:
struct Static: Bar {}
extension MyWrapper: Bar where Wrapped: Bar {}


In the next version, a new protocol is added above Bar, so the library looks like this:

// Library:
protocol Foo {}
protocol Bar: Foo {}
func doSomething<T: Foo>(_ x: T)     // this gets generalized a little

// User code:
struct Static: Bar {}
extension MyWrapper: Bar where Wrapped: Bar {}
doSomething(MyWrapper(Static()))     // this doesn't work without implicit conformance

Under the current model, both Static and MyWrapper<Wrapped: Bar> would conform to Foo without changes, but if we remove implicit conformance, MyWrapper would need the additional declaration. (If I’m wrong, and this is already a breaking change, then this isn’t as much of a problem.)
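For concreteness, here is a compilable sketch of the version-2 situation with the extra declaration the user would have to write under the amendment (the MyWrapper body is hypothetical filler):

```swift
// Library (version 2):
protocol Foo {}
protocol Bar: Foo {}
func doSomething<T: Foo>(_ x: T) {}

// User code:
struct MyWrapper<Wrapped> { var wrapped: Wrapped }  // hypothetical wrapper
struct Static: Bar {}
extension MyWrapper: Bar where Wrapped: Bar {}
// The declaration that becomes necessary once implied conformances are removed:
extension MyWrapper: Foo where Wrapped: Bar {}

doSomething(MyWrapper(wrapped: Static()))  // OK with the explicit Foo conformance
```

Static still picks up Foo for free, because its conformance to Bar is unconditional; only the conditional MyWrapper: Bar needs the extra line.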


Oops, thanks!

I think we have that functionality, via explicit conformances: extension ...: Subprotocol -> extension ...: Subprotocol, BaseProtocol. If I’m misunderstanding what you’re going for, be sure to tell me!

It’s not adding the Collection conformance, so much as changing the already existing conformance (which just happens to be implicit, initially) to have different bounds. Specifically, the compiler infers requirements on parameters based on how they’re used, like func foo<T>(x: [T: Int]) will have T: Hashable, which can be used inside foo independently of the dictionary and x. In the case of my example, it would mean that if one had X<T: Collection>, and defined a function like func bar<T>(x: X<MyWrapper<T>>), T would go from implicitly conforming to BidirectionalCollection to implicitly only conforming to Collection.
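The requirement-inference behaviour described above can be seen in a small compilable sketch (the function name is hypothetical):

```swift
// T: Hashable is inferred from the [T: Int] parameter, and can then be
// used inside the body independently of the dictionary:
func foo<T>(x: [T: Int], key: T) -> Bool {
    var seen = Set<T>()      // legal only because T: Hashable was inferred
    seen.insert(key)
    return seen.contains(key)
}
```

Nothing in foo's signature states T: Hashable explicitly; the compiler derives it from Dictionary's Key: Hashable requirement, which is exactly the mechanism that makes changing a conformance's bounds later source-breaking.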

This breaking change is a little niche, and comes up most often with Equatable, Comparable and Hashable (with functions using Dictionary and ordered collections in their signatures). This conditional-implication issue does apply to that case, since the latter two inherit from Equatable: a conditional MyWrapper: Hashable where Wrapped: Hashable would imply MyWrapper: Equatable where Wrapped: Hashable, but one would usually want Wrapped: Equatable.
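A sketch of that Hashable/Equatable pattern, with both conformances stated explicitly so that Equatable gets the weaker Wrapped: Equatable bound (the wrapper and its witnesses are hypothetical):

```swift
struct MyWrapper<Wrapped> { var wrapped: Wrapped }

// Stated explicitly, Equatable gets the bound one usually wants…
extension MyWrapper: Equatable where Wrapped: Equatable {
    static func == (lhs: MyWrapper, rhs: MyWrapper) -> Bool {
        lhs.wrapped == rhs.wrapped
    }
}
// …rather than the implied MyWrapper: Equatable where Wrapped: Hashable.
extension MyWrapper: Hashable where Wrapped: Hashable {
    func hash(into hasher: inout Hasher) { hasher.combine(wrapped) }
}
```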

That’s true!

Unfortunately (or fortunately, depending on one’s perspective…), it seems the current thinking is that this may be a breaking change with resilience enabled, anyway. :(

Great explanation, thanks so much for this. :+1:t2:

Since the discussion seems to be slow and mostly in agreement, I’ve written up the amendment:


The discussion above, and the proposal’s example involving func foo<A>(…), had me scratching my head for a good little while here. It seemed to me that the compiler was inferring too much, regardless of the various implicit inherited conformances or lack thereof.

The light bulb went on when I realized — and Huon, please tell me if I’m misunderstanding this! — that the compiler’s type checker reads the where in a conditional conformance not just as “if,” but as “if and only if.”

In other words, this code:

struct Y<T> { }
extension Y: P where T: Q { }

…states not only that if T is a Q then Y<T> is a P, but also the converse: if we can infer that Y<T> is a P, then T must be a Q.

After realizing this, that section made a lot more sense.
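A compilable sketch of that reading (the requiresQ helper and the Int conformance are mine, just to make the inferred bound visible):

```swift
protocol P { }
protocol Q: P { }

struct Y<T> { }
extension Y: P where T: Q { }

struct NeedsP<A: P> { }

func requiresQ<T: Q>(_: T) -> Bool { true }

// The signature only mentions NeedsP<Y<B>>, yet the type checker concludes
// B: Q, since that's the only way Y<B> can satisfy NeedsP's A: P requirement:
func f<B>(_: NeedsP<Y<B>>, value: B) -> Bool {
    requiresQ(value)   // compiles only because B: Q was inferred
}

extension Int: Q { }   // hypothetical conformance, just to exercise f
```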

This if-and-only-if behavior hadn’t occurred to me when I read SE-0143’s section on overlapping conformances, and even now it’s not clear to me that the proposal implies it. The text mentions the possibility of some kinds of overlapping conformances being allowed in the future. Doesn’t the type checker’s behavior effectively shut the door on that possibility by making inferences based on the impossibility of overlaps? Or is the assumption that if overlapping conformances were ever to be implemented, they would simply be a breaking change for code that relied on such inferences?


That’s a… very good point. We may have to reconsider this, because I certainly hadn’t explicitly thought of the implications here, but maybe others had.

Assuming we don’t allow retroactive overlapping conformances (as in, all conformances for a given type to a given protocol must be defined in the same module), we could retain the behaviour and it just becomes a breaking change to introduce an overlapping conformance, or, to default to safety, we’d require a non-overlapping conformance to be marked as “forever unique” or something, to indicate it can participate in such inference.

But, even then, having such overlapping conformances brings a few other problems to solve, such as: what are the generic bounds of f in:

struct X<T> {}
extension X: P where T: P1 {}
extension X: P where T: P2 {}
extension X: P where T == SomeType {}

struct RequiresP<U: P> {}

func f<V>(_: RequiresP<X<V>>) {}

We’d either need to have a bound X<V>: P, or some sort of disjunction V: P1 || V: P2 || V == SomeType, neither of which we currently support (but we control the compiler so presumably we could, but extra complexity!). The former could presumably be inferred, but the latter couldn’t and would thus have to be programmer-visible/writable.

The proposal does briefly touch on the pain of having disjunctions: “It is no longer possible to uniquely say what is required to make a generic type conform to a protocol, because there might be several unrelated possibilities”, but as you say, it doesn’t explicitly cover this.

Thoughts, @Douglas_Gregor?

(Although, if we do disable that inference, that opens the way for implied conditional conformances, in my mind!)


Leaving the bigger questions to Douglas — I’m well in over my head here! — I’ll just note that in this example paraphrased from the proposal:

protocol P { }
protocol R: P { }
protocol S: R { }

struct Y<T> { }

extension Y: R where T: R { }
extension Y: S where T: S { }

struct Z<A: P> { }

// Compiler could infer:
// 1. It must be that Y<B>: P because Z uses it as type param,
// 2. and the only way Y<B>: P is if either B: R or B: S,
//    per the conformances above,
// 3. and since S: R, then either way
// 4. the type checker can assume B: R.
func foo<B>(x: Z<Y<B>>) { }

…on the face of it, it doesn’t seem onerous at all to me to drop step 2 in the inference above, and require foo to explain explicitly why it thinks Y<B> can be the type param of Z:

func foo<B: R>(x: Z<Y<B>>) { }

Now adding this future / retroactive conformance:

extension Y: P where T: P { }

…is no longer a breaking change, because foo explicitly declared that it expects B: R.

You asked me:

I was thinking about the issues @Paul_Cantrell and you discussed above, but was too busy to elaborate.


Yeah, that’s the solution I’d start with if we did decide to change this. However, I talked to Doug and others and the thinking is fairly strongly against overlapping conformances: they bring a whole pile of other issues.

I’ve added a paragraph to the pull request summarising this discussion. Thanks for bringing it up!


That makes more sense to me. Frankly I was surprised the proposal left the door open as wide as it did, especially to overlapping conformances that form diamonds!

The new paragraph clarifies nicely. This part raised another question (forgive me):

A possible alternative … would be to disable this sort of inference from conditional conformances, and instead require the user to write func foo<A: P>. This could also allow the conformances to be implied, since it would no longer be such a backwards-compatibility problem.

Even if the door is firmly shut on overlapping conformances, is there still something to be said for that approach? I find it less surprising — even expected, if I hadn’t read this proposal — that the compiler would infer Y: P where T: R as opposed to inferring A: R in the func foo<A>(x: Z<Y<A>>) example.

If the compiler didn’t make that foo inference, if it required func foo<A: P> instead, then could it safely assume Y: P where T: R (even without overlapping conformance)?

Or if not that, then could it make an inference more like “there exists some supertype U of R such that Y: P where T: U”?

The amendment was accepted after core team discussion, and an implementation has landed at

It includes fixits, quoting one of my comments:

I’ve changed the non-Swifty fixit to … three fixits! Similar to try/try!/try?, it suggests three possibly-sensible things, e.g. given protocol Sub: Base {} and extension X: Sub where T: Sub it offers:

  • “relaxed” requirements: extension X: Base where T: Base,
  • the same requirements: extension X: Base where T: Sub,
  • different requirements: extension X: Base where <#requirements#>.

The relaxed-requirements fixit is a heuristic that tries to match the common patterns with protocols like Hashable/Equatable and the collection hierarchy. It only triggers if all the requirements are of the form <parameter>: Sub, and they each get rewritten to <parameter>: Base (meaning this fixit isn’t offered for extension X: Sub where T == Int or extension X: Sub where T: Unrelated). The other two are offered in all cases.

The full experience of accepting any of them is, for instance, going from:

extension X: Sub where T: Sub {}

to:

extension X: Base where <#requirements#> {}

extension X: Sub where T: Sub {}
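As a compilable sketch using the Sub/Base placeholders, accepting the “relaxed” fixit would leave you with something like (the Int conformance is hypothetical, just to exercise it):

```swift
protocol Base { }
protocol Sub: Base { }

struct X<T> { }

// The "relaxed" fixit's result; the "same requirements" alternative would
// read `where T: Sub`, and the third form leaves a <#requirements#> placeholder.
extension X: Base where T: Base { }
extension X: Sub where T: Sub { }

extension Int: Sub { }   // hypothetical, just to exercise the conformances
```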

Yeah, I agree that that example is a bit subtle and a bit of an edge case. I actually realised a slightly more convincing one: SomeConditionalSequence<T>.Iterator will infer the appropriate bounds to make SomeConditionalSequence<T> a Sequence, to get access to Iterator.

I think it could for the specific case of a single conformance to a subprotocol of P, but as soon as you have more than one, an explicit conformance is needed. I don’t think the supertype inference is useful: the only thing one can use with that in practice is T: R, as it’s not known which parts of R and its supertypes are used in the conformance.