[Proposal draft] Conditional conformances


Great job thinking this all through (as usual), and I’ll be very happy to have Optional and Array become Equatable. Here are some of my thoughts on the library evolution aspect of this:

- Removing a conditional conformance isn’t allowed, obviously.
- Adding a conditional conformance is just like adding an unconditional conformance—it needs availability info.

Right. The main wrinkle I see here is that, when you add a conditional conformance, you will effectively end up with overlapping conformances when running an old application against a new library. Do you want me to capture these cases in the proposal in a section on “Resilience” or “Library Evolution”, like I’ve tried to capture the effect on ABI Stability? (I think that makes sense)

- It would be nice™ if making a conditional conformance more general was allowed. Since the plan doesn't allow overlapping conformances, I think this is actually implementable: just don’t put the constraints in the symbol name. I don’t know how to represent the backwards-deploying aspects of this right now, so it probably makes sense to forbid it today, but I think it would be nice if the implementation left the door open.

Yeah. It’s a different set of witness tables that one would need to gather to use the conditional conformance in the newer version of the library vs. in an older version of a library. That’s okay if we leave the witness-table-gathering to the runtime, but not so great if we statically provide the witness tables.

Would this be a case in which the win from having this feature and letting the runtime gather the witness tables offsets the losses from doing these operations at runtime? I would like to think that in cases like this there is at least the option to opt for more flexibility.

I’m sure we can find a reasonable solution to this that doesn’t incur a significant runtime penalty, e.g., by teaching the runtime conformance specialization machinery to handle a “redirecting” specialization that maps from the requirements of the older (more specialized) version to the requirements of the newer (less specialized) version.

  - Doug

Very glad to hear that Doug :).

···


On 28 Sep 2016, at 19:45, Douglas Gregor <dgregor@apple.com> wrote:

On Sep 28, 2016, at 11:40 AM, Goffredo Marocchi <panajev@gmail.com> wrote:

On 28 Sep 2016, at 17:51, Douglas Gregor via swift-evolution <swift-evolution@swift.org> wrote:

On Sep 27, 2016, at 5:06 PM, Jordan Rose <jordan_rose@apple.com> wrote:

What other designs were considered and rejected? It seems like some kind of escape hatch would be preferred if you happen to get into this situation, though you make some really good points about the pitfalls.

I don’t have a fully-baked alternative proposal—it would probably have to involve some kind of preference rule for picking the “best” set of (consistent!) conformances to satisfy a particular request, a disambiguation syntax for cases where that preference rule does the wrong thing, and some way of teaching the dynamic-casting machinery to do the same thing.

Yeah your description is already sounding like a lot of work :)

Just to clarify: when you say “bans”, do you mean that if Wrapped: Equatable & HasIdentity then SomeWrapper is not Equatable, or do you mean you get a compile error because there are two constrained conformances of SomeWrapper to Equatable?

You get a compile error if there are two conformances of SomeWrapper to Equatable; it doesn’t actually matter whether they are conditional, but people are far more likely to expect to be able to have overlapping conditional conformances.
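
For concreteness, the kind of overlap being ruled out here looks roughly like this sketch. SomeWrapper and HasIdentity are the names already used in this thread; HasIdentity’s requirement and the diagnostic wording are filled in purely for illustration:

protocol HasIdentity {
  var identity: Int { get }
}

struct SomeWrapper<Wrapped> {
  let wrapped: Wrapped
}

extension SomeWrapper: Equatable where Wrapped: Equatable {
  static func ==(lhs: SomeWrapper, rhs: SomeWrapper) -> Bool {
    return lhs.wrapped == rhs.wrapped
  }
}

// error: overlapping conformance of 'SomeWrapper' to 'Equatable'
extension SomeWrapper: Equatable where Wrapped: HasIdentity {
  static func ==(lhs: SomeWrapper, rhs: SomeWrapper) -> Bool {
    return lhs.wrapped.identity == rhs.wrapped.identity
  }
}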

Just to clarify in my mind, the problem here is that Swift would need runtime machinery to look at Wrapped and select the conformance to Equatable based on whether Wrapped: Equatable or Wrapped: HasIdentity. Is that right?

Correct. And diagnose / disambiguate if there are multiple conformances that match.

Otherwise with the proposal as written Swift would need to check Wrapped to validate the constraints but once it does there is only one implementation of the conformance to pick from.

Right.

I believe you about the type checker; I’m just naively assuming that inserting a table to select the correct conformance isn’t a big cost, because you would canonicalize the constraints and, the constraints being disjoint, for any type T there would only ever be one matching entry.

The table computation would have to be a runtime thing, because we don’t know all of the conformances until then, but yes—it’s doable.

What would be the problem with allowing multiple conformances to Equatable so long as the constraints are disjoint?

From the language perspective, “disjoint” would have to mean that there are requirements that actively conflict, e.g., one extension has “Wrapped.Element == Int” and the other has “Wrapped.Element == String”.

Yes, I was also imagining protocols being disjoint so long as there is no protocol they share in common except the empty protocol.

The compiler won’t know that a given type can’t conform to two specific protocols, though, unless they have some kind of direct conflict (like my example above of Wrapped.Element being equated to two different concrete types) or we have some mechanism in the language to state that two protocols are mutually exclusive.

or the concrete type only adopts one of the available protocols?

Unless you assume that you have a fully-determined, closed system where you know about every potential conformance of a concrete type, this isn’t a question that can be answered at compile time.

The problem is that importing a random library can immediately introduce breakage when it is a compile error, or worse if both reference a shared library you also import… unless we’re saying extensions of types outside your module are only visible in the declaring module, which is pretty restrictive: e.g. some UI toolkit extending UIKit/AppKit classes with conveniences, or extending Array to say it CanLayout if its elements are views, where calling a.layout() tells all the views in the array to lay out. In that example neither the views nor Array would be declared in the module doing the extending.

Now let’s say I want to use the swift-protobuf library and I also use GenericSocialMediaService’s SDK that also incorporates swift-protobuf. I’m just imagining what happens when we both try to define extensions. It would be nice if they could declare Array: ProtobufMessage where Element: GSMSEntityProtocol while I was still able to provide Array: ProtobufMessage where Element: MyOwnProtocol.

Yes, I know.

That said the restrictions can always be relaxed later. I’d rather have this feature without overlapping conformances than not have it.

Right. If we find a model for it that works.

With conditional conformances, the question of which extension "wins" the implied conformance begins to matter, because the extensions might have different constraints on them. For example:

protocol P { }
protocol Q: P { }
protocol R: P { }

struct X4<T> { }

extension X4: Q where T: Q { } // implies conformance to P
extension X4: R where T: R { } // error: implies overlapping conformance to P

Both of these constrained extensions imply a conformance to P, but the implied conformances to P are overlapping and, therefore, result in an error.

If the related P conformance were inherited from conformance to Q or R then the rules would (IMHO) make more sense. Wouldn’t the extra rule you need simply be that either Q or R must provide a complete conformance to P (no mix-n-match)?

A conformance to P introduced by the first extension would not be usable by the second extension, because the conformance to P introduced by the first extension might depend on details of “T: Q”… and the second extension can’t assume that “T: Q”.

What I’m saying is that if T: Q then the entire implementation of P must come from the first extension. If T: R then the entire implementation of P must come from the second extension.

This fails when T: Q&R, though, because you don’t know which implementation of P to choose—the one based on T: Q or the one based on T: R? That’s the reason for completely banning the overlap—it avoids the possibility of ambiguity (at compile time or at runtime).

If T: P then neither extension applies because there are overlapping extensions. Basically if there is any overlap then the compiler only considers explicit extensions that match without ambiguity, otherwise the extension is ignored.

func takes<Value: P>(_ value: Value) { }

struct CQ: Q { }
struct CR: R { }
struct CP: P { }

takes(X4<CQ>()) //fine, uses first extension
takes(X4<CR>()) //fine, uses second extension
takes(X4<CP>()) //error: ambiguous conformance

In the model you describe, this last one isn’t “ambiguous conformance”, it’s “no conformance” because neither of the (potential) conformances of X4 to P has its requirements satisfied. The case you didn’t enumerate is:

struct CQR: Q, R { }
takes(X4<CQR>()) // error: ambiguous conformance

This relates to the disjoint discussion above:

extension X4: P where T: P { } //error: ambiguous conformance

Even though there is technically a conformance to P available we just ignore it.

I don’t understand your “error: ambiguous conformance” annotation here. This extension of X4 is less specialized than either of the other extensions, yet provides a suitable P conformance, and (if it were present in your example immediately above) would allow “takes(X4<CP>())” to succeed. Indeed, this is the fix for the example in SE-0143 because it eliminates the overlap by providing a common, shared conformance to P.

Seems like this would be knowable statically by looking at the extensions and the protocol relationships in the constraints?

You can see that either of the first two extensions is more specialized than this third extension, if you can see them all. Dynamic work comes in when multiple modules with extensions are linked together.

What is the rationale for picking the least specialized extension? That’s not what I would naively expect to happen. If T: R & S then I would expect the more specialized S:R implementation to be preferred, and the explicit R implementation to kick in when T: R.

We have to pick the least-specialized extension, because it’s the only one that works. If you pick a more-specialized extension to realize the implied conformance to P, the less-specialized extension doesn’t have a conformance to P that it can rely on.
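
A self-contained sketch of that constraint, using the X4 example from above (the members p/q/r and the stored value are made up so the forwarding bodies have something to call, and this assumes the conditional-conformance syntax being proposed): the single P conformance sits on the least-specialized extension, and the Q- and R-constrained extensions can then rely on it.

protocol P { func p() }
protocol Q: P { func q() }
protocol R: P { func r() }

struct X4<T> { let t: T }

// The one (shared, least-specialized) conformance to P.
extension X4: P where T: P {
  func p() { t.p() }
}

// Each of these can assume X4: P, because T: Q (or T: R) implies T: P.
extension X4: Q where T: Q {
  func q() { t.q() }
}

extension X4: R where T: R {
  func r() { t.r() }
}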

That’s what I was trying to address by saying no mix-n-match. If you’re going to wander into overlapping territory you must supply two entirely separate implementations of P, one explicit and one inside the Q extension. (I fully admit that might not help the implementation complexity - I don’t know enough about the type checker to answer that).

If we were to allow overlap, then yes, maybe it should have to be explicit. My complaints/concerns about overlapping conformances still apply ;)

  - Doug

···

On Sep 29, 2016, at 3:05 PM, Russ Bishop <xenadu@gmail.com> wrote:

On Sep 29, 2016, at 11:12 AM, Douglas Gregor <dgregor@apple.com> wrote:

On Sep 28, 2016, at 9:48 PM, Russ Bishop <xenadu@gmail.com> wrote:

Is the decision on "no-overlapping-conformances” something that’s seen as set in stone permanently, set in stone for the near future, or perhaps at least somewhat open to reconsideration at the present moment?

There hasn’t been a decision per se, so in that sense it’s open to reconsideration.

I have a strong *personal* bias against overlapping conformances, because I feel that the amount of complexity that they introduce into the language and its implementation far outweighs any benefits.

I would like a bit further clarification on what the prohibition of overlapping conformances implies.

I think I presented it poorly in the proposal, and will see if I can find a clearer exposition (for posterity, if not soon enough to aid the review).

For example, consider this modification of your example in a Swift that allows for same type constraints in extensions. Would this be allowed?

Short answer: no. Longer answer below.

There would be two different conformances to Foo for SomeWrapper, but they would never “overlap” (i.e. both be candidates for the same concrete type).

struct SomeWrapper<Wrapped> {
  let wrapped: Wrapped
}

protocol Foo {
  associatedtype Bar
  func bar() -> Bar
}

extension SomeWrapper: Foo where Wrapped == String {
  func bar() -> String {
    return "Hello"
  }
}

extension SomeWrapper: Foo where Wrapped == Int {
  func bar() -> Int {
    return 0
  }
}

This is a case where we *could* determine that the two conformances will never overlap, because we can statically determine that trying to satisfy the requirements of both extensions at the same time results in a conflict—Wrapped cannot be both equal to an Int and a String. However, I think it’s a bad idea to allow these two conformances, for a couple of reasons:

(1) It’s going to immediately feature-creep as developers want to be able to treat more kinds of extensions as non-overlapping, e.g., calling two protocols mutually-exclusive (Russ’s example), or introducing some kind of negative constraint (‘Wrapper: !Foo’) to arbitrarily break overlaps.
(2) The issues I mentioned to Russ about the type checker having to treat these as disjunctions, which can lead us into yet more exponential behavior.
(3) I don’t think it’s good design; if you’re going to write an extension to make type X conform to protocol P, you should do so in the most general way that is reasonable for X and P—not pick the one-off case you need this moment. This way, when you do hit two ways in which X conforms to P, the compiler complains and nudges you to factor the conformance into something more reusable and non-overlapping.
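
For the Foo/SomeWrapper example above, “the most general way” could look like the following single conditional conformance. This refactoring is only an illustration (it forwards to the wrapped value instead of special-casing Wrapped == String and Wrapped == Int), not text from the proposal:

protocol Foo {
  associatedtype Bar
  func bar() -> Bar
}

struct SomeWrapper<Wrapped> {
  let wrapped: Wrapped
}

// One general conformance instead of one per concrete Wrapped type.
extension SomeWrapper: Foo where Wrapped: Foo {
  func bar() -> Wrapped.Bar {
    return wrapped.bar()
  }
}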

Secondarily, I understand the reason for letting the “least specific” candidate conformance win (it is the most general). But I wonder if this might leave performance on the table in some cases where a more specific implementation could use knowledge of the more specific details to implement the members more efficiently. Using the example in your proposal, what if knowing `T` conforms to `S`, not just `R` allows `X5` to provide a more efficient implementation of the members of `P` and `R`? If so, it seems unfortunate to leave that performance on the table. Is this a valid concern? Or is it unlikely to come up often enough in practice to matter?

It’s a valid concern, and I’m sure it does come up in practice. Let’s create a small, self-contained example:

protocol P {
  func f()
}

protocol Q: P { }

struct X<T> { let t: T }

extension X: P where T: P {
  func f() {
    /* general but slow */
  }
}

extension X where T: Q {
  func f() {
    /* fast because it takes advantage of T: Q */
  }
}

struct IsQ : Q { }

func generic<U: P>(_ value: U) {
  value.f()
}

generic(X<IsQ>())

We’d like for the call to “value.f()” to get the fast version of f() from the second extension, but the proposal doesn’t do that: the conformance to P is “locked in” to the first extension.

If we assume that we can see all of the potential implementations of “f” to which we might want to dispatch, we could implement some dynamic scheme that tries to pick the most specialized one. Of course, as with overlapping conformances in general, this selection process could result in ambiguities.

  - Doug

···

On Sep 28, 2016, at 4:30 PM, Matthew Johnson <matthew@anandabits.com> wrote:

It’s good to see this starting to happen!

Is the decision on "no-overlapping-conformances” something that’s seen as set in stone permanently, set in stone for the near future, or perhaps at least somewhat open to reconsideration at the present moment?

There hasn’t been a decision per se, so in that sense it’s open to reconsideration.

I see. A related question: if overlapping conditional conformances are disallowed in Swift 4, would e.g. ABI concerns make it infeasible to relax that restriction in future Swift (5, 6, X, etc.)?

It’s hard to be definitive without having a specific design for what overlapping conditional conformances would mean, but I feel fairly confident that we could introduce them later in some form. It would almost certainly involve deployment limitations—one would not be able to dynamically query for an overlapping conformance and have that code run correctly on a Swift 4 runtime—but the general scheme in the ABI should generalize.

I have a strong *personal* bias against overlapping conformances, because I feel that the amount of complexity that they introduce into the language and its implementation far outweighs any benefits. Additionally, they enable use cases (e.g., static metaprogramming-ish tricks) that I feel would be actively harmful to the Swift language’s understandability. Generics systems can get very complicated very quickly, so any extension needs to be strongly motivated by use cases that matter to all or most Swift developers.

This is purely anecdotal but I had a lot of utility code laying around that I’d marked with notes like `// TODO: revisit once conditional conformances are available`.

When I was leaving those notes I was expecting to need overlapping conformances often, but I reviewed them *before* replying and I actually haven’t found an example where having overlapping conformances is both (1) a significant win and also (2) a win in a way that’d be of broad, general interest.

- 80% have no real need for overlapping conditional conformances
- 15% might have “elegance gains” but nothing practically-significant
- 5% would *probably* see real gains but are likely not of broad interest

…which wasn’t what I was expecting, but leaves me a lot more comfortable without overlapping conformances for now than I was in the abstract.

Very interesting, thanks for doing this review!

  - Doug

···

On Sep 30, 2016, at 6:25 AM, plx via swift-evolution <swift-evolution@swift.org> wrote:

On Sep 28, 2016, at 5:53 PM, Douglas Gregor <dgregor@apple.com> wrote:

On Sep 28, 2016, at 1:28 PM, plx via swift-evolution <swift-evolution@swift.org> wrote:

Is the decision on "no-overlapping-conformances” something that’s seen as set in stone permanently, set in stone for the near future, or perhaps at least somewhat open to reconsideration at the present moment?

There hasn’t been a decision per se, so in that sense it’s open to reconsideration.

I have a strong *personal* bias against overlapping conformances, because I feel that the amount of complexity that they introduce into the language and its implementation far outweighs any benefits.

I would like a bit further clarification on what the prohibition of overlapping conformances implies.

I think I presented it poorly in the proposal, and will see if I can find a clearer exposition (for posterity, if not soon enough to aid the review).

For example, consider this modification of your example in a Swift that allows for same type constraints in extensions. Would this be allowed?

Short answer: no. Longer answer below.

There would be two different conformances to Foo for SomeWrapper, but they would never “overlap” (i.e. both be candidates for the same concrete type).

struct SomeWrapper<Wrapped> {
  let wrapped: Wrapped
}

protocol Foo {
  associatedtype Bar
  func bar() -> Bar
}

extension SomeWrapper: Foo where Wrapped == String {
  func bar() -> String {
    return "Hello"
  }
}

extension SomeWrapper: Foo where Wrapped == Int {
  func bar() -> Int {
    return 0
  }
}

This is a case where we *could* determine that the two conformances will never overlap, because we can statically determine that trying to satisfy the requirements of both extensions at the same time results in a conflict—Wrapped cannot be both equal to an Int and a String. However, I think it’s a bad idea to allow these two conformances, for a couple of reasons:

(1) It’s going to immediately feature-creep as developers want to be able to treat more kinds of extensions as non-overlapping, e.g., calling two protocols mutually-exclusive (Russ’s example), or introducing some kind of negative constraint (‘Wrapper: !Foo’) to arbitrarily break overlaps.

Sure, we probably would see requests like this. The fundamental issue here IMO is that type systems are great until they don’t let you express something that is relatively obvious conceptually, such as the above example. This is OK as long as there is an alternative available that isn’t clearly worse (as in more boilerplate-y, less expressive, etc) and it is easily assimilated by the community using the language. But it becomes very frustrating when that isn’t the case. I’m not sure how often we will fall into the latter bucket because of this restriction but it seems likely to happen from time to time.

(2) The issues I mentioned to Russ about the type checker having to treat these as disjunctions, which can lead us into yet more exponential behavior.

Does Dave’s idea of not treating SomeWrapper itself as a type help at all here? That is how I also conceptualize things. SomeWrapper is a “type constructor” and SomeWrapper<MyConcreteType> is a type.

(3) I don’t think it’s good design; if you’re going to write an extension to make type X conform to protocol P, you should do so in the most general way that is reasonable for X and P—not pick the one-off case you need this moment. This way, when you do hit two ways in which X conforms to P, the compiler complains and nudges you to factor the conformance into something more reusable and non-overlapping.

I agree with this in principle. However, sometimes a more performant implementation might be possible with more type information (as discussed below). It may also be the case that sometimes the general implementation is more difficult to write than is worth it for the current application.

Secondarily, I understand the reason for letting the “least specific” candidate conformance win (it is the most general). But I wonder if this might leave performance on the table in some cases where a more specific implementation could use knowledge of the more specific details to implement the members more efficiently. Using the example in your proposal, what if knowing `T` conforms to `S`, not just `R` allows `X5` to provide a more efficient implementation of the members of `P` and `R`? If so, it seems unfortunate to leave that performance on the table. Is this a valid concern? Or is it unlikely to come up often enough in practice to matter?

It’s a valid concern, and I’m sure it does come up in practice. Let’s create a small, self-contained example:

protocol P {
  func f()
}

protocol Q: P { }

struct X<T> { let t: T }

extension X: P where T: P {
  func f() {
    /* general but slow */
  }
}

extension X where T: Q {
  func f() {
    /* fast because it takes advantage of T: Q */
  }
}

struct IsQ : Q { }

func generic<U: P>(_ value: U) {
  value.f()
}

generic(X<IsQ>())

We’d like for the call to “value.f()” to get the fast version of f() from the second extension, but the proposal doesn’t do that: the conformance to P is “locked in” to the first extension.

If we assume that we can see all of the potential implementations of “f” to which we might want to dispatch, we could implement some dynamic scheme that tries to pick the most specialized one. Of course, as with overlapping conformances in general, this selection process could result in ambiguities.

This is what I suspected. I’ll defer to Dave A on how big a concern this is, but it seems to me like a bit of a slippery slope towards sub-optimal performance.

Matthew

···

On Sep 30, 2016, at 1:23 AM, Douglas Gregor <dgregor@apple.com> wrote:

On Sep 28, 2016, at 4:30 PM, Matthew Johnson <matthew@anandabits.com> wrote:

This is purely anecdotal but I had a lot of utility code laying around that I’d marked with notes like `// TODO: revisit once conditional conformances are available`.

When I was leaving those notes I was expecting to need overlapping conformances often, but I reviewed them *before* replying and I actually haven’t found an example where having overlapping conformances is both (1) a significant win and also (2) a win in a way that’d be of broad, general interest.

- 80% have no real need for overlapping conditional conformances
- 15% might have “elegance gains” but nothing practically-significant
- 5% would *probably* see real gains but are likely not of broad interest

…which wasn’t what I was expecting, but leaves me a lot more comfortable without overlapping conformances for now than I was in the abstract.

Very interesting, thanks for doing this review!

I've taken the time to provide a bit more color on the 80/15/5 breakdown because I don't see much discussion of this proposal in terms of concrete situations...just theoretical concerns and theoretical possibilities. I don't have any completed code either, but I have notes and to-do lists for things I was planning to do, and I think seeing even some semi-concrete scenarios might be helpful here.

The "80%" are generally analogous to the `SomeWrapper` in the writeup; as a concrete example, I was waiting on the availability of conditional conformances to resume work on an emulation of structural unions, e.g. something like:

  enum Sum2<A,B> {
    case a(A)
    case b(B)
  }
  
...(and analogously for 3, 4, as-necessary).

There's a very obvious way to write `extension Sum2 : Equatable where A:Equatable, B:Equatable {}`...and at the time I set this aside, I was expecting to also want to come back and have additional conformances for things like `...where A:Equatable, B:AnyObject` (using `===` for comparing `B`) and so on for other combinations.

Upon revisiting such things in light of the proposal, I now think differently: for this case it seems like a better long-term approach anyways to stick to a single conformance and work with it like this:

  extension Sum2:Equatable where A:Equatable, B:Equatable {
    // details elided
  }
  
  /// Adaptor using `ObjectIdentifier` to implement `==`.
  struct ObjectWrapper<Wrapped:AnyObject> : Equatable, Hashable {
    let wrapped: Wrapped
  }
  
...as upon reflection I really would prefer dealing with the hassle of working with `Sum2<A,ObjectWrapper<B>>` in situations where -- in theory -- `Sum2<A,B>` could do -- to the hassle of writing out 4+ conformances for `Sum2` (and so on...even with nice code-gen tools that's going to be a lot of bloat!).
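
For reference, a filled-in version of the `ObjectWrapper` adaptor sketched above might look like this; the `==` and `hashValue` bodies are assumptions based on the doc comment, not code from the original message:

  struct ObjectWrapper<Wrapped: AnyObject>: Equatable, Hashable {
    let wrapped: Wrapped

    static func ==(lhs: ObjectWrapper, rhs: ObjectWrapper) -> Bool {
      return ObjectIdentifier(lhs.wrapped) == ObjectIdentifier(rhs.wrapped)
    }

    var hashValue: Int {
      return ObjectIdentifier(wrapped).hashValue
    }
  }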

What changed my mind was tracing through the implications of conditional conformances for the use-site ergonomics of adaptors like `ObjectWrapper` above; what I mean is, suppose I have a protocol like this:

  protocol WidgetFactory {
    associatedtype Widget
    associatedtype Material
    
    func produceWidget(using material: Material) -> Widget
  }

...then it's rather easy to simply write this type of boilerplate:

  extension ObjectWrapper: WidgetFactory where Wrapped: WidgetFactory {
    typealias Widget = Wrapped.Widget
    typealias Material = Wrapped.Material
    
    func produceWidget(using material: Material) -> Widget {
      return wrapped.produceWidget(using: material)
    }
  }
  
...which thus means I have the tools I need to make my use of wrappers like the `ObjectWrapper` largely transparent at the use sites I care about; e.g. I can write a single conditional conformance like this:

  extension Sum2: WidgetFactory
    where
    A: WidgetFactory, B: WidgetFactory,
    A.Material == B.Material,
    A.Widget == B.Widget {
    
    typealias Widget = A.Widget
    typealias Material = A.Material
    
    func produceWidget(using material: Material) -> Widget {
      switch self {
        case let .a(aa): return aa.produceWidget(using: material)
        case let .b(bb): return bb.produceWidget(using: material)
      }
    }
    
  }
  
...and it will apply even in situations where circumstances left me using `ObjectWrapper` (or similar) on any of the type parameters to `Sum2` (e.g. if I also needed an `Equatable` conformance for whatever reason).

At least for now--when I'm still just revisiting plans and thinking about it in light of the proposal--I really would prefer having a simpler language and writing this type of boilerplate, than having a more-complex language and writing the *other* type of boilerplate (e.g. the 4+ `Equatable` conformances here, and so on for other situations).

Note that I'm not claiming the above is the only use for overlapping conditional conformances -- not at all! -- just that situations like the above comprise about 80% of the things I was intending to do with conditional conformances...and that after revisiting them expecting to be troubled by the proposed banning of overlapping conformances, I'm now thinking I'd wind up not using the overlapping-conformance approach in such cases even if it were available.

So that's the first 80%.

Moving on, the next 15% are places where there's some forced theoretical or aesthetic inelegance due to the lack of overlapping conformances, but none of these seem to have any significant practical import.

A representative case here is that I currently have the following pair:

  /// `ChainSequence2(a,b)` enumerates the elements of `a` then `b`.
  struct ChainSequence2<A:Sequence,B:Sequence> : Sequence
    where A.Iterator.Element == B.Iterator.Element {
    // elided
  }

  /// `ChainCollection2(a,b)` enumerates the elements of `a` then `b`.
  struct ChainCollection2<A:Collection,B:Collection> : Collection
    where A.Iterator.Element == B.Iterator.Element {
    // ^ `where` is not quite right, see below
  }

...and obviously conditional conformances will allow these to be consolidated into a single `Chain2` type that then has appropriate conditional conformances (and for which the cost/benefit for me will tip in favor of adding conditional conformances to `BidirectionalCollection` and `RandomAccessCollection`, also).

On paper--e.g., theoretically--the lack of overlapping conformances leaves in a few aesthetic issues...for example, at present `ChainCollection2` actually has to be one of these:

  // "narrower" option: not all `A`, `B` can necessarily be used together:
  struct ChainCollection2<A:Collection,B:Collection> : Collection
    where
    A.Iterator.Element == B.Iterator.Element,
    A.IndexDistance == B.IndexDistance {
    typealias IndexDistance = A.IndexDistance
  }

  // "wasteful" option: theoretically in some cases we are "overpaying" and
  // using a stronger `IndexDistance`, but now we can use any `A` and `B`
  struct ChainCollection2<A:Collection,B:Collection> : Collection
    where A.Iterator.Element == B.Iterator.Element {
    typealias IndexDistance = IntMax
  }

With overlapping conditional conformances you could have both: one conformance that uses base collections' `IndexDistance` when possible, and another that uses `IntMax` when necessary...but without conditional conformances it's necessary to choose between the "narrower" approach or the "wasteful" approach (preserving the status quo).

If you're following along I'm sure you're aware that in this specific case, this "choice" is purely academic (or purely aesthetic)...if you go with the `IntMax` route there's almost always going to be between "no actual difference" and "no measurable difference", so even if it *maybe* feels a bit icky the right thing to do is get over it and stop making a mountain out of an anthill.

Note that I'm well aware that you can choose to see this as a concrete instance of a more-general problem -- that the lack of overlapping conformances would potentially leave a lot of performance on the table due to forcing similar decisions (and in contexts where there *would* be a real difference!) -- but speaking personally I couldn't find very much in my "chores pending availability of conditional conformance" that both (a) fell into this category and (b) had more than "aesthetic" implications.

This brings me to that last 5% -- the handful of things for which overlapping conformances have nontrivial benefits -- and my conclusion here is that these tended to be things I doubt are of general interest.

An example here is that I like to use a function that takes two sequences and enumerates their "cartesian product", with the following adjustments:

- no specific enumeration *ordering* is guaranteed
- does something useful even with infinite, one-shot sequences...
- ...meaning specifically that it will eventual-visit any specific pair (even when one or both inputs are infinite, one-shot)

...(useful for doing unit tests, mostly), which to be done "optimally" while also dotting all the is and crossing all the ts would currently require at least 8 concrete types:

- 4 sequences, like e.g.:
  - UnorderedProductSS2<A:Sequence, B:Sequence>
  - UnorderedProductSC2<A:Sequence, B:Collection>
  - UnorderedProductCS2<A:Collection, B:Sequence>
  - UnorderedProductCC2<A:Collection, B:Collection>
- 4 iterators (one for each of the above)

...since you need to use a different iteration strategy for each (yes, you don’t *need* 8 types, but I’m trying to “dot all the i’s and cross all the t’s” here).

In theory overlapping conditional conformances could be used to cut that down to only 5 types:

- 1 type like `UnorderedProduct<A:Sequence,B:Sequence>`
- the same 4 iterators from before, each used with the appropriate conformance

...which *is* less code *and* seemingly provides nontrivial gains (the `SS` variant must maintain buffers of the items it's already seen from each underlying sequence, but the others have no such requirement).

But, to be honest, even if those gains are realized, this is the kind of situation I'm perfectly comfortable saying is a "niche" and neither broadly relevant to the majority of Swift developers nor broadly relevant to the majority of Swift code; if overlapping conformances were available I'd use them here, but I'm not going to ask for them just to be able to use them here.

Also, I'm skeptical these gains would be realized in practice: between the proposed "least specialized conformance wins" rule and Swift's existing dispatch rules in generic contexts, it seems like even if overlapping conformances *were* allowed and I *did* use them, I'd still wind up getting dispatched to the pessimal `SS` variant in many cases for which I'd have been hoping for one of the more-optimal versions.

So between the niche-ness of such uses -- and their being 5% or less of what I was hoping to do -- and my skepticism about how dispatch would pan out in practice, I can't get behind fighting for overlapping conformances at this time, unless banning them now would mean banning them permanently.

As already stated, I do think that their absence *will* reveal some *true* pain points, but I think it makes sense to adopt a "wait-and-see" approach here as some more-targeted solution could wind up being enough to address the majority of those future pain points.

These are my more-detailed thoughts after looking at what I was planning to do with conditional conformances once they became available. I realize it doesn't touch on every conceivable scenario and every conceivable use, but I want to reiterate that I did my review expecting to find a bunch of things that I could use as justifications for why Swift absolutely should have overlapping conditional conformances right now...but on actually looking at my plans, I couldn't find anything for which I actually felt that way.

···

On Sep 30, 2016, at 1:23 PM, Douglas Gregor <dgregor@apple.com> wrote:


It’s a valid concern, and I’m sure it does come up in practice. Let’s create a small, self-contained example:

protocol P {
  func f()
}

protocol Q: P { }

struct X<T> { let t: T }

extension X: P where T: P {
  func f() {
    /* general but slow */
  }
}

extension X where T: Q {
  func f() {
    /* fast because it takes advantage of T: Q */
  }
}

struct IsQ : Q { }

func generic<U: P>(_ value: U) {
  value.f()
}

generic(X<IsQ>())

We’d like for the call to “value.f()” to get the fast version of f()
from the second extension, but the proposal doesn’t do that: the
conformance to P is “locked in” to the first extension.

I suppose that's true even if the second extension introduces X : Q?

If we assume that we can see all of the potential implementations of
“f” to which we might want to dispatch, we could implement some
dynamic scheme that tries to pick the most specialized one. Of
course, as with overlapping conformances in general, this selection
process could result in ambiguities.

This is what I suspected. I’ll defer to Dave A on how big a concern
this is, but it seems to me like a bit of a slippery slope towards
sub-optimal performance.

Well, it's unfortunate. We have a similar problem today due to the lack
of conditional conformance, and we deal with it by injecting an
underscored requirement where it doesn't belong, then dispatch through
that. I wonder if the workaround for this limitation is like that, or
something completely different.

Does this work? If not, why not? If so, what makes it fundamentally
different, since it is trying to express the same thing through
different means?

···

on Fri Sep 30 2016, Matthew Johnson <swift-evolution@swift.org> wrote:

-----

public protocol P {
  func f()
}

public protocol Q: P { }

public struct X<T> { let t: T }

internal protocol XFImpl {
  associatedtype TT: P
  static func f(_ x: X<TT>)
}

extension XFImpl {
  static func f(_ x: X<TT>) { /* general but slow */ }
}

extension XFImpl where TT: Q {
  static func f(_ x: X<TT>) { /* fast because it takes advantage of T: Q */ }
}

extension X: P where T: P {
  struct FImpl : XFImpl {
    typealias TT = T
  }

  func f() {
    FImpl.f(self)
  }
}

struct IsQ : Q { }

func generic<U: P>(_ value: U) {
  value.f()
}

generic(X<IsQ>())

--
-Dave

Interesting comment about worries that you'd be dispatched to the least-good SS version. A different language with different constraints, but when LINQ to Objects implementers needed to provide optimized C# implementations for some operations they chose a runtime type check to dispatch to the optimized version. For example, while the API is exposed as IEnumerable<T>, there are method implementations that check for ICollection<T> at runtime in order to hit more efficient implementations. So static dispatch is good, but the win for collections is often big enough to overcome the hit from dynamic dispatch.

···

On Sunday, October 2, 2016, plx via swift-evolution <swift-evolution@swift.org> wrote:

On Sep 30, 2016, at 1:23 PM, Douglas Gregor <dgregor@apple.com> wrote:


Interesting comment about worries that you'd be dispatched to the least-good SS version. A different language with different constraints, but when LINQ to Objects implementers needed to provide optimized C# implementations for some operations they chose a runtime type check to dispatch to the optimized version. For example, while the API is exposed as IEnumerable<T>, there are method implementations that check for ICollection<T> at runtime in order to hit more efficient implementations. So static dispatch is good, but the win for collections is often big enough to overcome the hit from dynamic dispatch.

Yep, and Swift's specialization optimization already knows how to fold `is` and `as` checks after specializing a generic, so checking for a specific type or sub-protocol and jumping into a more optimized implementation for that subtype "dynamically" is still likely to be optimized away.

-Joe
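
A minimal Swift sketch of the runtime-check pattern being discussed (the protocol names and members here are illustrative, not from the thread): a generic algorithm tests for a more capable protocol and takes the faster path when the test succeeds, and after specialization the `as?` check can be folded away.

protocol Summable {
  var values: [Int] { get }
}

protocol FastSummable: Summable {
  var precomputedSum: Int { get }
}

func sum<T: Summable>(_ x: T) -> Int {
  // Analogous to checking for ICollection<T> behind an IEnumerable<T> API.
  if let fast = x as? FastSummable {
    return fast.precomputedSum   // optimized path
  }
  return x.values.reduce(0, +)   // general path
}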

···

On Oct 2, 2016, at 8:56 AM, Callionica (Swift) via swift-evolution <swift-evolution@swift.org> wrote:

On Sunday, October 2, 2016, plx via swift-evolution <swift-evolution@swift.org> wrote:

On Sep 30, 2016, at 1:23 PM, Douglas Gregor <dgregor@apple.com> wrote:

This is purely anecdotal but I had a lot of utility code laying around that I’d marked with notes like `// TODO: revisit once conditional conformances are available`.

When I was leaving those notes I was expecting to need overlapping conformances often, but I reviewed them *before* replying and I actually haven’t found an example where having overlapping conformances is both (1) a significant win and also (2) a win in a way that’d be of broad, general interest.

- 80% have no real need for overlapping conditional conformances
- 15% might have “elegance gains” but nothing practically-significant
- 5% would *probably* see real gains but are likely not of broad interest

…which wasn’t what I was expecting, but leaves me a lot more comfortable without overlapping conformances for now than I was in the abstract.

Very interesting, thanks for doing this review!

I've taken the time to provide a bit more color on the 80/15/5 breakdown because I don't see much discussion for this proposal in terms of concrete situations...just theoretical concerns and theoretical possibilities. I don't have any completed code either, but I have notes and to-do lists for things I was planning to do, and I think seeing even some semi-concrete scenarios might be helpful here.

The "80%" are generally analogous to the `SomeWrapper` in the writeup; as a concrete example, I was waiting on the availability of conditional conformances to resume work on an emulation of structural unions, e.g. something like:

  enum Sum2<A,B> {
    case a(A)
    case b(B)
  }
  
...(and analogously for 3, 4, as-necessary).

There's a very obvious way to write `extension Sum2 : Equatable where A:Equatable, B:Equatable {}`...and at the time I set this aside, I was expecting to also want to come back and have additional conformances for things like `...where A:Equatable, B:AnyObject` (using `===` for comparing `B`) and so on for other combinations.

Upon revisiting such things in light of the proposal, I now think differently: for this case it seems like a better long-term approach anyways to stick to a single conformance and work with it like this:

  extension Sum2:Equatable where A:Equatable, B:Equatable {
    // details elided
  }
  
  /// Adaptor using `ObjectIdentifier` to implement `==`.
  struct ObjectWrapper<Wrapped:AnyObject> : Equatable, Hashable {
    let wrapped: Wrapped
  }
  
...as upon reflection I really would prefer dealing with the hassle of working with `Sum2<A,ObjectWrapper<B>>` in situations where -- in theory -- `Sum2<A,B>` could do -- to the hassle of writing out 4+ conformances for `Sum2` (and so on...even with nice code-gen tools that's going to be a lot of bloat!).

What changed my mind was tracing through the implications of conditional conformances for the use-site ergonomics of adaptors like `ObjectWrapper` above; what I mean is, suppose I have a protocol like this:

  protocol WidgetFactory {
    associatedtype Widget
    associatedtype Material
    
    func produceWidget(using material: Material) -> Widget
  }

...then it's rather easy to simply write this type of boilerplate:

  extension ObjectWrapper: WidgetFactory where Wrapped: WidgetFactory {
    typealias Widget = Wrapper.Widget
    typealias Material = Wrapper.Material
    
    func produceWidget(using material: Material) -> Widget {
      return base.produceWidget(using: material)
    }
  }
  
...which thus means I have the tools I need to make my use of wrappers like the `ObjectWrapper` largely transparent at the use sites I care about; e.g. I can write a single conditional conformance like this:

  extension Sum2: WidgetFactory
    where
    A:WidgetFactory, B:WidgetFactory,
    A.Material == B.Material,
    A.Widget == B.Widget {
    
    typealias Widget = A.Widget
    typealias Material = A.Material
    
    func produceWidget(using material: Material) throws -> Widget {
      switch self {
        case let .a(aa): return aa.produceWidget(using: material)
        case let .b(bb): return bb.produceWidget(using: material)
      }
    }
    
  }
  
...and it will apply even in situations where circumstances left me using `ObjectWrapper` (or similar) on any of the type parameters to `Sum2` (e.g. if I also needed an `Equatable` conformance for whatever reason).

At least for now--when I'm still just revisiting plans and thinking about it in light of the proposal--I really would prefer having a simpler language and writing this type of boilerplate, than having a more-complex language and writing the *other* type of boilerplate (e.g. the 4+ `Equatable` conformances here, and so on for other situations).

Note that I'm not claiming the above is the only use for overlapping conditional conformances -- not at all! -- just that situations like the above comprise about 80% of the things I was intending to do with conditional conformances...and that after revisiting them expecting to be troubled by the proposed banning of overlapping conformances, I'm now thinking I'd wind up not using the overlapping-conformance approach in such cases even if it were available.

So that's the first 80%.

Moving on, the next 15% are places where there's some forced theoretical or aesthetic inelegance due to the lack of overlapping conformances, but none of these seem to have any significant practical import.

A representative case here is that I currently have the following pair:

  /// `ChainSequence2(a,b)` enumerates the elements of `a` then `b`.
  struct ChainSequence2<A:Sequence,B:Sequence> : Sequence
    where A.Iterator.Element == B.Iterator.Element {
    // elided
  }

  /// `ChainCollection2(a,b)` enumerates the elements of `a` then `b`.
  struct ChainCollection2<A:Collection,B:Collection> : Collection
    where A.Iterator.Element == B.Iterator.Element {
    // ^ `where` is not quite right, see below
  }

...and obviously conditional conformances will allow these to be consolidated into a single `Chain2` type with the appropriate conditional conformances (at which point the cost/benefit will, for me, also tip in favor of adding conditional conformances to `BidirectionalCollection` and `RandomAccessCollection`).
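
Concretely, I'd expect the consolidated version to look something like this (just my sketch, details elided as before):

  /// `Chain2(a,b)` enumerates the elements of `a` then `b`.
  struct Chain2<A:Sequence, B:Sequence> : Sequence
    where A.Iterator.Element == B.Iterator.Element {
    // elided
  }
  
  extension Chain2: Collection where A: Collection, B: Collection {
    // elided
  }
  
  extension Chain2: BidirectionalCollection
    where A: BidirectionalCollection, B: BidirectionalCollection {
    // elided
  }
  
  extension Chain2: RandomAccessCollection
    where A: RandomAccessCollection, B: RandomAccessCollection {
    // elided
  }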

On paper -- that is, theoretically -- the lack of overlapping conformances leaves a few aesthetic issues; for example, at present `ChainCollection2` actually has to be one of these:

  // "narrower" option: not all `A`, `B` can necessarily be used together:
  struct ChainCollection2<A:Collection,B:Collection> : Collection
    where
    A.Iterator.Element == B.Iterator.Element,
    A.IndexDistance == B.IndexDistance {
    typealias IndexDistance = A.IndexDistance
  }

  // "wasteful" option: theoretically in some cases we are "overpaying" and
  // using a stronger `IndexDistance`, but now we can use any `A` and `B`
  struct ChainCollection2<A:Collection,B:Collection> : Collection
    where A.Iterator.Element == B.Iterator.Element {
    typealias IndexDistance = IntMax
  }

With overlapping conditional conformances you could have both: one conformance that uses the base collections' `IndexDistance` when possible, and another that uses `IntMax` when necessary...but without overlapping conformances it's necessary to choose between the "narrower" approach and the "wasteful" approach (preserving the status quo).
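
In code, "having both" would look something like this -- again not allowed under the proposal, so this is only to make the trade-off concrete:

  // The "wasteful" fallback conformance:
  extension Chain2: Collection where A: Collection, B: Collection {
    typealias IndexDistance = IntMax
  }
  
  // The "narrower" overlapping conformance, preferred whenever it applies:
  extension Chain2: Collection
    where A: Collection, B: Collection, A.IndexDistance == B.IndexDistance {
    typealias IndexDistance = A.IndexDistance
  }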

If you're following along, I'm sure you're aware that in this specific case the "choice" is purely academic (or purely aesthetic): if you go with the `IntMax` route, the result is almost always going to land somewhere between "no actual difference" and "no measurable difference", so even if it *maybe* feels a bit icky, the right thing to do is get over it and stop making a mountain out of a molehill.

Note that I'm well aware that you can choose to see this as a concrete instance of a more-general problem -- that the lack of overlapping conformances would potentially leave a lot of performance on the table due to forcing similar decisions (and in contexts where there *would* be a real difference!) -- but speaking personally I couldn't find very much in my "chores pending availability of conditional conformance" that both (a) fell into this category and (b) had more than "aesthetic" implications.

This brings me to the last 5% -- the handful of things for which overlapping conformances would have nontrivial benefits -- and my conclusion here is that these tend to be things I doubt are of general interest.

An example here is that I like to use a function that takes two sequences and enumerates their "Cartesian product", with the following adjustments:

- no specific enumeration *ordering* is guaranteed
- does something useful even with infinite, one-shot sequences...
- ...meaning specifically that it will eventually visit any given pair (even when one or both inputs are infinite and/or one-shot)

...(useful for doing unit tests, mostly), which -- to be done "optimally" while also dotting all the i's and crossing all the t's -- would currently require at least 8 concrete types:

- 4 sequences, like e.g.:
  - UnorderedProductSS2<A:Sequence, B:Sequence>
  - UnorderedProductSC2<A:Sequence, B:Collection>
  - UnorderedProductCS2<A:Collection, B:Sequence>
  - UnorderedProductCC2<A:Collection, B:Collection>
- 4 iterators (one for each of the above)

...since you need to use a different iteration strategy for each (yes, you don't strictly *need* 8 types, but I'm trying to "dot all the i's and cross all the t's" here).

In theory overlapping conditional conformances could be used to cut that down to only 5 types:

- 1 type like `UnorderedProduct<A:Sequence,B:Sequence>`
- the same 4 iterators from before, each used with the appropriate conformance

...which *is* less code *and* seemingly provides nontrivial gains (the `SS` variant must maintain buffers of the items it's already seen from each underlying sequence, but the others have no such requirement).
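
For concreteness, the 5-type version would look roughly like this (overlapping conditional conformances, so again not valid under the proposal; the iterator type names are just placeholders of mine):

  struct UnorderedProduct<A:Sequence, B:Sequence> {
    // elided
  }
  
  // `SS` strategy: must buffer previously-seen elements of *both* inputs:
  extension UnorderedProduct: Sequence {
    func makeIterator() -> UnorderedProductIteratorSS<A,B> { /* elided */ }
  }
  
  // `SC` strategy: `B` is multi-pass, so only `A` needs buffering:
  extension UnorderedProduct: Sequence where B: Collection {
    func makeIterator() -> UnorderedProductIteratorSC<A,B> { /* elided */ }
  }
  
  // ...and likewise `CS` and `CC`, each returning its own iterator type.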

But, to be honest, even if those gains are realized, this is the kind of situation I'm perfectly comfortable saying is a "niche" and neither broadly relevant to the majority of Swift developers nor broadly relevant to the majority of Swift code; if overlapping conformances were available I'd use them here, but I'm not going to ask for them just to be able to use them here.

Also, I'm skeptical these gains would be realized in practice: between the proposed "least specialized conformance wins" rule and Swift's existing dispatch rules in generic contexts, it seems like even if overlapping conformances *were* allowed and I *did* use them, I'd still wind up getting dispatched to the pessimal `SS` variant in many cases for which I'd have been hoping for one of the more-optimal versions.

So between the niche-ness of such uses -- and their being 5% or less of what I was hoping to do -- and my skepticism about how dispatch would pan out in practice, I can't get behind fighting for overlapping conformances at this time -- unless, that is, banning them now would mean banning them permanently.

As already stated, I do think that their absence *will* reveal some *true* pain points, but I think it makes sense to adopt a "wait-and-see" approach here, since some more-targeted solution could well turn out to be enough to address the majority of those future pain points.

These are my more-detailed thoughts after looking at what I was planning to do with conditional conformances once they became available. I realize this doesn't touch on every conceivable scenario or every conceivable use, but I want to reiterate that I went into this review expecting to find a bunch of things I could use to justify why Swift absolutely should have overlapping conditional conformances right now...but on actually looking through my plans, I couldn't find anything for which I genuinely felt that way.

  - Doug


Below I’ve provided a more fleshed out version of what Dave is suggesting, for anyone who had trouble parsing the very hypothetical example. It reflects the kind of implementation specialization I would expect to see in the standard library. In fact we have exactly this concern baked into value witness tables today by differentiating the various *Buffer methods so that they can be no-ops or memcpys for trivial values (rather than requiring code to loop over all the elements and call potentially-no-op methods on each element).

But the value witness table’s approach to this problem suggests some healthier solutions to the problem (for Swift’s particular set of constraints):

1) Add default-implemented fullAlgorithm() methods to the protocol. Anyone who can beat the default algorithm provides their own implementation. Consumers of the protocol then dispatch to fullAlgorithm(), rather than the lower-level primitives.

2) Add "let hasInterestingProperty: Bool" flags to the protocol. Consumers of the protocol can then branch on these flags to choose a "slow generic" or a "fast specific" implementation. (This is, after all, exactly what we're asking the runtime to do for us!) Rough sketches of both approaches follow.
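
For instance -- with made-up protocol and type names, purely illustrative:

// (1) The full algorithm is itself a (defaulted) requirement, so anyone who
// can beat the generic version just overrides it:
protocol Summed {
  var values: [Int] { get }
  func sum() -> Int          // the "fullAlgorithm()"
}

extension Summed {
  // Slow generic default.
  func sum() -> Int { return values.reduce(0, +) }
}

struct RunOfOnes: Summed {
  let count: Int
  var values: [Int] { return Array(repeating: 1, count: count) }
  // Beats the default: O(1) instead of O(n).
  func sum() -> Int { return count }
}

// (2) A "hasInterestingProperty" flag that consumers branch on (here a static
// property rather than a stored let, but the idea is the same):
protocol DeepCopyable {
  static var isTriviallyCopyable: Bool { get }
  func deepCopy() -> Self
}

func copyAll<T: DeepCopyable>(_ values: [T]) -> [T] {
  // Branch on the flag: "fast specific" vs. "slow generic".
  return T.isTriviallyCopyable ? values : values.map { $0.deepCopy() }
}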

Of course, (1) and (2) aren't always applicable solutions. Both only really apply if you're the original creator of the protocol; otherwise no one will know about fullAlgorithm or hasInterestingProperty and be able to override the default. It can also be really tedious to provide your own implementation of fullAlgorithm(), especially if everyone overrides it in the same way. These are, however, perfectly reasonable approaches if you're just trying to specialize for a small, closed set of types. Something like:

genericImpl()
stringImpl()
intImpl()

You can handle that pretty easily with extensions or super-protocols, I think.
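
For example (again with made-up names, just to show the shape):

protocol Describing {
  func describe() -> String
}

// genericImpl(): the catch-all default.
extension Describing {
  func describe() -> String { return "something: \(self)" }
}

// stringImpl() / intImpl(): direct conformances for the small, closed set
// of types we actually want to specialize.
extension String: Describing {
  func describe() -> String { return "a string: \(self)" }
}

extension Int: Describing {
  func describe() -> String { return "an int: \(self)" }
}

// Anything else just picks up the generic default:
struct SomethingElse: Describing { }

func report<T: Describing>(_ value: T) -> String {
  // Dispatches through the protocol requirement, so String and Int get their
  // specialized implementations even from a generic context.
  return value.describe()
}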

I'm cautiously optimistic we can get pretty far before we really feel the need to introduce specialization like this. That said, I'm used to handling this issue in a world of monomorphic generics, so I'm not sure whether the performance characteristics of polymorphic generics will shift the balance and make specialization more urgent. Or perhaps the opposite -- the runtime impact of specialization could be too high!

// Some kind of "super copy" operation
public protocol Clone {
  func clone() -> Self
}

// Can just memcpy instead of calling Clone
public protocol TrivialClone: Clone { }

// A terrible data structure
public struct FakeArray<T> { let vals: (T, T, T) }

// --------------------------------------------------
// A dirty hack to get overlapping impls (specifically specialization)
// through overlapping extensions.

internal protocol CloneImpl {
  associatedtype TT: Clone
}

extension CloneImpl {
  static func clone(input: FakeArray<TT>) -> FakeArray<TT> {
    // Have to manually invoke generic `clone` on each element
    return FakeArray(vals: (input.vals.0.clone(),
                            input.vals.1.clone(),
                            input.vals.2.clone()))
  }
}

extension CloneImpl where TT: TrivialClone {
  static func clone(input: FakeArray<TT>) -> FakeArray<TT> {
    // Can just copy the whole buffer at once (ideally a memcpy)
    return FakeArray(vals: input.vals)
  }
}

// Inject our specialized Clone impl
// (doesn't compile today because this is a conditional conformance)
extension FakeArray: Clone where T: Clone {
  // A dummy to get our overlapping extensions
  // (doesn't compile today because we can't nest types in a generic type)
  struct CloneImplProvider : CloneImpl {
    typealias TT = T
  }
  
  func clone() -> FakeArray {
    return CloneImplProvider.clone(input: self)
  }
}

// -------------------------------------------------
// Using Clone and the specialization

// Some plain-old-data
struct POD : TrivialClone {
  func clone() -> POD { return self }
}

// Works with any Clone type
func generic<T: Clone>(_ value: T) -> T {
  return value.clone()
}

// Pass in a FakeArray that should use the fast specialization for Clone
generic(FakeArray(vals: (POD(), POD(), POD())))

···

On Sep 30, 2016, at 11:18 PM, Dave Abrahams via swift-evolution <swift-evolution@swift.org> wrote:

on Fri Sep 30 2016, Matthew Johnson <swift-evolution@swift.org> wrote:

It’s a valid concern, and I’m sure it does come up in practice. Let’s create a small, self-contained example:

protocol P {
  func f()
}

protocol Q: P { }

struct X<T> { let t: T }

extension X: P where T: P {
  func f() {
    /* general but slow */
  }
}

extension X where T: Q {
  func f() {
    /* fast because it takes advantage of T: Q */
  }
}

struct IsQ: Q {
  func f() { }
}

func generic<U: P>(_ value: U) {
  value.f()
}

generic(X(t: IsQ()))  // i.e., X<IsQ>

We’d like for the call to “value.f()” to get the fast version of f()
from the second extension, but the proposal doesn’t do that: the
conformance to P is “locked in” to the first extension.

I suppose that's true even if the second extension introduces X : Q?

If we assume that we can see all of the potential implementations of
“f” to which we might want to dispatch, we could implement some
dynamic scheme that tries to pick the most specialized one. Of
course, as with overlapping conformances in general, this selection
process could result in ambiguities.

This is what I suspected. I’ll defer to Dave A on how big a concern
this is, but it seems to me like a bit of a slippery slope towards
sub-optimal performance.

Well, it's unfortunate. We have a similar problem today due to the lack
of conditional conformance, and we deal with it by injecting an
underscored requirement where it doesn't belong, then dispatch through
that. I wonder if the workaround for this limitation is like that, or
something completely different.

Does this work? If not, why not? If so, what makes it fundamentally
different, since it is trying to express the same thing through
different means?

-----

public protocol P {
  func f()
}

public protocol Q: P { }

public struct X<T> { let t: T }

internal protocol XFImpl {
  associatedtype TT: P
  static func f(_ x: X<TT>)
}

extension XFImpl {
  static func f(_ x: X<TT>) { /* general but slow */ }
}

extension XFImpl where TT: Q {
  static func f(_ x: X<TT>) { /* fast because it takes advantage of T: Q */ }
}

extension X: P where T: P {
  struct FImpl: XFImpl {
    typealias TT = T
  }

  func f() {
    FImpl.f(self)
  }
}

struct IsQ: Q {
  func f() { }
}

func generic<U: P>(_ value: U) {
  value.f()
}

generic(X(t: IsQ()))  // i.e., X<IsQ>

--
-Dave


Thanks, Alexis.

Doug and I talked this morning and fortunately I think we came up with
something rational that will support the kind of specialization we want
without any hoop-jumping. It basically involves precomputing the
overload lattice and filling witness tables dynamically based on the
conformances that are statically visible at the moment they are
instantiated. We'll post more detail soon.

···

on Mon Oct 03 2016, Alexis <abeingessner-AT-apple.com> wrote:

Below I’ve provided a more fleshed out version of what Dave is
suggesting, for anyone who had trouble parsing the very hypothetical
example. It reflects the kind of implementation specialization I would
expect to see in the standard library.

--
-Dave