Ergonomics: generic types conforming "in more than one way"

I keep getting bitten by the restriction that generic types can only conform to a protocol in one way, and I think this happens to me because the language allows me to say things that are almost, but not quite, true.

I'm not making the classic faux pas: if I have X<T> : P, I'm well aware that I'm not allowed to write a conditional extension of X and change the way it conforms to P, so I don't even try to do that.

Instead, this comes up when I have constrained extensions of P that provide default implementations. Here's an unrealistically-reduced example (I'm actually trying to do something useful):

protocol P {
    static var isEquatable: Bool { get }
}

extension P {
    static var isEquatable: Bool { false }
}

extension P where Self : Equatable {
    static var isEquatable: Bool { true }
}

This code appears to say that if my P-conforming type is Equatable, it will, by default, have isEquatable == true. Almost anything I can do seems to confirm that idea.

func isEquatable<T: P>(_ t: T.Type) -> Bool { t.isEquatable }

extension Int : P {}
struct X : P {}
extension Array : P {}

// X is not equatable
assert(!X.isEquatable && !isEquatable(X.self))
// Int is equatable
assert(Int.isEquatable && isEquatable(Int.self))

// Arrays of non-equatable types are not equatable
assert(!Array<X>.isEquatable && !isEquatable(Array<X>.self))
// Arrays of equatable types are equatable
assert(Array<Int>.isEquatable)

But, I've misled myself.

// That property is not being used to satisfy the requirement.
assert(isEquatable(Array<Int>.self)) // Boom
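To see why the direct call and the generic call disagree, here's a minimal self-contained sketch (restating the declarations above): the direct call resolves the overload at the call site, where Array<Int> is concretely known to be Equatable, while the generic call goes through a witness chosen once, when the conformance was declared.

```swift
protocol P {
    static var isEquatable: Bool { get }
}
extension P {
    static var isEquatable: Bool { false }
}
extension P where Self: Equatable {
    static var isEquatable: Bool { true }
}

func isEquatable<T: P>(_ t: T.Type) -> Bool { t.isEquatable }

extension Array: P {}

// Direct call: overload resolution at the call site sees the concrete
// type Array<Int>, which is Equatable, so the constrained overload wins.
print(Array<Int>.isEquatable)        // true

// Generic call: dispatches through the witness table. The witness was
// chosen when `extension Array : P {}` was compiled, where Element is
// unconstrained, so only the unconditional default was eligible.
print(isEquatable(Array<Int>.self))  // false
```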

In fact, it's really hard to explain what that simple bit of code at the top actually means, and for me that's a red flag. Try it yourself, and keep in mind that this works just fine, so it's not just about conditional conformances:

struct Y<T> {}
extension Y : P, Equatable where T: Equatable {
    static func ==(_: Self, _: Self) -> Bool { true }
}

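For contrast, here's a self-contained sketch of why the Y version behaves as expected (reusing the P declarations from the top of the post): Y's conformance to P is declared in the same constrained extension, so Self: Equatable is already known when the witness is picked.

```swift
protocol P {
    static var isEquatable: Bool { get }
}
extension P {
    static var isEquatable: Bool { false }
}
extension P where Self: Equatable {
    static var isEquatable: Bool { true }
}

func isEquatable<T: P>(_ t: T.Type) -> Bool { t.isEquatable }

struct Y<T> {}

// Y's conformance to P is declared where T: Equatable, so the
// `Self: Equatable` default is the witness for the requirement.
extension Y: P, Equatable where T: Equatable {
    static func ==(_: Self, _: Self) -> Bool { true }
}

print(isEquatable(Y<Int>.self))  // true
```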
It seems to me that we should not—in the long run—allow simple code to mean something we can't explain simply, and that means breaking some source. What are your thoughts?


I agree that this is not at all intuitive, but I can’t imagine any other way the code could behave as long as there is only one witness table entry for isEquatable in an unconditional Array: P conformance. Are you suggesting changing that and allowing a generic type to have different witness tables depending on the concrete type parameters? If not, what changes do you have in mind?


So I had to read this about 8 times before I understood what’s happening (I think?). Is the problem that Array is only conditionally equatable, so the blanket conformance of Array: P doesn’t get the conditional implementation of P where Self: Equatable?


AFAICT, that's about as good a description of the problem as one can come up with… and the difficulty of coming up with a description (much less an explanation of why things work that way!) is a big part of what I'm pointing at.

That would be almost ideal. IIUC it's totally implementable, as long as we are willing to have unspecialized witness tables dispatch dynamically to the right implementations. The efficiency of unspecialized generic code is poor enough already that I don't mind having the unspecialized array witness table entries doing whatever dynamic dispatching was needed to get the right semantics. The only downside I see from here is that it changes semantics of existing code.

Another possibility would be making it illegal to conditionally provide a default implementation of a requirement, when a default implementation of that requirement is also provided unconditionally. This both breaks source code and takes power away from the programmer, but at least it doesn't silently change semantics, and it would close this hole until such a time as we came up with a better answer.

Those are the first ideas I came up with. My instinct is that the best answers to this problem are tied in with a much-requested-but-not-satisfactorily-designed feature that would allow programmers to be more explicit about what protocol requirements they're satisfying by writing a declaration, FWIW.


It's not implementable without completely disabling generic specialization, because it makes the question of what method implements the protocol requirement a whole-program property—there may be witnesses in other separately-compiled files, or in dynamic libraries that are loaded only at runtime. The fact that unspecialized code is slow now is also not an excuse to make its code size, memory, and other performance impacts worse. A general principle in Swift is that overloading is not a solution to generic programming problems. If you want conditional logic, write conditionals in your implementation.
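For what it's worth, the "write conditionals in your implementation" approach can be sketched with a dynamic conformance check instead of an overload. This is an assumption-laden sketch, not the poster's code; it needs Swift 5.7+ for `any` existential metatypes and trades the constrained extension for a runtime cast:

```swift
protocol P {
    static var isEquatable: Bool { get }
}

extension P {
    // A single unconditional default: the conditional logic lives in the
    // body, so witness dispatch and direct calls always agree.
    static var isEquatable: Bool { Self.self is any Equatable.Type }
}

extension Int: P {}
struct X: P {}
extension Array: P {}

// The runtime check sees Array's conditional Equatable conformance:
print(Int.isEquatable)         // true
print(X.isEquatable)           // false
print(Array<Int>.isEquatable)  // true
print(Array<X>.isEquatable)    // false
```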

To avoid breaking source, we could introduce a warning if we see any candidate declarations that look like they're trying to be default implementations but fail because of unsatisfied requirements, so that when you add extension Array: P it warns about the extension P where Self : Equatable. That won't be perfect because the expected-to-be-default-implementation declarations might not be visible in the context where the conformance is declared, but seems likely to catch common attempts like this. I agree in general we should lead people away from overloading tricks that won't work with diagnostics where possible.


I guess I figured those scenarios have to be broken already; it's probably just a long-standing misunderstanding of mine, but it seems to me that what method implements a protocol requirement is currently a whole-module property, and separately compiled modules can easily introduce extensions that make a type conform to a protocol in two different ways. If you can load those modules dynamically, how can the result be well-defined? How this all works out has always been mysterious to me, so an explanation would be most appreciated.

I guess. I was just saying I'm willing to pay that price… I realize other folks (Apple) care about code that can't be specialized because of resilience, but to a first approximation, nobody else has that issue… ya pick yer battles ;-)

Yeah, see, I realize this is technically an overload, but that's not how it occurs to me when I'm writing it. I have to continually remind myself that these things can act like overloads, when what I'm really trying to do is just satisfy a single requirement in different ways. That's why, as I mentioned earlier, I think the best answer to this is tied into the “what requirement am I implementing?” feature. If there were a way to mark these functions as only being implementations of protocol requirements, I could express my true intention.

That would certainly technically address the situations at issue, but I don't know if it solves the understandability/mental-model problem. Maybe, combined with the "I'm just implementing this requirement" feature—which should make for better and clearer generic programs—it would act as an incentive to improve code.

I want to be really clear about this: I didn't think I was doing anything “tricky,” and wasn't even really conscious that I had created an overload. Mea culpa, or whatever, but I've made this mistake enough times in my years of programming with Swift that I don't think it's just me being dumb. IMO it's an expressivity gap in the generics system that I can't really say what I mean without creating unintended effects, so I think in the long run we need something more substantial than a warning.

Until then, we'd at least need a way to disable this warning. Any ideas?


We'd need an annotation to say, "I really meant to provide a tricky overload; I know that it's not meant to be a second implementation of a protocol requirement."

Probably could be rolled into what @jrose mentioned here, on a somewhat related issue about labeling protocol extension methods as not intended for default implementations (for reasons of avoiding unintentional recursion):

I had intended to have a go at implementing @jrose's solution sometime this week, but now that Covid-19 has arrived in New York, my day job is getting a tad bit spicy, so I'll circle back when things calm down a bit.

If you had this feature, wouldn't you mark both of these extensions?

extension P {
    static var isEquatable: Bool { false }
}

extension P where Self : Equatable {
    static var isEquatable: Bool { true }
}

I don't see how annotating these helps make it more clear that Array is going to get the first one whether or not Element: Equatable.

I have wanted a related feature when providing default implementations: specifically, the ability to mark a group of them as mutually exclusive.

This is usually the case when the default implementations of different requirements would be mutually recursive if both are used. I don't want to allow users to be able to use both defaults, but I do want to let them choose which requirement to implement.
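A concrete example of the mutually-recursive-defaults situation (the protocol and names here are hypothetical, invented for illustration): two requirements whose defaults are each written in terms of the other, so a conformance that used both defaults would recurse forever.

```swift
protocol Metric {
    func distance(to other: Self) -> Double
    func squaredDistance(to other: Self) -> Double
}

extension Metric {
    // Each default is written in terms of the other requirement, so a
    // conformance that relies on *both* defaults loops at runtime.
    func distance(to other: Self) -> Double {
        squaredDistance(to: other).squareRoot()
    }
    func squaredDistance(to other: Self) -> Double {
        let d = distance(to: other)
        return d * d
    }
}

struct Point: Metric {
    var x, y: Double
    // Implementing either one of the requirements breaks the cycle; the
    // language currently can't force conformers to pick at least one.
    func squaredDistance(to other: Point) -> Double {
        let dx = x - other.x, dy = y - other.y
        return dx * dx + dy * dy
    }
}

print(Point(x: 0, y: 0).distance(to: Point(x: 3, y: 4)))  // 5.0
```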

If we had a capability like that, you could express the mutual exclusivity of these "overloads" as well.

Fwiw, another related feature I've wanted is the ability to mark a "default" as final, i.e. not allow users to implement the requirement themselves. This comes up when there is a refinement relationship between protocols and a library wants a guarantee that its implementation is used by conformances to the refined protocol.


That's a different situation. Protocol conformances are distinct entities, and if you have more than one, then one of them is picked whenever a conformance is needed. So if module A declares a type to be Hashable in one way, and module B declares it also to be Hashable in another way, then code in A will use A's conformance, and code in B will use B's conformance, and if a module C uses both, it would have to pick one or the other. A's and B's conformances would however still both be made up of witnesses that were picked once at compile time. For what you want, we'd have to dynamically re-pick the witnesses for every instantiation of a generic type.

And in fact, the possibility of multiple conformances is another obstacle for implementing something like your example: In your case, Array only conditionally conforms to Equatable, and that conformance is dependent on the conformance used for the elements of the array, which is independent of the Array type itself because Equatable isn't a fundamental requirement of the type. In order to successfully dynamically instantiate the witness for P conditional on whether Array is Equatable, we would need to pass down the conditional conformance of the element type (along with any other possible protocols for which there might be conditional witnesses available), and there's no way to know up front that those additional parameters are necessary or what they are without whole program knowledge.

I totally agree, providing a protocol conformance or default implementation ought to be much more intentional now than it currently is, because the current design is not very usable.


Exactly. If I had this feature, I'd want it to DWIM, not what the language currently does :-). I'm not sure what the cost of that would be, or if it's in fact impossible; I'll have to dive into Joe's explanation and get back to y'all on the morrow.

I'll look into this, thanks. And thanks for being on the front lines of the COVID thing. Take care of yourself.

What if we had the ability to conditionally conform/extend based on the inverse of conditions?

// extend P where Self is not Equatable
extension P where Self !: Equatable {
    static var isEquatable: Bool { false }
}

Is that undesirable for other reasons or technically unimplementable?

I've often wanted to model something exactly this way, but I seem to remember someone previously explaining why this wasn't workable. Can't remember the explanation, but I'm sure someone else will jump in.

Let's have a type Foo that implements P

public protocol P { }
extension P where Self !: Equatable {
    static var isEquatable: Bool { false }
}
public struct Foo: P { }

Can I call isEquatable on Foo?


If I can call it, then what should happen if another module makes Foo conform to Equatable?
If I cannot call it, then what is this extension good for?

This isn’t much different from two different modules providing different default implementations. Module A gets A’s implementation and module B gets B’s implementation. So in this situation Module A can call it, but the other module can’t.

That said, I thought the reason for disallowing negative constraints had to do with compiler complexity rather than semantics.

All well above my head, so what follows is purely guesswork 🙂.

You could call it and, in general, it would return false.

If another module added the conformance to Equatable then my first guess would be that it would need to switch to the other implementation and return true.

If an implementation for the case where Self is Equatable wasn't provided, like in your example, then conforming to Equatable wouldn't compile without providing an implementation of the method.

I agree it feels a little strange that the behaviour would change depending on the presence of another module, however similar behaviours already exist in the language:

// Module 1
public protocol P {
    static var value: Bool { get }
}
extension P {
    public static var value: Bool { false }
}
public struct Foo: P {}
public func check<T: P>(_ p: T.Type) -> Bool { p.value }
public func checkFoo() -> Bool { Foo.value }

// Module 2
extension Foo {
    static var value: Bool { true }
}
print(Foo.value) // true
print(checkFoo()) // false
print(check(Foo.self)) // false

Interestingly, if I move the true extension to a third module, imported into module2 above, then everything goes back to false except from within that new module. If I then make it public, the Foo.value in module 2 restores to true.

If we followed the above behaviours then isEquatable would be false in the original module, true in the module that added the Equatable conformance (assuming it was implemented that way), false in any modules that only link the original module, and true in any modules that link both.

I guess the rule that is being broken here is messing with types and protocols that you don’t own which we already know is weird at best and causes compilation clashes at worst.

I believe you’ve explained why it is bad practice for a module to extend a type it doesn’t own with a conformance to a protocol it doesn’t own but this applies universally, not just with this negation-based conformance.

There's no way to know that a type does not conform to a protocol, because extensions you can't see can add a conformance. In general, the idea that types "conform to protocols" as an intrinsic property in Swift is a bit of an illusion, because conformances are independent entities from either the type or the protocol, and not necessarily unique. Negative constraints could at best help guide overload resolutions based on contextually available information, but they wouldn't be a robust solution to code that wants to have dynamically different behavior within a common declaration, and they also wouldn't address the model and implementation issues in trying to conditionally change conformances.

@dabrahams If you're able to flesh out a more concrete example of what you're trying to do, there may be another way of expressing it.
