Allowing self-conformance for protocols

The magic bit of _openExistential is that, if you tried to write out its declaration, you’d end up with something like:

func _openExistential<Existential, Result>(
    _ existential: Existential,
    do fn: <Opened: Existential>(Opened) -> Result
) -> Result

The problem is that, although this signature might (might!) make sense to a human Swift programmer, to the compiler it is utter gibberish that doesn’t even parse correctly. Since Existential is a generic parameter, Opened: Existential is not valid to write, and the type of a parameter (or of any variable/value) can’t be generic anyway. And yet special cases have been hacked into the compiler to make _openExistential behave like it has a signature like this.
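Call sites, at least, do parse, because the compiler special-cases them. Here is a minimal sketch of the call shape — hedged, since `_openExistential` is an unsupported, underscored API whose behavior may change without notice:

```swift
// _openExistential is an unsupported, underscored stdlib intrinsic;
// this sketch only shows the call shape, which may change without notice.
func describeDynamicType(_ value: Any) -> String {
    // A generic helper: inside, T is bound to the *dynamic* type
    // stored in `value`, not to Any.
    func opened<T>(_ concrete: T) -> String {
        String(describing: T.self)
    }
    // The compiler special-cases this call so that `opened` is invoked
    // with T bound to the runtime type inside the existential.
    return _openExistential(value, do: opened)
}
```

So `describeDynamicType(42 as Any)` reports the dynamic type `Int`, even though the static type of the argument is `Any`.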

(The fact that _openExistential’s type is so weird is also why the type checker tends to say vague things like “type of expression is ambiguous without more context” instead of telling you what’s actually wrong. _openExistential’s type checking failures look different from failures of types you can actually express in the language, and little effort has been put into polishing them.)


_openExistential is a good puzzle and helped me understand some things. I don't want to derail the self-conformance thread though. Thank you all for pushing my understanding (and hopefully everybody else's understanding) forward one step!

It's important to distinguish two ideas when talking about self-conformance:

  1. The ability of a protocol or protocol composition type to conform to a protocol, most likely (but not necessarily) itself.
  2. The ability of such a conformance to automatically satisfy its requirements by forwarding to the underlying value.

The first is always logically possible. The second is fundamentally restricted because the operation on the underlying value may not be able to satisfy the requirements (or even the type signature) of the protocol as applied to the protocol type. For example:

  • A static method requirement cannot simply forward to the conformance for the underlying value because there is no underlying value. When a static method of some type T is called, it's passed a value of the type T.Type. For a protocol self-conformance P: P, this corresponds to the protocol metatype P.Protocol, which doesn't carry a specific conforming type, so we don't know which conformance to forward to. It obviously isn't reasonable behavior to just pick one at random.

  • An initializer requirement has the same restriction for essentially the same reason.

  • Requirements whose signature uses the Self type in an input position (e.g. as a parameter) cannot simply forward because there's no always-valid way to turn an input value of type P into an input value of the same type as the underlying value. This problem is defined away in the current language because we don't allow protocols with these requirements to be used in protocol types, but if we lift that restriction ("generalized existentials"), it'll surface here immediately.
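The static-method point can be made concrete with a small (hypothetical-names) sketch in today's Swift — `Named`, `Cat`, and `Dog` are illustrative, not from the post:

```swift
// Sketch of the static-requirement problem: which conformance would a
// self-conforming protocol type forward to?
protocol Named {
    static var name: String { get }
}
struct Cat: Named { static var name: String { "Cat" } }
struct Dog: Named { static var name: String { "Dog" } }

// Works for any concrete conforming type:
func describe<T: Named>(_: T.Type) -> String { T.name }

// describe(Cat.self) and describe(Dog.self) are fine, but if Named
// conformed to itself, describe(Named.self) would have to produce *some*
// name with no underlying value to consult -- there is nothing sensible
// to forward to.
```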

So, if we allow self-conformance but inherently tie it to the ability to forward implementations, we're actually only allowing self-conformance for protocols where all the members satisfy these restrictions. And maybe that's okay, but it becomes a permanent constraint on those protocols: once they declare self-conformance, they can never add a requirement that violates these restrictions without irrevocably breaking clients that were relying on self-conformance.

So I think the right design direction here is to pursue fully-general conformances of protocol types to protocols, and then say that conformances on protocols have extra defaulting powers. But that would start with just allowing people to add members to protocols that are actually members of the protocol type rather than being implicitly added to all conforming types.


Thanks for teasing apart these two concepts, John—I agree it's good to think about them independently.

This doesn't seem more problematic than the situation we have today, where adding Self and associatedtype requirements breaks any clients that were formerly using the protocol as an existential. (Unless, perhaps, it's more common for authors to add static/init requirements to an existing protocol than it is to add Self/associatedtype requirements, in which case making their introduction a source-breaking change is more problematic.)

In some ways, self-conformance even seems less problematic than the status quo—if we require the self-conformance to be marked explicitly, then the author cannot break clients silently by adding members. E.g., if in FooKit v1.0 I have:

// Straw syntax
@selfconforming
protocol Foo {
  func frobnicate()
}

and I try to update Foo in v2.0 to

@selfconforming
protocol Foo {
  func frobnicate()
  static func frobnicateStatically()
}

I would get a compile error.

OTOH, in Swift today, adding a Self/associatedtype requirement to a protocol P won't necessarily break my own module (since I might only be using P as a generic constraint anyway), but once I ship, clients who were using P as an existential are suddenly broken!
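As a sketch of that status-quo break (the library and its types here are hypothetical):

```swift
// v1.0 of a hypothetical library:
protocol Animal {
    func speak() -> String
}
struct Parrot: Animal {
    func speak() -> String { "Squawk" }
}
// Client code -- fine while Animal has no Self/associatedtype requirements:
let pets: [Animal] = [Parrot()]

// If v2.0 adds `func clone() -> Self` to Animal, the `[Animal]` line above
// becomes a compile error in client code (at least on compilers without
// SE-0309-style generalized existential support), with no change on the
// client's side.
```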

If the fully-generalized "custom conformances for existentials" is considered the natural next step of self-conformance, it doesn't strike me as obvious that just allowing self-conformance for protocols is a bad resting place, aside from the fact that we would want to choose a syntax for declaring the conformance that could be extended to cover the general case as well.

(I have minor concerns about the general protocol-existentials-conform-to-protocols direction as well, but if the discussion is going to take that direction it's probably worth starting another thread for the general feature.)

It's a fair point that protocols have this evolution problem today because of the restrictions on protocol types. On the other hand, that's something we specifically want to solve by generalizing existentials, not something we want to double down on.

We do also have this evolution restriction on @objc protocols, although there it's also tied to the ObjC class model, which doesn't support adding new methods through defaulting.

I agree that the syntax for declaring the conformance seems to be the main thing informed by the general picture.


This may be a fantasy because it's quite source-breaking, but would it make sense for clients to have to opt in to — let's call them — "meta-type requirements" (static, init, Self)?

The some keyword could be used to require that the type must be known at compile time:

func takeASpecificAnimal<T: some Animal>(_ animal: T) {
    // access to T.someStaticMethod()
}

func takeASpecificAnimal<T: Animal>(_ animal: T) {
    // no access to T.someStaticMethod() 
}

Essentially, without the some keyword the client would only get access to a subset of the protocol requirements (i.e. the non-meta-type requirements) and adding requirements later on wouldn't break clients that were relying on (partial) self-conformance.
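Today's Swift already draws roughly this line between generic and existential access, which is what the proposed split would formalize. A sketch with hypothetical names:

```swift
protocol Pet {
    static func someStaticMethod() -> String
    func sound() -> String
}
struct Hamster: Pet {
    static func someStaticMethod() -> String { "static Hamster" }
    func sound() -> String { "Squeak" }
}

// Generic: T is a statically known concrete type, so the "meta-type
// requirement" is reachable.
func generic<T: Pet>(_ pet: T) -> String {
    T.someStaticMethod()
}

// Existential: only the instance requirements are reachable through the
// box; there is no statically known T on which to call someStaticMethod().
func existential(_ pet: Pet) -> String {
    pet.sound()
}
```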

Couldn't the solution be even more flexible, by simply requiring (and allowing) the user to implement any static methods/inits that may be required when declaring the conformance?

protocol Foo: Self { // I would prefer this syntax
    func frobnicate()
    // Error: "Static methods on self-conforming protocols need to have an implementation"
    static func frobnicateStatically() 
}

I think it should be the other way around, to preserve source compatibility:

func takeASpecificAnimal<T: Animal>(_ animal: T) {
    // access to T.someStaticMethod()
    // T cannot be existential container
}

func takeASpecificAnimal<T: any Animal>(_ animal: T) {
    // no access to T.someStaticMethod()
    // T can be existential container
}

Where T: any Animal means T is a type which is a subtype of (or should it be implicitly castable to?) the existential container for Animal (aka any Animal or Any<Animal>).

Yes, but reading Improving the UI of generics and the linked discussion thread, it looks like the any syntax might also be source breaking eventually (with a deprecation period):

I'm not aware of any strong reasons to make this change source-breaking. We could still promote P to any P when there is no ambiguity in the role of a protocol. And require any only when there is need to disambiguate.

But regardless of the source compatibility, my main point was to give kudos to the idea. I think it solves the original problem of using existential containers with generics better than self-conformance.

protocol Q {}

protocol P {
    associatedtype AT: Q
    func a()
    static func b()
    func c() -> AT
    func d(_ x: AT)
}

func f<T: any P>(...) { ... }

is equivalent to the following code:

protocol Q {}

protocol _P {
    func _a()
    func _c() -> any Q
}

protocol P: _P {
    associatedtype AT: Q
    func a()
    static func b()
    func c() -> AT
    func d(_ x: AT)
}

extension P {
    func _a() { self.a() }
    func _c() -> any Q { return self.c() }
}

extension (any P): _P {
    func _a() {
        let <T: P> zelf = self
        zelf.a()
    }
    func _c() -> any Q {
        let <T: P> zelf = self
        return zelf.c()
    }
}

func f<T: _P>(...) { ... }

It is more powerful, as it allows the protocol in question to have static requirements and associated types. And it will not break when new members are added to the protocol.

On the other hand, such equivalence shows that we don't strictly need a new kind of generic constraint if we have fully-generalized "custom conformances for existentials". But that's a lot of boilerplate code to write, and understanding what code needs to be written actually requires a deep understanding of protocols, existential containers, and generics.
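For what it's worth, that boilerplate can be approximated by hand in today's Swift with an erasing wrapper that exposes only the forwardable requirements — a hedged sketch mirroring the `_P` sub-protocol above (`AnyP` and its members are illustrative names):

```swift
protocol Q {}
protocol P {
    associatedtype AT: Q
    func a()
    static func b()
    func c() -> AT
}

// Hand-written erasure: a box exposing only the requirements that can be
// forwarded to the underlying value (no statics, no associated-type
// precision -- AT is erased to its constraint Q).
struct AnyP {
    let _a: () -> Void
    let _c: () -> Q
    init<T: P>(_ base: T) {
        _a = { base.a() }
        _c = { base.c() }   // the concrete AT converts to the existential Q
    }
}
```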

If the caller knows the type statically, that's just generics.

The idea of using something like some P as a shorthand for declaring a generic function, and making it easier to call generics with values of existential and existential-metatype type, is something that I know @Joe_Groff has thought about a lot.
