Pitch: Allow Protocols to be Nested in Non-Generic Contexts

Understood. But I also think we could disallow a nested protocol from satisfying an associatedtype requirement in Swift 5.9 or later, since nested protocols are a new feature arriving after SE-0335 was introduced.
I believe that source break would create a rather difficult situation; consider the following case.

// Swift 5
public protocol P {
    associatedtype Q
}
public struct A: P {
    // this now satisfies the associatedtype requirement
    public protocol Q {}
}

// usage as constraint
extension B: A.Q { /* ... */ }
// usage as type
let value: A.Q = /* ... */

In Swift 6, since Q is no longer usable as a type on its own, you have to change the declaration to something like this.

public struct A: P {
    public typealias Q = any _Q
    public protocol _Q {}
}

However, making such a change is API-breaking, because using A.Q as a constraint is no longer possible. While -enable-upcoming-feature ExistentialAny would expose such risks ahead of time, I also think it is worth considering avoiding this scenario from the moment nested protocols are introduced.

3 Likes

Very excited to see this being worked on! I was planning to learn a lot more about the compiler and take it on last year, but alas, life got in the way, so I'm happy to see it be picked up!

4 Likes

This pitch is good, but it should at least come with a caveat about this spelling. Outside of a function, it would be a protocol extension:

fileprivate extension Abstraction {
  var impl: ResultType { /* … */ }
}

Without nested protocol extensions, switching to the function form will be necessary, trading good spelling for good scoping. :pensive: So I don't think this pitch should be implemented without being paired with nested extensions.

I think it would be nice to allow nesting extensions within functions - it's one of the missing pieces before we're able to nest entire Swift programs within functions, which would be extremely cool (there are some things you can only do with an extension -- like adding a protocol conformance to a stdlib type).
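For example, a retroactive conformance on a standard-library type is something only an extension can express (the protocol and member names below are invented for illustration):

protocol HasTypeName {
    var typeName: String { get }
}

// Only an extension can add a conformance to a type we don't own, such as Int.
extension Int: HasTypeName {
    var typeName: String { "Int" }
}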

I think there is enough here even without support for nesting extensions. There's nothing here which prevents that being added to the language at a later date.

5 Likes

This should definitely not be possible at the function level. That would mean that the witness table grabbed by as? SomeProtocol would differ depending on the current call stack, and it wouldn't surprise me if that can't be done.

Yeah I suppose - I was thinking more of conformances to function-local protocols:

func foo() {

  protocol Nested { ... }

  // How do I make 'Int' conform to 'Nested'?
  // I can't write an extension, so I need to wrap it in a local type,
  // which is not nice.
}
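As a rough sketch of the wrapper workaround mentioned in the comments above (purely hypothetical, since function-local protocols don't exist yet; IntWrapper and useNested are made-up names):

func foo() {
    protocol Nested {
        var value: Int { get }
    }

    // Without an extension, wrap Int in a local type that conforms on its behalf.
    struct IntWrapper: Nested {
        var wrapped: Int
        var value: Int { wrapped }
    }

    func useNested<T: Nested>(_ t: T) {
        print(t.value)
    }

    useNested(IntWrapper(wrapped: 42))   // rather than passing 42 directly
}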

But yeah, conformances to other protocols could be problematic.

Apologies for not responding to this before. I agree that a nested protocol should not satisfy an associated type requirement.

Especially with the loss of bare protocol existential syntax, the language is moving towards drawing a clearer distinction between protocols (i.e. constraints) and concrete types. associatedtype is currently used to refer to concrete types, and it would be problematic in many ways for it to bind to a nested protocol.
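To make that concrete, here is a hypothetical sketch (none of this compiles today, and the names are invented) of the ambiguity that would arise if a nested protocol could bind an associatedtype:

protocol Maker {
    associatedtype Product          // expected to be bound to a concrete type
    func make() -> Product
}

struct Factory: Maker {
    protocol Product {}             // if this could witness Maker.Product...
    func make() -> Product {        // ...does this return type mean any Product,
        fatalError()                // and what would T.Product mean in generic code?
    }
}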

There have been requests in the past for "associated protocols" or something broadly to that effect.

If we ever did add some kind of associated protocol feature, it might be reasonable to use nested protocols to define them, as we currently can with associated types and nested concrete types.

1 Like

I agree with this reasoning. However, it occurs to me that this could lead to a wee bit of a problem—well, let's not call it a problem but maybe a complexity or nuance for which we need to specify the behavior in this proposal.

Suppose we have the following type, which would be allowed under your proposal:

struct S {
  protocol P { }
}

I retroactively adopt conformance to the following protocol like so:

protocol Q {
  associatedtype P = Void
}
extension S: Q { }

If the protocol S.P does not satisfy the requirement Q.P (as you and I agree), then what is the behavior here?

[Dramatic pause to allow readers to consider their answer before continuing.]


As we do not have a partitioned namespace for protocols versus types, I can see two possibilities, one not exactly satisfying for its complexity and one seemingly straightforward but actually unacceptable:

Option A:
By analogy with the case where a protocol requires a property with a different type but the same name as a property already on the concrete type, S.P shadows but does not override the required associated type, which defaults to Void. Users really don't like this sort of behavior, but there is plenty of precedent for it when it comes to protocol adoption.

In the absence of a default specified by Q, users would have to rely on associated type inference or a future formalization of the @_implements feature in order to fulfill the requirement. Otherwise, again by analogy with the case where a protocol requires a property with a different type but the same name as a property already on the concrete type, users just wouldn't be able to conform.
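A minimal sketch of what Option A would imply, purely hypothetically (neither nested protocols nor this witnessing behavior exist today):

struct S {
    protocol P {}                     // nested protocol, as allowed by the pitch
}

protocol Q {
    associatedtype P = Void
}
extension S: Q {}                     // Option A: S.P shadows Q.P; the witness defaults to Void

func witnessOfP<T: Q>(_: T.Type) -> Any.Type {
    T.P.self                          // would yield Void.self for T == S
}
// Meanwhile, S.P written in ordinary code would still name the nested protocol.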

Option B:
The compiler can just straight-up refuse to compile the retroactive conformance extension. Super straightforward.

Why even contemplate the complexities of Option A? Unless I'm mistaken, Option B breaks the following promise about what's supported under library evolution:

A new associatedtype requirement may be added (with the appropriate availability), as long as it has a default implementation. If the protocol did not have one or more associatedtype requirements before the change, then this is a binary-compatible source-breaking change.

By implication (and afaik it's in fact currently true), where a protocol already has associatedtype requirements, adding another one is a binary-compatible and source-compatible change. With lifting of the "Self or associated type" restriction on the use of existentials, I believe we're heading towards a state where the later addition of a first associatedtype requirement is source-compatible as well.
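For reference, a sketch of the kind of change that rule is meant to permit (the protocol and members are invented; the two declarations represent successive releases of the same library):

// Library, version 1
public protocol Graph {
    associatedtype Node
}

// Library, version 2: adds a requirement with a default, intended to be
// binary-compatible and source-compatible for existing conformers.
public protocol Graph {
    associatedtype Node
    associatedtype Edge = (Node, Node)
}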

However, if the compiler refused to compile whenever a nested protocol shares its name with an associated type requirement, then no protocol could add any new associatedtype requirements in a source-compatible way, even if it supplies a default.

7 Likes

And the user would be able to work around this by adding typealias P = Self.P to their extension?

No: I wouldn't think we'd do anything to change the fact that typealias P and protocol P in the same lexical scope would be (as it currently is) a redeclaration error.

But that is actually immaterial—even if it were not a redeclaration error, the typealias declaration would have no effect under Swift 6 rules. The "this" you're asking about a "work around" for is SE-0335: i.e., you're asking if there's some incantation to opt into P (the protocol) and any P (the existential type) not being distinguished from each other.

I think if we wanted to do that we'd just revoke SE-0335 and allow nested protocols to satisfy associated type requirements as existentials—we wouldn't adopt the diametrically opposite rule into the language and then add additional special-case features to "work around" it on a case-by-case basis.

2 Likes

It is interesting that we do this for properties but not associated types AFAICT:

Property:

struct S {
    var foo: String { "" }
}

protocol Q {
    var foo: Int { get }
}
extension Q {
    var foo: Int { 42 }
}

// OK
extension S: Q {}

Associated Type (with a constraint that means the existing P cannot be a witness):

struct S {
    typealias P = String
}

protocol Q {
    associatedtype P: Numeric = Int
}

// ❌ error: type 'S' does not conform to protocol 'Q'
extension S: Q {}

I think it should act like the latter. A nested protocol is not a valid witness to an associated type because it is not a concrete type, just as String cannot witness the Q.P associated type because it does not meet the Numeric constraint.

In fact, despite Q.P having a default, I couldn't find any way to implement this conformance, neither by using inference nor by using @_implements. Granted, I was testing within a single file, but I would be surprised if this behaved differently were I to split the types across different modules.

protocol Q {
    associatedtype P: Numeric = Int
    func foo(_: P)
}

// ❌ error: type 'S' does not conform to protocol 'Q'
extension S: Q {

  @_implements(Q, P)
  typealias Foo = Int

  // None of these make a difference, either.
  // @_implements(Q, foo)
  // @_implements(Q, foo(_:))
  func foo(_: Int) {}
}

Godbolt if you'd like to try for yourself.

So it seems to me that we are already not meeting the goal that adding an associated type with a default is a source-compatible change. It's definitely an issue worth solving, but if it's not an issue introduced by this feature, I would say we can consider its solution to be out of scope for this proposal.

The solution would probably be to productise @_implements. It doesn't surprise me that it is unable to disambiguate associated types as currently implemented; it is an unofficial feature and appears to be limited in its capabilities.

6 Likes

swift's extension-based API model and total lack of namespacing facilities mean there are actually very few kinds of additions that can be made to a type's API in a way that is strictly "source-compatible".

even seemingly innocuous changes, like adding an initializer to a protocol as an extension member, can cause client code to fail to typecheck or dramatically increase compilation times due to name collisions.
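for illustration, a hypothetical sketch of such a collision (the module split and member name are invented):

// ModuleA, which the client already uses:
public extension Collection {
    var middleElement: Element? {
        isEmpty ? nil : self[index(startIndex, offsetBy: count / 2)]
    }
}

// ModuleB, whose later release adds an extension member with the same name:
public extension Collection {
    var middleElement: Element? { first }
}

// In a client that imports both modules:
// let m = [1, 2, 3].middleElement   // error: ambiguous use of 'middleElement'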

in my view, the fact that adding an associated type requirement risks a name collision with another protocol nested in a retroactively-conformed type is not particularly novel or even remarkable. we have been dealing with variations of this problem for a very long time.

1 Like

Hey folks, I've kicked this off for review here starting now through August 7th. Please move any further discussion about the witness matching behavior over to the review thread.

Thank you!

Holly Borla
Review Manager

6 Likes