An Implementation Model for Rational Protocol Conformance Behavior

No, I think you've misunderstood SR-12692. It is not about Collection. It is about what the language allows a generic library author to express. Making changes to the standard library doesn't do anything to address that bug.

I don't think I understand how that's a response to what I said, or even really what you might mean by it. I do get the sense you feel criticized, so let me be clear: I'm not criticizing any exploration you might be doing. I am pointing out, since you brought up code breakage, that addressing SR-12692 by changing the semantics of existing code will definitely break some users.

We could also address SR-12692 by adding new language features, and while I'm not yet convinced it's the answer, it could avoid breaking existing code and provide some other benefits, so we should explore that direction too.

Didn't mean to sound that way. You are leading what, in my view, is a very important and much-needed discussion, with many facets to it. I'm working hard to contribute, but am pushing the limits of my own knowledge and experience. So, I'm quite seriously saying: let me know if I get too far afield.

Well said.

Do you have something particular in mind?

You're doing great, Matt, and I really appreciate your deep engagement on these topics!

Well, roughly speaking, I think maybe protocol requirements are great for what are known as the basis operations of types, but that each specializable algorithm should be a first-class entity:

  • that can be defined by anyone (not just the maintainer of the least-refined protocol it operates on)
  • that can have a contiguous name apart from the collection of fragments/labels we use to call it, and
  • whose intended specializations are unambiguously marked with that name (types entirely aside).

One can think of many ways the syntax of such a thing could be realized, and I haven't given that much thought yet, but it could be something like this:

extension MutableCollection where Element: Comparable {
   @specializable("partition")
   mutating func partition(around pivot: Element) { ... }
}

extension MutableCollection & RandomAccessCollection where Element: Comparable {
   @specializing("MutableCollection.partition") // <== Note: no "around"
   mutating func partition(around pivot: Element) { ... }
}

I wish that syntax didn't make it seem like the algorithm is subsidiary to the protocols, but that's really a secondary aesthetic concern. Also you could imagine adding parameters to these attributes that help guide ambiguity resolution, which as I've noted I expect to be something that has to be handled at run time in some cases.


I take it that, when multiple versions of a specialization apply to a type, the most constrained version wins. Is that the intent? EDIT: You mention parameters as a way of dealing with ambiguity.

Would a concrete type be able to override a specialization?

Are you suggesting that the arguments of a specialization might differ among the versions of a specialization? It sounds like you might intend that, but I'm not sure...

Apologies for the multiple posts.

If I understand your intent, a specializable is declared and exists on its own. It is a first-class type. One of its characteristics is the universe of types on which it operates. That universe is described, Venn-diagram-like, by a set or sets of protocols and protocol constraints.

Is there need to develop a more robust system of type-level logical operations?

It has specializations that apply to given regions within its universe.

The specializations exist in a formal hierarchy? Or not? Not being a hierarchy is an interesting possibility. The regions of application might overlap.

The specializable would carry around multiple functions, each of which is at the heart of a specialization?

The specializable is a type containing some header information plus a collection of specializations?

Like any other type, it is visible only in a defined scope.

Dispatching to a specialization of a specializable would be very similar to dispatching to a static method on any other type?
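
Purely to make those questions concrete, here is one way such an entity might be modeled as an ordinary type (every name below is invented for illustration; this is a sketch of the questions, not a proposal):

struct Specializable<Args, Result> {
    struct Specialization {
        let applies: (Args) -> Bool   // the region of the type universe it covers
        let rank: Int                 // more constrained = higher rank
        let body: (Args) -> Result
    }

    // "Header information plus a collection of specializations."
    let name: String
    var specializations: [Specialization]

    // Dispatch resembles calling a static method: run the most
    // constrained specialization whose region contains the argument.
    // (A real design would need a policy for ties/overlaps.)
    func callAsFunction(_ args: Args) -> Result {
        specializations
            .filter { $0.applies(args) }
            .max { $0.rank < $1.rank }!
            .body(args)
    }
}

In this toy framing, regions may overlap and are not forced into a hierarchy; ties are exactly where the ambiguity-resolution parameters mentioned earlier would come in.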

Unless I'm misunderstanding SR-12692, it's related to what has long been my #1 frustration with Swift:

When trying to write reusable and efficient code, at some point, I'll re-realize that I have to choose between reusable or efficient code, because AFAICS Swift's current generics and protocols won't let me have it both ways (except in very simple cases).

  • To make my code/library efficient, I'll sooner or later have to give up code reuse and repeat/generate code for every set of types/constraints at every level of the call hierarchies I wish to support, and the users of my code/library will also have to do the same (if they want to use my code to write efficient algorithms that work for multiple types or constraints), etc. Sometimes, for specific cases of this problem, you can find various complicated workarounds, but I've never found or heard of any such workarounds that are general or nice enough to seem practical.

  • To make my code/library reusable/generic, I'll sooner or later have to give up efficiency. (Because my specialized, efficient implementations won't be truly reusable without code repetition.)

I made an attempt to describe the same thing here.

Not a problem, but if you don't mind I'd rather not try to design the feature in this thread. It's just a sketch of an idea at this point, and if, for example, @Douglas_Gregor has firm plans to address all these things within the existing language, it might be a waste of time to work on it at all.

Good call.

Turning back to identifying and discussing behavior that might be regarded as difficult to reason about and/or unexpected, thus far I see the following broad categories:

  • Quirks of Conditional Conformance in a Protocol Hierarchy. Working with a hierarchy of protocols, if a conforming type is more tightly constrained with respect to a more-specialized protocol and less tightly constrained with respect to a less-specialized protocol, we often see the system selecting the less-specialized conformance. When writing a program, it often is difficult to foresee that this behavior will result. We have not yet determined whether the behavior is intended in some, all, or no respects.
  • Conflicting Imported Conformances. When two modules that each provide their own public conformance of a type to a protocol are imported into the same file, the code compiles and runs with indeterminate behavior. We have confirmed that this behavior is not intended, and should raise an error.
  • Leaking Imported Conformances. When a public conformance of a type to a protocol declared in Module A is imported into one file of Module B, the conformance is visible in other files of Module B. We have not yet determined whether this behavior is intended.
  • Quirks of Class/Subclass Interactions with a Protocol. We have not yet addressed the behavior that results when a class conforms to a protocol and its subclass seeks to provide its own implementation of a requirement of the protocol.
  • Differing Conformance Behavior Between Generic and Non-Generic Contexts. We have not yet addressed some of the confusion that arises when essentially the same code exhibits differing conformance behavior upon moving from a non-generic to a generic context (a minimal example follows this list). Whether or not this behavior is fully intended, it certainly adds to the difficulty of building one's mental model of protocol conformance.
  • Other. Investigation is continuing.
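
To make the generic/non-generic category concrete, here is a minimal sketch (invented names; it demonstrates the known dispatch rules rather than newly observed behavior):

protocol Greeter {}
extension Greeter {
    var greeting: String { "Hello from the protocol extension" }
}

struct English: Greeter {
    // Not a protocol requirement, so generic code constrained to Greeter cannot see it.
    var greeting: String { "Hello from English" }
}

func greet<T: Greeter>(_ t: T) -> String { t.greeting }

print(English().greeting) // "Hello from English" -- non-generic (concrete) context
print(greet(English()))   // "Hello from the protocol extension" -- generic context

Essentially the same member access changes meaning with the context, which is exactly the mental-model burden described above.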

Thoughts, clarifications, corrections, more apt terminology, and additional examples would be appreciated. Would it be helpful to annotate this list with links to examples and discussions?


Thank you for posting this summary! I haven't had time to follow the discussion closely but I am very happy to see it happening.

I would find this very useful, especially if it was kept up to date as the discussion continues.


I'm nervous about getting involved here because there are gaps in my understanding.

Two things:

  1. My high-level read on this whole thread is that Quirks of Conditional Conformance in a Protocol Hierarchy covers a lot of what the thread is talking about. The other four (five counting Other) are mentioned but not central. (Not central to this thread, I mean; they are central to other things.) It might be good to focus on that one and analyze it more. It's big enough.

  2. I think there are two drivers coming into this: first, the desire to set up generic programs that choose more efficient behavior based on conformances, and second, the way Swift handles methods in protocol extensions that are not in the protocol definition itself. It might be good to separate those two things either in the thread or explicitly when we talk about stuff in the thread. They're related but sort of independent.

The generic conformances thing, which I sort of think of as the 'bidirectional collection' thing (you want to pick the more-efficient last method on a bidirectional collection, but maybe some other means for a collection that does not conform to BidirectionalCollection), seems to be one thing that needs some focus.
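
To make the 'bidirectional collection' thing concrete, here is a sketch of the two-implementation pattern (using a hypothetical lastElement so as not to collide with the standard library's real last):

extension Collection {
    // O(n): the best a forward-only collection can do.
    var lastElement: Element? {
        var result: Element? = nil
        for element in self { result = element }
        return result
    }
}

extension BidirectionalCollection {
    // O(1): step back once from endIndex.
    var lastElement: Element? {
        isEmpty ? nil : self[index(before: endIndex)]
    }
}

// The rub: code constrained only to Collection statically picks the O(n)
// version, even when the concrete argument is bidirectional.
func lastOf<C: Collection>(_ c: C) -> C.Element? { c.lastElement }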

The protocol extension thing, where the same call will dispatch either dynamically or statically depending on whether the member is declared in the protocol definition or only in an extension, causes a lot of confusion. But the rule for it is clear, once you know it.
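
A minimal side-by-side of that rule, with invented names:

protocol Speaker {
    func speak() -> String                // declared in the protocol: a customization point
}
extension Speaker {
    func speak() -> String { "default" }  // default implementation of the requirement
    func shout() -> String { "DEFAULT" }  // extension-only: not a customization point
}

struct Dog: Speaker {
    func speak() -> String { "woof" }
    func shout() -> String { "WOOF" }
}

let speaker: Speaker = Dog()
print(speaker.speak()) // "woof"    -- requirement: dispatched dynamically to Dog's implementation
print(speaker.shout()) // "DEFAULT" -- non-requirement: dispatched statically to the extension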

I guess my thoughts are that we should focus this down to the protocol conformance thing (the bidirectional collection thing I meant above) with respect to the Quirks of Conditional Conformance in a Protocol Hierarchy thing. I have some sense that the more-specific conformance selection system might be where the complications are, and that I would go through the compiler source right there to figure out exactly what the reality is. I have not had time to do that myself so far.

Hope that helps. Like I said I have been watching the thread and learning stuff, but also trying to see what the kernel of this thread is.

@dfsweeney, thanks for these thoughts.

You are right that the first two items on the list have predominated and that others deserve some attention.

You also are right that there are multiple axes at work. You point to whether a method is a default implementation of a requirement vs. extra functionality. Some other axes that may or may not make a difference include (1) generic vs. non-generic, (2) existential vs. concrete, (3) imported vs. local, (4) hierarchical vs. flat, (5) conditional vs. unconditional, etc. These axes produce a multi-dimensional space about which the programmer must reason.

Certainly, you are right that getting Collection when one wants BidirectionalCollection is a commonly reported unexpected behavior.

If there is an update to this list, I'd be tempted to recharacterize the list as "Discussion Points" and add the following items:

  • Need for Documentation (!!!)
  • Multi-Dimensionality of the Model
  • Standard Library as a Case Study of the Complexity of the System

@Douglas_Gregor, for your consideration sometime after the WWDC crush ends:

Regarding conformance of a type to a progressively refined protocol hierarchy, it appears that the Protocol Witness Table Pattern for a conditional conformance to a "child" protocol disregards the default implementations declared by that protocol.

For the sake of clarity, here is a minimal example:

protocol P { var id: Int { get } }
extension P { var id: Int { 1 } }

protocol Q: P {}
extension Q { var id: Int { 2 } }

struct X<T>: P {}
extension X: Q where T: Equatable {}

func callId<T: Q>(_ t: T) -> Int { t.id } // dispatches to `P.id` even though T: Q

print( callId( X<Int>() ) ) // "1", not "2"

In the resultant binary, the Protocol Witness Table Pattern for X: Q does not point to Q.id as the witness for the id requirement. AFAICS, the table pattern for X: Q solely points to the protocol witness table for X: P.

excerpts from binary
_$s4main1XVyxGAA1QAASQRzlWp:  
    // protocol witness table pattern 
    // for <A where A: Swift.Equatable> main.X<A> : main.Q in main
0000000100002030         db  0xe4 ; '.'  << first word points to Q protocol descriptor
0000000100002031         db  0x1e ; '.'
0000000100002032         db  0x00 ; '.'
0000000100002033         db  0x00 ; '.'
0000000100002034         db  0x01 ; '.'
0000000100002035         db  0x00 ; '.'
0000000100002036         db  0x00 ; '.'
0000000100002037         db  0x00 ; '.'
0000000100002038         db  0x20 ; ' '  << second word points to PWT for X: P
0000000100002039         db  0x20 ; ' '
000000010000203a         db  0x00 ; '.'
000000010000203b         db  0x00 ; '.'
000000010000203c         db  0x01 ; '.'
000000010000203d         db  0x00 ; '.'
000000010000203e         db  0x00 ; '.'
000000010000203f         db  0x00 ; '.'
_$s4main1XVyxGAA1PAAWP:        
    // protocol witness table for main.X<A> : main.P in main
0000000100002020         db  0x18 ; '.'  << points to protocol conf descriptor for X: P
0000000100002021         db  0x1e ; '.'
0000000100002022         db  0x00 ; '.'
0000000100002023         db  0x00 ; '.'
0000000100002024         db  0x01 ; '.'
0000000100002025         db  0x00 ; '.'
0000000100002026         db  0x00 ; '.'
0000000100002027         db  0x00 ; '.'
0000000100002028         db  0x60 ; '`'  << points to P.id.getter witness
0000000100002029         db  0x19 ; '.'
000000010000202a         db  0x00 ; '.'
000000010000202b         db  0x00 ; '.'
000000010000202c         db  0x01 ; '.'
000000010000202d         db  0x00 ; '.'
000000010000202e         db  0x00 ; '.'
000000010000202f         db  0x00 ; '.'

When calling into the Q-constrained generic context of callId<T: Q>(_ t: T) -> Int, the assembly code correctly selects and passes an instantiated X: Q Protocol Witness Table. But, since the table doesn't point at the Q.id witness, the call to id is dispatched to the P.id witness.

excerpt from assembly code
                     _main:
push       rbp
mov        rbp, rsp
sub        rsp, 0x20
mov        rax, qword [_$sSiN_100002008]      ; _$sSiN_100002008
mov        dword [rbp+var_4], edi
mov        rdi, rax                           ; argument #1 for method X.init
mov        qword [rbp+var_10], rsi
call       _$s4main1XVACyxGycfC               ; main.X.init() -> main.X<A>
xor        ecx, ecx
mov        edi, ecx                           ; argument #1 for method _$s4main1XVySiGMa
call       _$s4main1XVySiGMa                  ; type metadata accessor for main.X<Swift.Int>
mov        qword [rbp+var_18], rax
mov        qword [rbp+var_20], rdx
call       _$s4main1XVySiGACyxGAA1QAASQRzlWl   ; lazy protocol witness table accessor
                                               ; for type X<Int> and conformance 
                                               ; <A where A: Equatable> X<A> : main.Q

mov        rsi, qword [rbp+var_18]             ; argument #2 for method main.callId
mov        rdx, rax                            ; argument #3 for method main.callId
call       _$s4main6callIdySixAA1QRzlF         ; callId<A where A: main.Q>(A) -> Int
xor        ecx, ecx
mov        qword [_$s4main6resultSivp], rax    ; _$s4main6resultSivp
mov        eax, ecx
add        rsp, 0x20
pop        rbp
ret

Yes, the X: Q conformance is conditional, but since the table is used only in contexts in which the condition is satisfied, the table should (?) point to Q's implementations.

Am I misunderstanding?

Hi Matt,

AFAICT this code is almost the same as my earlier example. My id2 (which corresponds to callId in your example) would be on an extension of Q rather than of P. So your example effectively points out that the P requirement id is satisfied the same way for the X<T>: P witness table as for the X<T>: Q witness table. I find that mostly unsurprising given the (always re-surprising!) fact that X<T> can only conform to P "in one way." It's also nearly the same as your earlier example, except that there your P requirement a() (which corresponds to id) is accessed through a different P requirement b(), rather than through a non-requirement (callId/id2).

Also, all of these semantics seem consistent with what @Jens pointed out about the original proposals: we were warned. It's only since I realized this had serious negative consequences for generic programming that I've started hearing @Douglas_Gregor suggest that it could be changed. However, there are two very open questions still, AFAICT:

  • Is Doug actually talking about the same things we are? So far his description has been in very implementation-centric terms that don't map clearly onto a programming model.
  • Can we change the language semantics and call it a bug fix, even if the language implements exactly or nearly what was proposed and accepted?

Surely WWDC pressure explains why we don't have answers yet; I look forward to Doug's re-engagement after the conference is over.


Ok. This is awful, wonderful, ironic, annoying and hopeful. Awful: my mental model of the system is broken. Wonderful: my eyes have been opened, now, thanks to patient feedback from you and several others. Ironic: my mental model was learned from experimenting with Sequence, Collection, et al. Annoying: it is remarkable how much isn't written down. Hopeful: mental models can be repaired.

After very carefully re-reading SE-0143 and, especially, the parts pointed to by @Jens, I can say that @Douglas_Gregor certainly used some very precise language ("When satisfying a protocol requirement, Swift chooses the most specific member that can be used given the constraints of the conformance") to explain a very specific example ((X1<X2>() as P).f()). Originally, I'd understood, "given the constraints of the conformance," to refer to the constraints imposed at the point of use of the type. I also understood that (X1<X2>() as Q).f() (note the Q rather than P) would yield the specialized implementation of f(), as defined on Q. As I'll explain in a moment, there exists good precedent for that (mis-)interpretation.

BTW, hats off to the compiler engineers. After digging into the guts of the implementation, I am amazed by the level of complexity required to model the semantics, by the precision with which they do so, and by the enormous amounts of care and attention that must go into making all the parts work together correctly. I have a great deal of respect for them and what they do.

"In One Way"

So, about the in-one-way rule: exactly how is it defined? It sounds simple enough, but, in practice, it appears more nuanced. In the context of my example, I see at least three different ways that it could be applied:

Sequentially conforming to `P` and then to `Q`.

X<T> conforms to P, as follows:

  1. P requires that a conforming type have a var id: Int { get }.
  2. X<T> is unconditionally conformed to P.
  3. X<T> itself does not implement P's id requirement, and so it needs to rely upon a default implementation of id.
  4. In the context of X<T> conforming to P, the only default implementation available is P.id, and so, through that implementation, X<T> satisfies the requirement.

X<T> conditionally conforms to Q, as follows:

  1. Q: P.
  2. Q has no requirements of its own other than that the conforming type must conform to P.
  3. The conformance to P already is satisfied, and so X<T> conforms to Q.
  4. Since the Q.id implementation is not used by X<T> to conform to Q, it merely "shadows" P.id.

OR

Holistically conforming to both `P` and `Q` in the most specialized way possible.

X<T> conforms to both P and Q, as follows:

  1. The conformance to P is unconditional.
  2. The conformance to Q is conditional.
  3. The only requirement is the P.id requirement.
  4. X<T> itself does not implement P's id requirement, and so it needs to rely upon a default implementation of id.
  5. Both P and Q have default implementations of id.
  6. We would prefer the Q.id implementation, as the more specialized implementation of id.
  7. But Q.id is available only when the condition for the Q conformance is satisfied, so Q.id is not always available to satisfy the id requirement.
  8. Since X<T> can conform to P only in one way, the sometimes-unavailable Q.id implementation cannot be used to satisfy the requirement.
  9. The P.id default implementation is available at all times, and can satisfy the requirement, so it is used by X<T> to conform to P.
  10. Since Q has no requirements of its own other than conformance to P, X<T> conforms to Q by virtue of its having satisfied the requirements of P in the manner specified here. In other words, for purposes of conforming to Q, X<T> does not independently redetermine whether there is a better way to conform to P under the state that exists when the condition to the conditional conformance is satisfied.

OR

Independently conforming to each of `P` and `Q`.

As described above, X<T> satisfies the requirements of P via the default implementation provided by P.id. Then, independently, the best possible conformance of X<T> to Q is determined, with the conformance to P effectively being re-evaluated on the basis that the condition to the Q conformance is assumed to be true. On that basis, Q.id would be selected as the implementation that satisfies the id requirement for purposes of Q.

I'm not sure there are on-point discussions or examples in SE-0143 that clearly specify the intent. And, AFAICT, the formal documentation of protocol conformance does not address this aspect of the matter.

It seems that what I characterize as the "holistic" conformance method is closest to what the compiler actually does and what those well-versed in the system understand it to be doing.

If the possibility of revising conformance behavior is put on the table, I'd suggest consideration of whether the "independent" conformance method (A) is better aligned with what intuition would provide and (B) might help to facilitate providing more specialized conformances at the ultimate point of use.

The Collection Protocol Irony

The irony is that (unless I am mistaken) the "expected" behavior of P, Q and X<T> in my example in the post above is the behavior that is actually provided today by Swift via the canonical example of Collection, BidirectionalCollection and custom collection types.

I first learned about conditional conformance and the power of a protocol hierarchy by progressively conforming types to Sequence, Collection/MutableCollection, BidirectionalCollection, et al. I observed and internalized their behavior.

Of particular relevance, I saw and appreciated that, as a type was progressively and conditionally conformed through the steps of the hierarchy, certain methods gained functionality and/or improved in efficiency. Take our standby example, Collection's distance(from:to:) requirement: its default implementation goes from forward-only traversal on Collection to bidirectional traversal on BidirectionalCollection. That enhanced capability is available both on concrete instances and in appropriately-constrained generic contexts.

Take for example:

A walk through a custom type conformed to `Collection`, and its protocol conformance behavior.
// Create a type.
public struct Things<T> {
  private var _elements: [T]
  init(_ elements: T...) { _elements = elements }
}

// Extend it to conform to MutableCollection with just a few keystrokes.
extension Things: MutableCollection {
  public var startIndex: Int { 0 }
  public var endIndex: Int { _elements.count }
  public func index(after i: Int) -> Int { _elements.index(after: i) }
  public subscript(position: Int) -> T {
    get { _elements[position] }
    set { _elements[position] = newValue }
  }
}

// Use Collection's algorithms, like its distance(from:to:) requirement.
let fourStrings = Things("A", "B", "C", "D")
let d1 = fourStrings.distance(from: 0, to: 3) // 3

// The default implementation of distance(from:to:) declared by Collection is restricted to forwards-only traversal.
let d2 = fourStrings.distance(from: 3, to: 0) // TRAP: Can't go backwards

// Extend it to conform to BidirectionalCollection.
extension Things: BidirectionalCollection where T: StringProtocol {
  public func index(before i: Int) -> Int { _elements.index(before: i) }
}

// Now, distance(from:to:) works backwards.  BidirectionalCollection declares a better default implementation in one of its extensions.
//let d2 = fourStrings.distance(from: 3, to: 0) // -3

// We can use it in a generic context.
// And we still get that cool bidirectional implementation of distance(from:to:).
func distanceFromEndToStart<T: BidirectionalCollection>(from t: T) -> Int {
  t.distance(from: t.endIndex, to: t.startIndex)
}
let d3 = distanceFromEndToStart(from: fourStrings) // -4

// But, wait. The method distance(from:to:) is a requirement of 
// Collection, and the conformance of Things to BidirectionalCollection 
// is conditional, and since Things<T> can conform in "only 
// one way" to Collection's requirement of distance(from:to:), 
// that one way should be the unconditional Collection.distance(from:to:)
// implementation.  But, Collection and BidirectionalCollection 
// aren't playing by that rule.

My example in the post above mirrors this sort of usage of the Collection hierarchy:
X<T> = a generic type ready to be turned into a collection
P = Collection
Q = BidirectionalCollection
id = distance(from:to:)

I posit that what people intended with conditional conformance is the behavior exhibited by distance(from:to:). In a generic context constrained to BidirectionalCollection, when one calls distance(from:to:), they get the more-specialized BidirectionalCollection implementation. By contrast, in my example in the previous post, in a generic context constrained to Q, when one calls id, they get the less-specialized P implementation.

Miscellaneous

some specific responses that may not be of general interest

Plagiarism.

Precisely. I posit that, by not specializing the PWT for X<T>:Q, the system may be missing an opportunity. In my (totally incomplete and ill-informed) look at the implementation details, it seems that Q.id could be noted and used in the X<T>:Q PWT without any impact on the runtime and with little impact on binary size and compile times. I must be wrong, but until someone shows me why, I'll keep asking about it.

At the end of the day, it doesn't get you all the way to being able to have a P.id2 default implementation that can dynamically dispatch to P.id or Q.id depending upon the underlying concrete type, but it gets you pretty far down that road.

I simplified it (taking your suggestion), and then moved it into a generic function to highlight that the context is constrained to Q.


There are many threads going on here, so I'll redirect to where folks have uncovered more. The override/@_nonoverride behavior for protocols is there to approximate a model where one gets the most-specialized witness at the point where all types have been made concrete (even if that's at runtime).

Doug


Hi Doug,

I took some care to craft questions for you in this thread that would help move the conversation forward. I understand of course that WWDC prevented timely answers, but seeing your post here I'm wondering how to proceed. Do you want to catch up with this thread and discover the questions, or shall I re-post them, and if they should be re-posted, would you prefer to follow up in the other thread?

[EDIT: FWIW, I've been thinking of the other thread as a free-wheeling place to explore details. It seems like it has multiple different ideas being discussed. I'd like to suggest keeping the questions raised by Dave, here, in this thread.]

Thanks, Doug.

On a separate note, FWIW, I've been writing up a semantic-level explanation of protocol conformance. Out of that explanation, I am starting to see the makings of a mental model that might helpfully encapsulate the workings of the system, as it exists today. To be continued...

At a technical implementation level, I'm wondering about the plusses and minuses of slightly revising the logic for selecting witnesses for protocol witness tables. Take our running example involving P, Q: P, X<T>: P, and X: Q where T: Equatable. The "slight" change would be, when the PWT for X: Q is constructed, it is recognized that Q has an id implementation and that that implementation happens to correspond to a requirement of a protocol from which Q inherits, namely P. Based on that recognition, the Q.id implementation is inserted into the PWT for X: Q. As is already the case in the current system, when entering a context in which a given X<T> actually does conform to Q, the Q witness table will be selected. Thus, when a call to id is made in that context, the call would be dispatched to the Q.id witness. If generalized, does that approach seem feasible, technically? Am I failing to think of obvious other cases where that approach creates trouble? (Leaving aside the fundamental issue of source breakage.)

Thanks, again.

Yes, it's the same thing as in the example with P, Q, and X<T>. I think expanding the example is useful to illustrate the "conforms in one way" bit, e.g., add

struct NotEquatable { }

print( callId( X<NotEquatable>() ) ) // "1"

When the conformance of X<T>: P is established, there is only a single definition of id that is guaranteed to work for any T: the one on the extension of P. So that one gets picked, and will be the same regardless of what T ends up being substituted with. The conformances for any X<T> to P behave uniformly.

Y'all have found override and @_nonoverride. One can use @_nonoverride to introduce a new requirement in a more-refined protocol, which then selects a witness on its terms. Let's extend the example to introduce a non-overriding id requirement in Q.

protocol P { var id: Int { get } }
extension P { var id: Int { 1 } }

protocol Q: P {
  @_nonoverride var id: Int { get } // introduces a unique requirement `id`
}
extension Q { var id: Int { 2 } }

struct X<T>: P {}
extension X: Q where T: Equatable {}

func callIdAsP<T: P>(_ t: T) -> Int { t.id } // dispatches through `P.id`
func callIdAsQ<T: Q>(_ t: T) -> Int { t.id } // dispatches through `Q.id`

print( callIdAsP( X<Int>() ) ) // "1"
print( callIdAsQ( X<Int>() ) ) // "2"

In the conformance of X<T>: P, the id defined in the extension of P is the only viable witness that works for any T. Hence, the requirement P.id is satisfied by the id defined in the extension of P. That's the same answer we had in the prior example.

In the conformance of X<T>: Q, both the id defined in the extension of P and the id defined in the extension of Q can be used to satisfy the requirement Q.id. The id defined in the extension of Q is more specific, so that is used to satisfy the requirement Q.id.

These two choices are made independently.

Now, to the uses. In callIdAsP, the only way to type-check t.id is to go through the requirement P.id, so we get "1" in the call given X<Int>.

In callIdAsQ, the expression t.id can either call through the requirement P.id or the requirement Q.id. Overload resolution picks Q.id, because it is more specific, so we get "2" in the call given X<Int>.

We can change the semantics. That might need to be staged in with a version bump (e.g., Swift 6), but I wouldn't get too caught up with the "when" part of this. Figure out what the model should be.

The model I've been imagining would collect the set of potential witnesses at the point where the conformance was declared, but delay the selection of the "best" witness to satisfy a given requirement until enough information is available---whether that happens at compile time or at run time.

For example, rewind back to the conformance X<T>: P. The id defined in the extension of P is a viable witness that works for any T. The id defined in the extension of Q is potentially viable; it depends on whether the actual type substituted for T conforms to Equatable and, if so, it is more specific than the id defined in the extension of P. Hence, we would need to delay the decision of the "best" witness until we have a concrete type substituted in for T. For T=Int, Int conforms to Equatable so the id defined in the extension of Q will be selected as the witness. For T=NotEquatable, the id defined in the extension of P will be selected as the witness.
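
To caricature that delayed decision for X<T>: P in executable form (all names below are invented for illustration; this sketches the model, not compiler machinery):

struct WitnessCandidate {
    let name: String              // "P.id" or "Q.id"
    let requiresEquatableT: Bool  // the conditional constraint, if any
    let specificity: Int          // Q.id is more specific than P.id
}

// The candidates are collected where the conformance X<T>: P is declared...
let candidates = [
    WitnessCandidate(name: "P.id", requiresEquatableT: false, specificity: 0),
    WitnessCandidate(name: "Q.id", requiresEquatableT: true,  specificity: 1),
]

// ...but the "best" witness is selected only once T is known,
// whether that happens at compile time or at run time.
func selectWitness(tIsEquatable: Bool) -> String {
    candidates
        .filter { !$0.requiresEquatableT || tIsEquatable }
        .max { $0.specificity < $1.specificity }!
        .name
}

print(selectWitness(tIsEquatable: true))  // "Q.id" -- e.g., T = Int
print(selectWitness(tIsEquatable: false)) // "P.id" -- e.g., T = NotEquatable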

For the conformance X<T>: Q, both the id defined in the extension of P and the id defined in the extension of Q are viable, and the one in the extension of Q is better, so the one in the extension of Q is selected as the witness. There is no reason to delay the decision until concrete types are available, because the decision won't change.

All of the examples written/mentioned above would print 2 for X<Int>, regardless of how they are called. And 1 for X<NotEquatable>, of course.

In this model, @_nonoverride can probably remain a hidden relic, needed only to maintain ABI for the Standard Library where it is used.

The shape of a protocol's witness table is established by the protocol itself; each type that conforms to a protocol must provide a witness table that matches that shape. The protocol Q from the original example does not contain an entry for id; only P contains such an entry, because P declares the requirement id. What you describe matches up with the @_nonoverride trick I describe above.
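
Sketched against the binary excerpts earlier in the thread (informal labels only; these are not actual runtime symbols or layout):

// X<T>: P witness table              X<T>: Q witness table (original example)
//   [0] conformance descriptor         [0] Q protocol descriptor
//   [1] id getter -> P.id witness      [1] base conformance -> X<T>: P table
//                                          (no entry for id: Q declares none)
//
// With the @_nonoverride id requirement added to Q, the Q table would
// also gain its own entry:
//   [2] id getter -> Q.id witness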

There are 135 messages in this thread, 65 in this thread, and 67 in this thread.

Doug (just one me)


Yeah, I see why you say that this band-aid only provides "mostly" the right behavior, because it's not really reflective of the right semantics. It only happens to work out to the extent that every specializable algorithm is reimplemented for a generic type's conditional conformance, so that the instance never "escapes" into a context where a less-refined protocol is the constraint. As SR-12881 shows, that requires more care than even the standard library team can apply, and is not going to scale at all to a world where new specializable algorithms can be added retroactively—a feature we need in order to fulfill the goal of fully supporting generic programming.

The right semantics are that an algorithm defined for protocol P and specialized for protocol Q uses the Q implementation when invoked on a Q-conforming type, no matter the constraints in effect when the algorithm is invoked.
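
Restated on the running example (reusing P, Q, X, and NotEquatable as defined in Doug's post above; the function name is invented, and the comments show what should happen under these semantics, not what happens today):

func callIdAsOnlyP<T: P>(_ t: T) -> Int { t.id } // constraint mentions only P

print(callIdAsOnlyP(X<Int>()))          // should print "2": X<Int> also conforms to Q
print(callIdAsOnlyP(X<NotEquatable>())) // should print "1": no Q conformance available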

Good. If we can agree that the current semantics should change, whether we call the current behavior a bug or just a design that needs to be updated doesn't matter.

(emphasis mine)

Okay, before we go much further: from what we're doing here, @mattrips and I intend to produce reference documentation that can be used by programmers to understand the intended behavior of the language. I consider it my job in this discussion to ensure the terms we end up using to describe the semantics are in the domain of a viable programming model. I trust you to keep an eye on implementability and to translate between those terms and your implementation model. I'm not saying we can't discuss implementation details here, just that we need to draw a line that distinguishes implementation details from the programming model and end with a result that can be described entirely on the programming model side of the line.

  • Right now, “witness” is not a part of the Swift programming model. I think it could be, especially if it has a simple definition in programming-model terms, like “implementation or type used to satisfy a protocol requirement.” Do you think we need this term in the programming model, and is that a good definition?
  • I would like to keep “witness tables” out of the programming model if at all possible; my sense is that it would couple the model to the current implementation too tightly, and the implied obligation to keep the term meaning the same thing in both domains would make the programming model hard to understand. In particular…
  • I am very motivated to avoid describing the programming model in terms of anything like the decision procedure you've talked about putting in PWTs. I believe that creates an unnecessary level of complexity for users and the semantics of the system can be described much more simply, whatever the implementation. [edit: to be clear, I mean that if we talk about PWTs containing entries that may point to decision procedures, we won't end up with a good programming model. I'm actually quite certain that the logic encoded in your decision procedure is going to have to be described in one form or another as part of the programming model]
  • The “as if” rule, of course, applies to the programming model. Given the agreed-upon desire to change the semantics, and the facts of separate compilation and infinite types, some generic dispatching decisions always need to be made at run time. Therefore for the programming model, there's no point in discussing compile time dispatching decisions except as a possible optimization.

All of the examples written/mentioned above would print 2 for X<Int>, regardless of how they are called. And 1 for X<NotEquatable>, of course.

Good, that's the right behavior.

In this model, @_nonoverride can probably remain a hidden relic, needed only to maintain ABI for the Standard Library where it is used.

Also good. I still believe we want a feature to unambiguously identify intended specializations, but @_nonoverride is only in the same ballpark, not the right feature.

I understand. I'm still very interested in your reaction to the suggestion at the bottom of this post, if you have a moment to think about it.


Scoped conformances are a reasonable direction; I have quibbles with the dynamic casting approach taken in that proposal, but having the ability to keep conformances scoped within a module seems reasonable. Capping retroactive conformances at internal visibility seems surprising to me, because I don't think programmers tend to think about retroactive conformances any differently from "normal" conformances, so it would take a lot of explaining to say "ahh, for this one you need an explicit public, but over here it's implicit."

From an implementation standpoint, I'd really rather see us fix the implementation of the model that exists before extending it with more features. The type checker needs to properly diagnose ambiguous conformances when it sees them at compile time. The runtime needs to diagnose ambiguous conformances when it sees them at run time. The runtime demangle-to-type-metadata path needs to account for retroactive conformances---they're literally there in the mangling scheme in the ABI, but we ignore them---so we get consistent behavior. You can't implement scoped conformances without getting these fundamentals right.

Doug
