Could you show an example of what you mean here, Joe? It seems entirely reasonable to me that, to keep the last assertion from firing in my example, the extensions of P and the conditional conformance of Array to Equatable would all have to be visible in the scope where Array<Int> was bound to a P-constrained generic parameter, i.e. at the point of the last isEquatable call. It is the context of that scope in which I expect the lookups to happen.
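For concreteness, here's a minimal sketch of the kind of example under discussion (a reconstruction, since the original isn't quoted here): isEquatable is a requirement of P with an unconstrained default and a more specific default for Equatable conformers.
protocol P {
    var isEquatable: Bool { get }
}
extension P {
    var isEquatable: Bool { false }  // unconstrained default
}
extension P where Self: Equatable {
    var isEquatable: Bool { true }   // more specific default
}

extension Array: P {}  // the stdlib makes Array: Equatable conditionally

func isEquatable<T: P>(_ x: T) -> Bool { x.isEquatable }

assert([1].isEquatable)   // passes: the concrete context sees Self: Equatable
assert(isEquatable([1]))  // fires today: the witness for Array: P was chosen
                          // where only an unconstrained Element was known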
Having a way of declaring up front the conditional aspects of P conformance with ?: or something similar might be enough to fix that problem.
Not having the up-front declaration is obviously more work for the compiler, but it doesn't seem like a requirement. I'm not dead-set against doing that up-front declaration, but unless I'm missing something important, it's something we should decide to require or not based on what makes the best language design for users.
I think Rust's model is probably the closest to Swift's, though they do allow specialization as an experimental feature (with the drawback that it relies on global coherence of conformances and whole-program information, neither of which is really an option for Swift's compilation model). I think the way Rust handles impl conformance declarations and default implementations is a good model, though.
I'll spend some time playing with that and see what I can learn, thanks.
Thanks, I took a look. I've always assumed that because Swift tries to make protocol conformance “truly, semantically, dynamic,” it would have to (at best) pick an arbitrary conformance in some situations… But I created a simpler example so I could do some experimentation, and it seems to demonstrate that things are much worse than that: this part of the compiler is nondeterministic, if not insane.
We are well aware that the implementation is not very robust in the face of multiple conformances; there are many places in the compiler where it ought to carry conformance information forward but instead gives up and does a global lookup again. The intended semantics should be that you get the one conformance that's visible at the point of use (or an ambiguity error if more than one is visible), and that generics with protocol requirements that get instantiated with different conformances end up instantiating different types.
That's what I've always thought the intended semantics should be. It's never been clear to me that anyone else—particularly those putting intentions into code—had the same semantics in mind, though. And absent an implementation, there doesn't seem to be any mechanism for establishing an official consensus.
You aren't declaring the conformance for Array<Int>, though; you're declaring the conformance of Array for all Elements, in a context that may be independent of any other conditional conformances on Array or conditionally-eligible default implementations for P. If you declared extension Array: P where Element == Int, we would in fact pick up the most specific available methods to satisfy the requirements of P for Array<Int>.
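Reusing the hypothetical isEquatable sketch above, that difference might look like this:
extension Array: P where Element == Int {}  // instead of the unconstrained conformance

assert(isEquatable([1]))  // now passes: Self: Equatable is known where the
                          // witness is chosen, so the specialized default wins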
The Swift runtime implements the semantics I described. It just needs a suitably correct compiler to feed it the right inputs.
Responding to @Paul_Cantrell's post from another thread here, because it's really this topic IMO:
It's much worse than that, unfortunately. Whether that second extension acts as a shadow or an override depends on the concrete type it applies to:
struct X: P, Equatable { ... }                        // override
struct Y<T>: P, Equatable { ... }                     // override
struct Z<T>: P {}
extension Z: Equatable where T: SomeProtocol { ... }  // shadow!
I don't want to disagree exactly, but I'd like to draw a distinction between programs that are semantically static and those that are semantically dynamic, because that's an entirely different question from whether the program happens to be implemented using dynamic dispatching. A semantically static program is one in which, if the whole program were visible to the compiler, all dynamic dispatching based on types could be optimized away by specialization.¹
In Swift, all unoptimized generic code is implemented with dynamic dispatching, but the code is semantically static as long as no existentials or non-final classes (the basic type-erasing mechanisms) are involved² (at the point where these erased types are created, the semantically static information is captured and, as you say, “lives with the value itself”). IMO it's hugely valuable that generic code is always semantically static, and the semantics I have in mind for getting “this to make sense” preserve that property.
¹ I invented these terms AFAIK; if there are established words for this distinction, let's use those instead.
² Closures with captures can be viewed as another type erasing mechanism. I'm not overlooking those, and if you want to fold them into this discussion, we can, but it will complicate things.
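To make the distinction concrete, a small sketch (my example, not from the thread):
protocol Shape { func area() -> Double }
struct Circle: Shape {
    var r: Double
    func area() -> Double { .pi * r * r }
}

// Semantically static: with the whole program visible, T is known at every
// call site, so the dispatch can be specialized away.
func totalGeneric<T: Shape>(_ shapes: [T]) -> Double {
    shapes.reduce(0) { $0 + $1.area() }
}

// Semantically dynamic: the existential erases the type, so the concrete
// area() can't in general be resolved by specialization.
func totalErased(_ shapes: [any Shape]) -> Double {
    shapes.reduce(0) { $0 + $1.area() }
}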
Wow, I'm back here again. Sorry to keep picking at this one, but the model we have just seems really broken. This example shows how algorithms on conditionally-conforming collections can end up having the wrong complexity:
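The example isn't reproduced here, but based on the replies below it was built around a wrapper of roughly this shape (a hedged reconstruction; the name CountSteps1 comes from later posts):
struct CountSteps1<Base: Collection>: Collection {
    var base: Base
    typealias Index = Base.Index
    var startIndex: Index { base.startIndex }
    var endIndex: Index { base.endIndex }
    subscript(i: Index) -> Base.Element { base[i] }
    func index(after i: Index) -> Index { base.index(after: i) }
}

// Conditional conformance: these methods do NOT witness the Collection
// requirements (see the explanation further down).
extension CountSteps1: BidirectionalCollection, RandomAccessCollection
    where Base: RandomAccessCollection
{
    func index(before i: Index) -> Index { base.index(before: i) }
    func index(_ i: Index, offsetBy d: Int) -> Index {
        base.index(i, offsetBy: d)
    }
}

// Wrong complexity: generic code sees only Collection, whose witness for
// index(_:offsetBy:) is the default O(n) walk, even though Base (and the
// conditional extension) are random-access.
func middle<C: Collection>(_ c: C) -> C.Element {
    c[c.index(c.startIndex, offsetBy: c.count / 2)]
}
_ = middle(CountSteps1(base: Array(0..<1_000_000)))  // takes O(n) steps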
I might be asking about something obvious here, but isn't this what is explained in this and this section of SE-0143, i.e.:
As noted in the section on multiple conformances, Swift already bans programs that attempt to make the same type conform to the same protocol twice. This proposal extends the ban to cases where the conformances are conditional.
/.../
For these reasons, this proposal bans overlapping conformances entirely.
/.../
Conditional conformances may exacerbate existing problems with overloading behaving differently with concrete types vs. in a generic context. For example, consider:
/.../
This is not a new problem to Swift. We can write a similar example using a constrained extension and non-conditional conformances:
/.../
That said, the introduction of conditional conformances might increase the likelihood of these problems surprising developers.
?
If so, I keep making mistakes having to do with this, even though I understand it every time I read it. It seems like I can't truly internalize it.
It is. I just never quite realized until recently how serious the consequences are for generic programming. Namely, you have to learn one or both of these lessons:
You can't reliably provide a specialized implementation of a base protocol requirement in a derived protocol extension. It will seem to work in many scenarios, but will break down in corners as described here:
Conditionally conforming a generic type to a more refined protocol than the one it already conforms to doesn't work if any of the base protocol's requirements have been specialized in the more refined protocol. Those specializations become shadowing overloads that are not used in all contexts.
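A minimal demonstration of that second lesson (my sketch, assuming the usual setup):
protocol Base { func f() -> String }
extension Base { func f() -> String { "Base default" } }

protocol Refined: Base {}
extension Refined { func f() -> String { "Refined 'override'" } }

struct S<T>: Base {}
extension S: Refined where T: Equatable {}

func callViaBase<C: Base>(_ c: C) -> String { c.f() }

let s = S<Int>()
print(s.f())           // "Refined 'override'" — concrete lookup finds the shadow
print(callViaBase(s))  // "Base default" — the witness was fixed when S: Base
                       // was declared, with T unconstrained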
Me too; I think if you do any algorithm specialization, mistakes are unavoidable (unless you resort to beautiful/horrifying hacks like the one described by @Nevin here, which don't currently optimize well). My take on it is:
Protocol extensions as they exist today are an inappropriate mechanism for specialization of public protocols in libraries.
You can use them for requirement specialization privately if you're very careful about the lessons cited above
To avoid changing the semantics of existing code, fixing this will require:
New language features to describe requirement specializations explicitly
Warnings to be issued whenever something in a protocol extension could match a requirement from a less-refined protocol.
An annotation that you can add to such a protocol extension to denote, “yes, that's OK, I intended to shadow that requirement and I know it's not going to provide an implementation of the requirement”
Fortunately I have been advised that there's room in the ABI for the things needed to fix the problem, so there's hope for the future at least. I think any reasonable solution to these problems could easily also address this one.
Here's a mystery/challenge related to this problem: these lines demonstrate how a conditionally-conforming collection implementation may exhibit wrong behavior (not just wrong complexity): it actually traps. Why doesn't the same problem occur when x1 is one of the standard library's conditionally conforming types, like FlattenSequence?
// Try this definition instead.
let x1 = repeatElement(0...9, count: 10).joined()
Endless bragging rights for the dev who explains that (I don't know the answer).
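To restate the challenge runnably (a sketch reconstructing the elided lines, reusing the CountSteps1 reconstruction from earlier):
extension Collection {
    func offsetEnd(by n: Int) -> Index { index(endIndex, offsetBy: -n) }
}

_ = CountSteps1(base: Array(0..<10)).offsetEnd(by: 3)  // traps

let x1 = repeatElement(0...9, count: 10).joined()
_ = x1.offsetEnd(by: 3)  // no trap: FlattenSequence's Collection witness
                         // for index(_:offsetBy:) handles negative offsets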
The stdlib collection types implement index(_:offsetBy:) unconditionally (or at least, with no condition beyond Collection).
If you move index(_:offsetBy:) from the conditional extension into either the main body of CountSteps1 or an unconditional extension thereof, then it will work.
The reason your example crashes is twofold. First, offsetEnd(by:) is written in an extension of Collection. So within its body, all the compiler knows statically is that Self: Collection.
The call to index(_:offsetBy:) resolves to the function which satisfies the conformance of Self to the Collection protocol. And for CountSteps1 that is the default implementation from the stdlib, which traps on negative offsets.
Second, the conditional extension on CountSteps1 does not take part in its conformance to Collection, so the methods therein are not implementations of Collection requirements. (They are, however, implementations of BidirectionalCollection and RandomAccessCollection requirements.)
By moving those implementations out of the conditional extension and into an unconditional one (or the main body of the type), then they become part of the Collection conformance.
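In code, the fix Nevin describes might look like this (applied to the reconstruction above, with the copy in the conditional extension removed):
// An unconditional extension makes this the witness for the Collection
// requirement, so generic code finds it.
extension CountSteps1 {
    func index(_ i: Index, offsetBy d: Int) -> Index {
        // Forwarding is fine: if Base is random-access (like Array), its own
        // Collection witness already handles negative offsets.
        base.index(i, offsetBy: d)
    }
}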
Oh, I see now: the standard library just gives up on performance guarantees for its conditionally-conforming RandomAccessCollections. That is really sad. I never would have allowed FlattenRandomAccessCollection to be collapsed in this way until we had a way to prevent that.
The reason your example crashes is twofold…
Yeah, I know why my example crashes; I designed it to show how the existing system is broken.
[Edit: The following is all wrong; see below]
The point is that if the type (CountSteps1) were a legit wrapper over its base, it would use the base's index to implement its own, and those operations wouldn't be available to it in the main body of the type, making the problem completely unfixable short of beautiful/awful existential hacks.
I believe that is not correct. The conditionally-conforming types provide their own implementations of Collection requirements, which forward to the base collection. Therefore, if the base provides performant random-access guarantees, then so does the wrapper.
(Where possible that is—of course LazyFilterCollection can only provide Bidirectional, not RandomAccess, whereas LazyMapSequence can do both.)
If CountSteps1 were a legit wrapper over its base, it would use the base’s index to implement its own, and it would forward index(_:offsetBy:) to its base, and that operation would be available to it in the main body of the type, making the problem entirely nonexistent.
This is its implementation of index(_:offsetBy:), which is indeed only conditioned on T.Element: Collection. (The limitedBy version is a few lines down. The following applies to both, as well as distance(from:to:) and maybe some other stuff.)
Currently, its index(_:offsetBy:) method works by repeatedly stepping the index one position at a time. But it could be rewritten to “hop”, by checking the count of each outer element in turn. (The first one of course would check the distance from the starting point to whichever end is in the proper direction.)
When the outer elements are RandomAccess, this means it would not have to walk the inner elements at all. It would happily hop along the outer elements, subtracting their counts until the distance remaining to travel reaches (or crosses) zero. Then it would drop into the correct outer element, and hop to the desired position.
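A rough sketch of that hopping strategy (my own code, not the stdlib's), representing a position as an outer index plus an offset into that outer element:
func hop<Outer: Collection>(
    in outer: Outer,
    from position: (outer: Outer.Index, inner: Int),
    by n: Int
) -> (outer: Outer.Index, inner: Int) where Outer.Element: Collection {
    var remaining = position.inner + n
    var o = position.outer
    while o != outer.endIndex {
        let c = outer[o].count  // O(1) when the outer elements are RandomAccess
        if remaining < c { return (o, remaining) }
        remaining -= c          // hop over this whole outer element
        outer.formIndex(after: &o)
    }
    precondition(remaining == 0, "offset out of bounds")
    return (o, 0)
}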
The tradeoff here is that in the non-RandomAccess case it would go more steps than necessary, because it would have to find count for whichever outer element contains the destination, rather than stopping immediately when it reaches the destination.
That could probably be rectified through some cleverness, since it does know the remaining distance ahead of time.
Basically, we would want something like index(_:offsetBy:limitedBy:) that returns a tuple of (indexReached: Index, actualDistanceTraveled: Int). Or alternatively, distance(from:to:butNotMoreThan:) with a similar return type.
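The first variant might look like this (hypothetical name, and a naive forward-only body just to pin down the shape):
extension Collection {
    // Offset `i` by up to `distance`, stopping at `limit`, and report both
    // the index reached and how far we actually traveled.
    func offset(_ i: Index, by distance: Int, limitedBy limit: Index)
        -> (indexReached: Index, actualDistanceTraveled: Int)
    {
        var i = i
        var traveled = 0
        while traveled < distance, i != limit {
            formIndex(after: &i)
            traveled += 1
        }
        return (i, traveled)
    }
}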
This really needs some elaboration, because this is an area where implementation constraints have a significant impact on what is possible in the language design. The current implementation model is that a protocol conformance is described by a "witness table", which is a mapping from each protocol requirement to its "witness", i.e., the one declaration that satisfies that particular requirement. With generic types, that witness has to be one that works for any generic arguments. Getting a different witness for different generic arguments, which is fundamentally what this thread is about, requires the Swift runtime to inspect the capabilities of the actual argument types dynamically and perform some decision procedure to pick the best witness. There are effectively two places where we can introduce a decision procedure that works with the current runtime:
Within the implementation of a function itself. For example, if a single function had multiple bodies (e.g., each with a different where clause), the decision procedure would run at the beginning of the witness function (see the sketch after this list).
When the witness table for a concrete type's conformance to a protocol is realized (e.g., Array<Int> conforming to P in @dabrahams's example). Here, the decision procedure would be in some runtime hooks we have during witness table realization. We can make decisions for each protocol requirement (independently).
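For a sense of option 1, here's today's closest hand-written analogue (my sketch): a single witness whose body performs a dynamic capability check.
protocol Describable { func describe() -> String }
extension Describable {
    // One witness, two "bodies": the branch taken depends on a runtime
    // capability check, which is the kind of decision procedure described.
    func describe() -> String {
        if let c = self as? CustomStringConvertible {
            return c.description     // body for the more capable case
        }
        return "\(Self.self) value"  // fallback body
    }
}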
Now, the fact that this decision process occurs at runtime implies a few things about our design constraints:
The process should be fast. We'll have to accept some dynamic lookups (e.g., does Int conform to Equatable?), but we probably have to avoid global lookups that produce an unknown set of potential candidates to rank.
The process should never fail. Overload resolution and similar ranking metrics have ambiguities. We should either define away those ambiguities (i.e., there is always a single best answer) or we need some kind of fallback (warn and pick at random? pick what today's language picks?).
If we take the "multiple bodies with different where clauses" approach, it seems to me like we could bubble the information in those where clauses up to the conformance, so that we know statically that the conformance needs to take additional conditional arguments in order to instantiate its witness table. Then we could still pass down the Int: Equatable conformance (or lack thereof) at the point where the witness table is instantiated, instead of doing a global search.
Would you be open to thinking differently about how this can work? I've always had in mind a system where these determinations can be made at compile time (while still allowing separate compilation of course). I've always thought viewing something that's semantically static through a dynamic lens has been limiting our implementation choices, and I think there are other possibilities.
Whether we can even begin that conversation depends on whether you agree that generics should/could be semantically static. If not, the options are indeed much more limited.
I don't think it's just about viewing the problem through a "dynamic lens". The important property I think we want to maintain in the generics system is modularity. That allows us to implement generics with semantically equivalent dynamic and static implementations, but that's not the only benefit; it also allows for separate compilation and analysis, avoids introducing fundamental compiler performance problems, and minimizes side effects from adding additional packages to a system.
Of course we want modularity, motherhood, and all other good things.
I don't know why you think that's related to what I brought up. What I have in mind is modular and preserves all of the properties you listed except possibly the thing about semantically equivalent static and dynamic implementations—depending on what you mean by that—whose value I am inclined to question.
I think what I have in mind could even improve the modularity story, but it's hard to tell because you haven't been very precise about the properties you listed. It would be really helpful if you could unambiguously describe the semantics you think are crucial, ideally with examples that illustrate the differences between the results you want and those you would like to prevent, so we can evaluate both those judgements and any proposals that may arise from this thread.