An Implementation Model for Rational Protocol Conformance Behavior

Yep.

> If we want to be able to make these witness choice decisions deterministically, then we need to somehow incorporate the relevant conditional conformances into the inputs to that conformance function.

If you said, “if we want to make the witness choices based on conditional conformances of X…” I'd agree. I think determinism is totally desirable, but also orthogonal. Am I missing something?

> Today, in a conformance like your example `struct X<T, U>: P`, the only input to the conformance is the conforming type `X<T, U>`, which by itself carries no conformance information about T or U, so in the most general case, there's too little information to see the conditional conformances

Agreed so far.

> without doing global lookup

When you say “without doing global lookup” I think I know what you mean: that lookup can't proceed from any data structure that is currently passed as a parameter into the context where the decision must be made at runtime. Is that right?
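In case it helps pin down what I mean, here's a minimal sketch (the names are mine, not from your example) of the situation as I understand it: the conformance is instantiated knowing only `X<T, U>`, so anything that depends on whether `T` conforms to some other protocol has to be recovered with a dynamic cast, which is precisely a query against the global conformance set.

```swift
protocol P { func strategy() -> String }
protocol Fast { }

// The only input to this conformance is the conforming type X<T, U> itself.
struct X<T, U>: P {
    func strategy() -> String {
        // Whether T happens to conform to Fast is not among the
        // conformance's inputs; the only way to discover it here is a
        // dynamic cast, i.e. a query against the global conformance set.
        if T.self is Fast.Type {
            return "fast path"
        }
        return "general path"
    }
}

struct Speedy: Fast { }
print(X<Speedy, Int>().strategy())  // "fast path"
print(X<Double, Int>().strategy())  // "general path"
```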

One of the questions I keep asking in this thread is whether there is any way of expanding the information passed into these contexts, or whether we have in fact frozen that limitation into the ABI. It seems to me that, because conformances can already differ across module contexts, the witness table for `T: P` (for any type T and protocol P) can't be assumed to be unique across the whole program, and that, so long as there is any constraint at all on the generic, witness tables themselves could serve as such a vehicle. Why couldn't that work?
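To make "witness tables as a vehicle" concrete, here's a hand-rolled analogue (all names hypothetical, and obviously nothing like the real runtime layout): a table built for `T: P` at a site where `T: Q` was also visible could carry that extra witness along, and the callee could consult it without any global lookup.

```swift
// Hand-rolled stand-ins for witness tables; purely illustrative.
struct QWitness<T> {
    var speedHint: (T) -> Int
}

struct PWitness<T> {
    var describe: (T) -> String
    // Extra slot: a Q-witness for T, filled in if T: Q was visible
    // where this table was built, nil otherwise.
    var qWitness: QWitness<T>?
}

// A "generic function" that receives the table explicitly, the way the
// ABI passes witness tables alongside type metadata.
func render<T>(_ value: T, using p: PWitness<T>) -> String {
    if let q = p.qWitness {
        return p.describe(value) + " (speed \(q.speedHint(value)))"
    }
    return p.describe(value)
}

struct Pair { var a: Int; var b: Int }

let plainTable = PWitness<Pair>(describe: { "(\($0.a), \($0.b))" }, qWitness: nil)
let richTable = PWitness<Pair>(
    describe: { "(\($0.a), \($0.b))" },
    qWitness: QWitness(speedHint: { $0.a &+ $0.b })
)
print(render(Pair(a: 1, b: 2), using: plainTable))  // "(1, 2)"
print(render(Pair(a: 1, b: 2), using: richTable))   // "(1, 2) (speed 3)"
```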

> (and thereby introducing nondeterminism due to the shared mutable nature of the global conformance set).

This part seems a little too significant to pass off in a parenthesized aside. How do you imagine this becomes nondeterministic? Are you thinking of conformances coming in from dynamically loaded shared libraries?

I've got a few problems with this approach. To begin with, I find the notion of an “optional constraint” as absurd and meaningless as that of an “optional requirement,” which you'll remember we had to cope with for interoperability reasons, but relegated to Objective-C protocols to keep them from weakening the language. These things don't appear to constrain anything. You could say, “OK, we'll pick a better name,” but I think the poor name comes from the fact that the meaning of the feature is hard to describe and understand.

Next, the approach seems like it's entirely directed at mechanics of the language implementation that are—and should remain—below the level of the programming model. To understand what this does, and why it might be needed, one has to think about conditional conformances of generic types, how witness tables are generated, etc.

Most importantly, it seems to make the programming model much worse. The situation today is that simple, valid code appears to have a straightforward meaning but doesn't. Adding the ability to complicate code with these ?-“constraints” doesn't fix that: the same simple code would have the same surprising and hard-to-describe meaning. Your proposal adds another wrinkle to all the things the authors and consumers of these generics need to consider.
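To be concrete about the kind of simple, valid code I mean (the names here are mine), consider:

```swift
protocol P { func f() -> String }
extension P { func f() -> String { "default" } }

struct X<T>: P { }

extension X where T: Equatable {
    func f() -> String { "specialized" }
}

func viaConformance<T: P>(_ value: T) -> String {
    // Dispatches through the X<Int>: P witness table.
    value.f()
}

let x = X<Int>()
print(x.f())               // "specialized": resolved from what's visible at the concrete use site
print(viaConformance(x))   // "default": the witness was fixed when the conformance was
                           // compiled, with no knowledge of the constrained extension
```

The two calls answer differently because the witness for f() is chosen when the conformance is compiled, and nothing about adding ?-“constraints” elsewhere makes this particular code any less surprising.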

The feature doesn't lead me to any obvious understanding of how to use the language to express the classic ideas of generic programming. I would certainly find ways to get the system to do what I wanted, and would probably even be able to identify some idioms and programming techniques, but I can almost guarantee that you'd loathe the results even more than you abhor the hackery of C++ type-based metaprogramming :wink:.

Lastly, as I've remarked in another thread that I can't find at the moment (but where, IIRC, you didn't challenge my assertion), I don't think advance declaration of which conformances may be relevant to witness selection should be necessary: the potential relevance of a conformance can be determined from the declarations that are visible, just as overload sets are resolved based on which declarations are visible.
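To spell out the analogy with a toy example of my own: which of two constrained overloads is called depends only on the declarations and constraints visible at the use site, and nobody has to declare up front which overloads might someday be relevant.

```swift
// Two overloads distinguished only by their generic constraints.
func advanceBy2<C: Collection>(_ i: C.Index, in c: C) -> C.Index {
    // General version: step twice.
    c.index(after: c.index(after: i))
}

func advanceBy2<C: RandomAccessCollection>(_ i: C.Index, in c: C) -> C.Index {
    // More constrained version: jump directly.
    c.index(i, offsetBy: 2)
}

let numbers = [10, 20, 30, 40]
let text = "abcd"
_ = advanceBy2(numbers.startIndex, in: numbers)  // Array is random-access: constrained overload wins
_ = advanceBy2(text.startIndex, in: text)        // String is not: only the general one applies

func fromGenericContext<C: Collection>(_ c: C) -> C.Index {
    // Only the Collection constraint is visible here, so the general
    // overload is chosen; resolution depends on what's visible, not on
    // any advance declaration of which overloads might matter.
    advanceBy2(c.startIndex, in: c)
}
```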

The problem with having to do this where Collection is declared is that one doesn't always know the shape of these “capability towers” at the point where their roots are discovered. It's very similar to the reason people want to be able to retroactively add requirements to a protocol: the set of interesting customizable operations is not known at the point where the protocol is declared.
