[Pitch 2] Light-weight same-type requirement syntax

The common example of a generic protocol is something like Rust’s From<T>, which has been referenced several times in this and other threads. I gave an illustration of what such a type might look like in a Swift with generic protocols in an earlier post in this thread.
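As a rough sketch of the idea (hypothetical syntax, not valid Swift today; the `ConvertibleFrom`, `Celsius`, and `Fahrenheit` names are illustrative, not from any proposal):

```swift
// Hypothetical: a protocol parameterized by its source type,
// analogous to Rust's From<T>.
protocol ConvertibleFrom<Source> {
  init(_ source: Source)
}

// A single type could then conform at multiple instantiations:
// extension Celsius: ConvertibleFrom<Fahrenheit> { ... }
// extension Celsius: ConvertibleFrom<Kelvin> { ... }
```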


On the contrary, I haven’t seen a strong defense of the assertion that generic protocols are inferior to or necessitate multi-Self protocols. I challenged this earlier but was ignored.


That’s an artificially high standard. If cases like constraining collections are indeed far more common, which seems highly likely, multi-Self protocols don’t need to be better than parameterized protocols; they merely need to be good enough.

On the contrary, introducing the topic of multi-Self protocols has created an impossibly high standard for proponents of generic protocols. Instead of simply explaining their utility, perhaps by way of comparison to Rust which already has them, they are now forced to defend the introduction of an esoteric rank-N type class system, one that, again, Rust is able to hide from the language user.


This made things click for me. I wonder if this association could be made clearer somehow, maybe through naming. I don’t feel like "primary associated type" really captures this notion.


There is zero interest on the Core Team in reserving P<Int> for “generic protocols”. I can say that with confidence because we’ve had multiple conversations about it. To give you an idea of the tenor of those discussions, they’ve all turned into discussions about whether it’s acceptable Evolution procedure to just rule something out by fiat or if we formally need to go through the proposal process for a negative feature.

Arguments against using this syntax include:

  • There are other reasonable and available spellings for that concept if we decide we want it, such as (T,U) : Convertible. Thus we’re not talking about ruling the feature out completely; we’re talking about the practical consequences of using this particular spelling.
  • The generic spelling inappropriately elevates one parameter over the other when in fact there’s no relationship.
  • The second parameter would behave unlike an ordinary associated type, so it ought to be declared differently, and it certainly needs to be used differently. For example, since there is no functional dependency, Self has no unique member of that name. Using member syntax to declare the “associated type” would be misleading. Using member syntax to name it would immediately break down in any context where multiple conformances are known.
  • The generic spelling is familiar and widely used, and it would induce programmers to use the feature when in fact they ought to have a functional dependency, as they would with an ordinary protocol.
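To illustrate the functional dependency mentioned above, here is a minimal sketch in today's Swift (the `Container` and `IntBox` names are mine, chosen for illustration): with an ordinary protocol, Self uniquely determines each associated type, so member syntax like `Self.Element` is unambiguous.

```swift
// An ordinary protocol: Self functionally determines Element.
protocol Container {
    associatedtype Element
    var items: [Element] { get }
}

// A conforming type picks exactly one Element; there is no
// ambiguity about what IntBox.Element refers to.
struct IntBox: Container {
    var items: [Int]  // Element is inferred as Int
}
```

With multiple conformances to instantiations of a generic protocol, no such unique choice would exist, which is the breakdown the bullet above describes.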

Additionally, aside from the syntax, there are fundamental soundness problems with the feature in terms of the type-level computation it accidentally enables. It’s roughly analogous to an automaton going from one stack to two. If memory serves, folks in the Haskell world were exploring the research potential and expressive power of lifting some of the restrictions on type class instances when suddenly they realized that everything they were doing was already enabled by multi-parameter type classes, just with less convenient syntax. So it’s not something we should do casually.


We can certainly include a more conceptual definition of "primary associated type" in the proposal to capture this phenomenon with conforming types. As far as the terminology itself, I don't think any of us proposal authors have been able to come up with a better name for the concept, but we'd welcome other suggestions!


Quoting myself yet again:

I feel like there’s a significant disconnect between what people are asking for and what the Core Team is hearing. People are asking for a language-level abstraction over writing out a bunch of (e.g.) ConvertibleFrom* types. They’re not asking for types with multiple Selfs, and I don’t think there has been a thorough argument proving that would indeed be necessary to achieve what they’re asking for.

People aren’t asking for a new kind of associated type. They are asking for a factory for protocols. In other words, they’re asking for a type constructor T -> U -> V. Substituting real names: applying Int to ConstructibleFrom<T> yields ConstructibleFrom<Int>. ConstructibleFrom<Int>.Self has a dependency on ConstructibleFrom<T>, and thus transitively on T.
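For concreteness, this is the kind of boilerplate family people want to abstract over in today's Swift; absent generic protocols, each source type needs its own protocol. The `ConstructibleFromInt`, `ConstructibleFromString`, and `Celsius` names are illustrative, not from any real library:

```swift
// One hand-written protocol per source type.
protocol ConstructibleFromInt { init(fromInt value: Int) }
protocol ConstructibleFromString { init?(fromString value: String) }

struct Celsius: ConstructibleFromInt, ConstructibleFromString {
    var degrees: Double
    init(fromInt value: Int) { degrees = Double(value) }
    init?(fromString value: String) {
        guard let d = Double(value) else { return nil }
        degrees = d
    }
}
```

A generic protocol would collapse this family into a single declaration parameterized by the source type.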


The Self type of a protocol is a parameter to the conformance predicate. Generic protocols introduce additional parameters. That is why we talk about them as multi-parameter protocols.

The fact that the parameters would be written differently is an artificial distinction with significant consequences. Rust having an unnecessary distinction between From and Into purely to allow a contextually-typed into() member requirement to be added to the second parameter of From is a fine example of that.

When people talk about functional dependencies, they mean opaque types in the requirements having functional dependencies on other opaque types, which can then be thought of as primary opaque types / parameters. This is highly valuable for most protocols, which do not need more than one parameter, but conversions are a well-known exception. In fact, conversions are the only exception that I've ever seen, across many languages that have explored this space; if there's a pool of amazing things other than conversions waiting to be unlocked by this capability, well, it's well-hidden.

Regardless, my argument is not that we should never support functionality like this. I can certainly see the value in allowing abstraction over convertibility. I do, however, think it's quite a bit more specific to convertibility than you're admitting. More importantly, I think it would be bad to use this specific syntax for such a narrow purpose, and there are other ways to express these requirements.

It's also worth pointing out that the soundness problems with multi-parameter protocols that I alluded to come up immediately with conversions. (Again, this is very well-studied in other languages.) Abstractions over convertibility provide most of their value by deriving conversions on aggregates (optionals, arrays, etc.) from conversions on components. Those derivation rules have an irritating tendency to be overlapping and potentially unsound, and libraries often run into unanticipated limits. (For example, you cannot generally provide a transitive derivation rule; users will need to manually chain conversions.)
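A sketch of such a derivation rule in today's Swift, using a concrete per-source protocol since Swift lacks generic protocols (the `ConvertibleFromInt` name and `fromInt`/`fromInts` labels are mine): a conversion on an aggregate is derived elementwise from the conversion on its components.

```swift
// Component-level conversion.
protocol ConvertibleFromInt { init(fromInt value: Int) }

extension Double: ConvertibleFromInt {
    init(fromInt value: Int) { self.init(value) }
}

// Derived rule: an array of convertibles converts elementwise.
extension Array where Element: ConvertibleFromInt {
    init(fromInts values: [Int]) {
        self = values.map { Element(fromInt: $0) }
    }
}
```

With generic protocols, many such rules (for optionals, arrays, tuples, transitive chains) would coexist, and it is exactly their overlap that produces the soundness and ambiguity problems described above.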


Honestly, in another context, I would hardly associate such a spelling with a generic protocol, and I still have no intuition for how exactly this should work.
I don't think it makes sense to use the obvious syntax for something different and then come up with yet another spelling later.

Wait… exactly that was presented as a major goal of the pitch, wasn't it??

Guessing what people actually want is tough business ;-) — so I consider the arguments based on that very weak. However, I can say that this is exactly what I'd be asking for:
A more elegant way to declare ProtocolXForInt, ProtocolXForString, ProtocolXForFoo… and imo protocol ProtocolX<T> would nail it.

Sidenote: even when it's already decided that generic protocols will never be added, I don't think the concept of primary associated types as it is presented has a good tradeoff (see [Pitch 2] Light-weight same-type requirement syntax - #100 by Karl).


It's also worth noting that Swift generics are already undecidable as specified due to the combination of same-type requirements in protocols and recursive conformance constraints. The new formulation in terms of a rewrite system carves out a decidable subset of the language by imposing a limit on the total size of the rewrite system constructed by the completion procedure, but it's hard to get an intuitive understanding of what this actually means in terms of the constraints the user is able to write. Adding more "advanced" features such as MPTCs and generic associated types will further complicate the theoretical and intuitive model of Swift generics in a way that is probably not desirable.


I don’t think that applies to the kinds of protocol factories people are asking for. Wouldn’t that be more like a function which took all the non-Self parameters and produced a protocol which accepted Self as a parameter?

Quoting myself from upthread:

Perhaps not coincidentally, collision handling is also a common illustration of the power of multi-methods. There’s probably an analogy hiding in here, not just between multi-methods and multi-Self protocols but in specializations of both which language designers have taken as practical optimizations.

I would find it very strange not to use angle bracket syntax for such a feature. But I believe you made a point earlier that associated types are not necessarily the dual of type parameters, so perhaps there’s room for both types of parameters in the angle bracket syntax.


The "protocol factory" is a different way of looking at the same amount of expressivity. In the same way that you can go from a "curried" representation of a function that takes its first argument, then produces a function that takes the second argument like Int -> String -> Float, to a function that takes its arguments all at once like (Int, String) -> Float, you can look at a protocol factory as a function that you pass the generic arguments into to get a concrete protocol, which you then pass in a conforming Self type to get to a specific conformance. This is ultimately as expressive as having a model where all of the arguments are provided to the protocol at once.
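The currying analogy above can be shown directly at the value level (a minimal sketch; the closures and their bodies are mine, chosen only to demonstrate the equivalence):

```swift
// Curried form: Int -> String -> Float.
let curried: (Int) -> (String) -> Float = { x in
    { s in Float(x + s.count) }
}

// Uncurried form: (Int, String) -> Float.
let uncurried: (Int, String) -> Float = { x, s in
    Float(x + s.count)
}
```

The two forms are interconvertible and equally expressive, which is the point being made about "protocol factories" versus multi-parameter conformance predicates.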


This is effectively just currying the conformance predicate. Can you explain why you think that this is a fundamental difference?

It’s the difference between (T x Self) -> Conformance and (T, Self) -> Conformance. In the first, T and Self are peers; in the second, T is “superior” to Self because it can be curried out.

I bring this up because so much of the pushback against generic protocols has been about the “unprivileged” nature of Self. It just doesn’t seem “unprivileged” to me when seen through this lens. Within a protocol definition, Self is known to refer to the conformance identified by the type argument(s) as much as the conformance identified by what precedes the colon.

I understand the argument is deeper than that, especially with respect to associated types. I am not discounting that argument, because I don’t yet completely understand it. But that’s not where the initial opposition seemed to come from.

I'm not sure that's true. The primary place you "apply" protocol requirements in Swift is in generic constraints. If we had multi-parameter type classes, I would think we'd have syntax that lets you constrain any or all arguments no matter what surface syntax you use; you would be able to write both <T> where Foo(T, Int) and where Foo(Int, T). Since we allow protocol constraints to be applied directly to type arguments, the generic protocol syntax gives just a bit of sugar to one direction, since you could write just <T: Foo<Int>> but would have to write out <T> where Int: Foo<T>, but you can also "curry" either direction.

But Swift doesn’t have multi-parameter typeclasses. So why does this syntax factor into the discussion? The fact of the matter is that every Swift protocol conformance has a well-known Self type, and I’m not sure anyone has suggested changing that fact except as part of a potential generalization of the more restricted feature people are actually requesting.

In other words, could we not keep the expressive power of <T> where T: Foo<Int> to that which can be expressed today by <T> where T: FooFromInt? Or is there something absolutely fundamental to how associated types would behave in such a regime that necessitates bringing in the whole multi-parameter typeclass idea?

If you imposed some heavy restrictions on the "generic protocol" syntax, such as prohibiting generic protocols from declaring associated types and insisting that the generic parameter is always instantiated with a concrete type (so FooProtocol<Int> would be okay, but not FooProtocol<T>) then it can probably desugar to something equivalent to today's system of non-generic protocols. As soon as you allow slightly more generality, then you open a pandora's box of type-level computation and implementation complexity.
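Under those restrictions, a possible desugaring might look like the following sketch in today's Swift (the `FooProtocolForInt` and `Wrapper` names are hypothetical): a use of `FooProtocol<Int>` would lower to an ordinary, independently named protocol with the parameter baked into its requirements.

```swift
// What a restricted FooProtocol<Int> could desugar to: a plain
// protocol whose requirements mention Int concretely.
protocol FooProtocolForInt {
    init(converting value: Int)
}

struct Wrapper: FooProtocolForInt {
    var raw: Int
    init(converting value: Int) { raw = value }
}
```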

OK, I think this is where the knowledge gap lives. I know it’s been referenced previously, but I don’t think it’s been conclusively illustrated how associated types turn a relatively simple request (shorthand for generating protocols) into something too complex to implement.

It's no longer quite so simple once you allow the generic protocol to be parametrized by another type parameter (Foo<T>) and not just a concrete type (Foo<Int>). But even the latter interpretation runs afoul of some pretty fundamental assumptions in the generics system.

For better or worse, today if a type parameter T conforms to two protocols P and Q that both define an associated type named A, we introduce an implicit same-type requirement T.[P]A == T.[Q]A, with the reasoning being that a concrete type that conforms to both P and Q will use the same type witness for both instances of A. This no longer makes sense if P and Q are two instantiations of a "generic protocol" where you want A to depend on the type parameter of the protocol, for example this becomes ambiguous:

protocol Foo<T> {
  associatedtype A where A == Array<T>
}

struct Bar : Foo<Int> {}
extension Bar : Foo<Double> {}

What is Bar.A?