[Pitch 2] Light-weight same-type requirement syntax

Yeah, that basically sums up my feelings about this feature too.

Meta-process note

This "rethrowing protocol conformances" feature really needs to receive a proper review and acceptance. If its current not-really-official-Swift-but-in-main status is now impeding the evolution process, that should be rectified ASAP. I don't think it's very reasonable for rethrowing conformances to constrain the design of other features until it has been reviewed and its own design finalized (or an alternative solution adopted).


A thought that just came to me:
The Swift runtime already supports a type having multiple conformances to a protocol.
Does/can the Swift runtime support looking up a conformance of a type to a protocol whose associated types are set to particular type values, e.g. "find a/the conformance of T to Numeric with Magnitude == UInt, if one exists"?
If it can, wouldn't generic protocols merely be a syntax for multiple conformance in the surface language?

The issue I raised is not just about @rethrows; it applies to any protocol conformance that has an effect.

The sample code I wrote actually presents the problem just as easily with types that do not use @rethrows as it does with that attribute.

I think what you're describing is overlapping conditional conformances:

extension Array : Equatable where Element : Equatable {}
extension Array : Equatable where Element : SomeOtherProtocol {}

This is a very complicated feature which raises a lot of questions and probably makes type checking undecidable.


I think the way we would support effects propagation through opaque types is with rethrowing protocols or similar, and annotate the opaque type, e.g. some AsyncSequence throws. If the compiler infers effects from a concrete type and propagates them, that would break the ability to change the underlying concrete type, which is an important feature of opaque return types.


Since opaque types do not expose their underlying concrete type (by definition), I think the only general solution to your problem is some kind of effect declaration on the opaque type itself. For example, if AsyncSequence is @rethrows, you want to be able to write

-> some AsyncSequence<Element, nothrow>

or something like that. Otherwise, we don't have a concrete conformance to look at (we're not allowed to) in order to determine if the witness throws or not.


I don't think it necessarily does?
An example:

protocol Into { associatedtype Other }
// bad syntax because of duplicate type aliases:
// but imagine both of these conformances exist
extension Int8: Into { typealias Other = Int16 }
extension Int8: Into { typealias Other = UInt16 }
// generic protocols would just enforce the above conformances were related in some way, I think.

func foo<T: Into>(_ value: T) where T.Other == Int16 {}

// we need not just some conformance of Int8 to Into,
// but specifically one which has `Other = Int16`
foo(8 as Int8)

Yeah I'm coming around to the idea that we may indeed want generic protocols eventually. Rust moves a lot faster than we do, so their generics system is a lot more capable than ours, and they provide some compelling examples of generic traits in practice.

I think that anybody dismissing that idea should ensure that their opinions are properly informed, by examining languages where this has actually been implemented, and using that as a basis for explaining why it is not right for Swift.

Definitively settling that discussion is a prerequisite to considering this proposal, IMO. We can't realistically adopt it if there's even a chance we might add generic protocols in the future.

EDIT: Although to be clear: I don't think this thread is the right place for a detailed discussion about whether we'll ever want generic protocols. It's just something we need to decide before this syntax becomes viable.


Don't they expose the protocol conformance? For example, if a protocol requires a function foo that throws and has a return type of String, that requirement is surfaced to the opaque type. One way of approaching that would be that the manner in which that function conforms to the subtype of the function requirement of the protocol is surfaced in the same way.

Therefore the type of the function on the opaque type mimics the type of the function on the non-opaque type. If the non-opaque type throws for a conformance, then the opaque type would throw; likewise, if the non-opaque type does not throw as a satisfaction of the protocol, then the opaque type should not throw for its satisfaction.

Doing it that way splits the problem into two parts: one, the reflection of the conformance subtyping of the functions as witnesses to the protocol; and two, the generic effects of said conformance.

The gotcha is that a some return type cannot change its effects. If it ever threw, it may not become non-throwing without breaking at the very least the API contract (perhaps the ABI), or vice versa.

But that gotcha exists no matter what. So unless we surface generic effects or some other solution in that space of determining the throwyness of a conformance, I fear that this feature won't be usable meaningfully for any AsyncSequence work.

Could we punt and change the proposed declaration syntax to protocol AsyncSequence<associatedtype Element>? If a definite decision is made not to ever implement generic protocols, the associatedtype keyword could become optional.


It's a good idea, but we'd still have two of the three places where these very different concepts would share the same syntax: the site where you declare a conformance, and the site where you use the protocol.

I don't think it's obvious that sharing that syntax would be okay, or make the system simpler overall.

I also wonder about things like associated protocols, which would be huge, and how this might work if the associated protocol were also generic. I know these things can seem a bit abstract or "too advanced", but they have real, practical uses - like saying I have a protocol TestSuite with an associated protocol TestSuite.Stubs (i.e. each suite has its own set of stubs), and maybe that protocol can be parameterised to declare the various different TestEnvironments it supports.

I don't know. It needs a lot of careful thought, and as I mentioned before, some ideas about where we want to ultimately go and where the limits are. I just don't think it's obvious that we should start mixing up the syntax like this before we have that big picture.


What you're referring to as "generic protocols" are really more like "multi-Self" protocols, where there are multiple types involved in a conformance without a functional dependency between them, in contrast to the relationship from the Self type to associated types in a protocol conformance today. Although Rust uses generics syntax for these, I don't think that's necessarily the best choice, because it implies that one type is more important to the relationship, and that is the exact opposite of what the feature means.

Generic argument syntax, on the other hand, already implies a functional dependency for non-protocol types: given any instance of Array, you can recover its Element type from that instance, since there is no value that is both an Array<Int> and an Array<String>. By analogy, any generic value using a particular conformance has only one possible binding for its associated types, so it seems appropriate for primary associated types on a protocol to be notated that way as well.

We can adopt this syntax now, and still consider other ways to express multiple-parameter conformances. (One strawman might be to declare such a protocol as protocol Convertible(from: T, to: U), provide conformances by extension Convertible(from: Int32, to: Int64), and express constraints as <T, U> where Convertible(from: T, to: U).)
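The functional-dependency point above can be made concrete with a small sketch: an Array's Element is fully determined by the array's type, so a generic function can always recover it (elementType is a hypothetical helper name).

```swift
// Sketch: Element is functionally determined by the array's static type,
// so it can be recovered from any value of that type.
func elementType<Element>(of _: [Element]) -> Any.Type {
    Element.self
}

print(elementType(of: [1, 2, 3]) == Int.self)     // true
print(elementType(of: ["a", "b"]) == String.self) // true
```

No such recovery is possible for a multi-Self conformance, since no single participating type determines the others.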


At the protocol declaration, only a single primary associated type is allowed.

I don’t understand the motivation for this restriction. Why can’t I have a protocol Functor<Arg, Return>?

Right, as you mention, this would expose part of the concrete type's API through the opaque type, whereas normally the opaque type's behavior is defined entirely in terms of its generic requirements. I would rather we come up with a way of abstracting over the throwing effect via an opaque type than go down the route of exposing implementation details of the concrete type.

The gotcha is that a some return type cannot change its effects. If it ever threw, it may not become non-throwing without breaking at the very least the API contract (perhaps the ABI), or vice versa.

Right, exactly. This would defeat the purpose of an opaque type at least to some extent.

So unless we surface generic effects or some other solution in that space of determining the throwyness of a conformance, I fear that this feature won't be usable meaningfully for any AsyncSequence work.

You would only be able to write code that assumed the AsyncSequence throws -- that doesn't seem like the end of the world, does it?


At this point several people have pushed back. There's no real motivation for this restriction, except that it made my prototype implementation with the @_primaryAssociatedType attribute simpler. In principle I'm not opposed to generalizing it to allow multiple associated types (but at that point I'd like us to come up with a better term than "primary associated type").
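A hedged sketch of what that generalization could look like, using the multi-parameter spelling from the question above (protocol Functor<Arg, Return>); this is hypothetical relative to the pitched restriction, and Stringify is an invented conformer:

```swift
// Hypothetical: two primary associated types on one protocol.
protocol Functor<Arg, Return> {
    associatedtype Arg
    associatedtype Return
    func apply(_ value: Arg) -> Return
}

// An invented conformer; Arg and Return are inferred as Int and String.
struct Stringify: Functor {
    func apply(_ value: Int) -> String { String(value) }
}

// The opaque result type can then constrain both primary associated types.
func makeStringifier() -> some Functor<Int, String> {
    Stringify()
}

print(makeStringifier().apply(42)) // "42"
```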


Well, by your own suggestion, the signature should be some AsyncSequence<T, nothrow> or something like that. So to me it feels like this is incomplete and should not yet be used for AsyncSequence, yet the pitch uses that as one of its first justifications.

edit: clarification on my saltiness
I think that generally, for non-effectful types such as Collection etc., this pitch is amazingly useful. I just wish we had a better way of expressing the types that have effects. some Sequence<String> is great syntax imho. I just hope we can get to a point very soon that allows for some AsyncSequence<String> too, without much more overhead for the developer.
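For the non-effectful case, the pitched syntax can be sketched like this (evens is a hypothetical function name; the opaque return type constrains Sequence's primary associated type):

```swift
// The caller learns only "some Sequence whose Element is Int";
// the concrete StrideTo<Int> stays hidden and can change later.
func evens(below limit: Int) -> some Sequence<Int> {
    stride(from: 0, to: limit, by: 2)
}

print(Array(evens(below: 10))) // [0, 2, 4, 6, 8]
```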


I am confused by this explanation. An expression that evaluates to an instance of Array must have a static type of Array<Element>, so “recovery” is trivial. For generic protocols, does “recovery” of type parameters occur only in the presence of existentials? Can the same restriction not be applied to generic protocols to require their types always be fully specified? E.g.:

protocol Convertible<To> {
  func convert() -> To
}

extension Int: Convertible<Float> {
  func convert() -> Float { ... }
}

extension Int: Convertible<String> {
  func convert() -> String { ... }
}

let myNumber: Int = 1234
(myNumber as Convertible<Float>).convert() // 1234.0
myNumber.convert() // error: multiple matches for func convert()
myNumber as any Convertible // error: incomplete type Convertible<_>

+1, I'm very much in favor of this change. If it means we can have opaque return values with specified associated types, this is absolutely a much-needed feature.

On the topic of "generic protocols" and Rust, I'm not sure if the feature I'm thinking of in Rust is the same as what people mean by "generic protocols". If I understand correctly, Rust's feature is more about how to express many-to-many ad-hoc type relationships. The canonical example being what types can be cast to what types.

For that feature, I don't think we want type-parameter syntax, because what we're defining is not parametric, it's ad-hoc. It's analogous to function overloading rather than generics. A subset of these would be a single type conforming to a protocol in multiple ways.

I've hit this need before with String, which is a type that provides multiple models of string: extended grapheme clusters, Unicode scalar values, and UTF-8 code units. If a type wanted to conform to a protocol dealing with String, but provide a different conformance for each of String's models, then currently that type must be parameterized somehow (e.g. over the Element of the corresponding view). Or, the type has to provide a unique type for each conformance and add API to access them.

For example, in the consumer/searcher prototype from way back when, I was trying to conform CharacterSet to a protocol over String. I wanted to support both scalar and grapheme cluster processing. For that, I had to make separate types. Funnily enough, while the fundamental limitation of not allowing multiple conformances is still present, allowing these types to be opaque return values could alleviate some of the bloat associated with these kinds of workarounds.

That prototype is graduating into something real. If this avoids having every single generic algorithm needing to define multiple public types (and symbols!), that honestly could be the difference between shipping this in the stdlib or not.

I've also hit this same issue in multiple different APIs I wanted to add to System. Without opaque return values with associated types, I couldn't justify shipping them.


Yeah, I was discussing this with @Joe_Groff privately and he explained to me that Rust's generic traits are analogous to Haskell's multi-parameter typeclasses. So rather than a type T conforming to a "generic instantiation" of a protocol MyProto<U>, it is more that a pair of types (T, U) together conform to MyProto.
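A hedged sketch of that framing in today's Swift: since no single type owns the relationship, a concrete "witness" value can stand in for the (Source, Target) pair that a multi-parameter conformance would describe (Conversion and widen are invented names, not proposed API):

```swift
// A value-level stand-in for "the pair (Source, Target) conforms to Convertible".
struct Conversion<Source, Target> {
    let convert: (Source) -> Target
}

// One witness per type pair; several pairs can involve the same Source.
let widen = Conversion<Int32, Int64> { Int64($0) }

print(widen.convert(7)) // 7
```

A language-level multi-parameter conformance would let the compiler find such witnesses implicitly instead of threading them by hand.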