[Pitch 2] Light-weight same-type requirement syntax

Agreed—an example like this would be great to note in the proposal text.

In particular, banning struct Lines: Sequence<String> helps reinforce the notion that these are not generic protocols: a developer coming from e.g. Java can be guided into the realization that only one conformance to a protocol is allowed, by nature of only : Sequence being supported.
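To make that concrete, here is a hypothetical Lines type (my own illustration, not from the proposal text): under the pitch, the conformance itself still names the bare protocol, with the element type fixed through the ordinary associated-type machinery, while struct Lines: Sequence<String> would be rejected.

```swift
// Hypothetical illustration: `Lines` conforms to the bare protocol; the
// pitched `Sequence<String>` spelling would be reserved for generic
// constraints, not for conformance declarations.
struct Lines: Sequence {
    let text: String
    // Element is inferred as String from the iterator's element type.
    func makeIterator() -> IndexingIterator<[String]> {
        text.split(separator: "\n").map(String.init).makeIterator()
    }
}

let lines = Array(Lines(text: "a\nb"))  // ["a", "b"]
```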


What is the performance impact of using this? For example, would it be reasonable to have the map operation return a some type? If a type is frozen and inlined for performance, do we keep that performance or is it obscured by the some-ness?


No, just conceptual simplicity at this point.


If the function is inlinable, the opaque type is effectively just sugar and callers will see the concrete underlying type at the SIL level. If the function is not inlinable, callers will manipulate it abstractly.
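A minimal sketch of what that means in practice (names and code are mine, and the some Sequence<Int> spelling assumes the sugar being pitched here; pre-5.7 one would write plain some Sequence):

```swift
// Sketch: callers only see `some Sequence<Int>` at the source level, but
// because the function is @inlinable, the underlying
// LazyMapSequence<[Int], Int> is visible to the optimizer at the SIL
// level, so no abstraction cost need remain after specialization.
@inlinable
public func doubled(_ values: [Int]) -> some Sequence<Int> {
    values.lazy.map { $0 * 2 }
}

print(Array(doubled([1, 2, 3])))  // [2, 4, 6]
```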

The underlying concrete type does not need to be frozen for the inlinable optimization to occur; it being frozen is orthogonal to the opaqueness of the function's result.


I was initially quite hesitant about the use of generics syntax to parameterize protocols by their associated types, but I am much more happy about this iteration.

I think my happiness is because of the alignment between how the parameterization is declared and how it’s used. To my mind, it’s basically saying that this is the way in which protocols are parameterized in Swift. Yes, it eliminates room for generic protocols but (as explained in the Generics Manifesto) that feature isn’t really what we’d want to support for very good reasons.

I think this iteration makes it plain, as pointed out by @Jumhyn, that the limitation to one parameter seems arbitrary. Indeed the proposal text offers an example off the bat of SetProtocol<Element>, which makes it pretty glaring that the corresponding hypothetical DictionaryProtocol<Key, Value> isn’t allowed. Not sure that an arbitrary restriction adds to rather than detracts from conceptual simplicity here.

Otherwise I think I’m growing to really like how this is shaping up.

Not to derail the main conversation, but is #if $feature ever going to be pitched for Swift Evolution, or will it remain an undocumented internal feature?


I'm still convinced that this feature is bad both syntax-wise and semantics-wise, due to conflating two orthogonal concepts (generic parameters and associated types) into one syntax:


Previous comments

I can't comment on how feasible generic protocols would be to implement in Swift, but as an avid user of generic traits in Rust I can at least try to give some context as to why one would want them (or something equivalent) in Swift:

It's fairly long, so folding it:

Type-safe, compiler-checked value conversions


Swift (like Rust) chose to make type conversions explicit. This is great from a correctness and safety perspective, but tends to be a real hindrance when trying to write abstract code that's generic over its types. The main reason for this is that Swift lacks a way to abstractly express type convertibility.

Say you have a function that is supposed to calculate the fitness of a given value in respect to a certain "ideal" value:

func fitness(actual: Double, ideal: Double) -> Double {
    (actual == ideal) ? 0.0 : abs(actual - ideal) / max(abs(actual), abs(ideal))
}

(This formula is taken from step 7 of the "fitness distance" algorithm defined by the WebRTC spec, which I had to implement recently and where I found myself—yet again—in need of generic protocols.)

Now say that the values you need to compute the fitness for have different types than Double, say Int:

func fitness(actual: Int, ideal: Int) -> Double {
    fitness(actual: Double(actual), ideal: Double(ideal))
}

or Float:

func fitness(actual: Float, ideal: Float) -> Double {
    fitness(actual: Double(actual), ideal: Double(ideal))
}

Now, these two latter functions work fine on their own. But they are of no use when you need to abstract over the input type.

Sure, for this particular scenario you could write a protocol like this:

protocol DoubleConvertible {
    func asDouble() -> Double
}

and just have all necessary types conform to said protocol.

func fitness<Value>(actual: Value, ideal: Value) -> Double where Value: DoubleConvertible {
    fitness(actual: actual.asDouble(), ideal: ideal.asDouble())
}

But this approach has two major limitations:

  • it only "solves" the issue for Double. What about Float? Or Int? Or String? …

  • you can't make fitness(actual:ideal:) generic over multiple types:

    func fitness<Input, Output>(actual: Input, ideal: Input) -> Output where Input: ???Convertible {
        fitness(actual: actual.as???(), ideal: ideal.as???())
    }

If we were to declare type-specific protocols for every type we possibly might want to convert to, we would be polluting our project's namespace with an immense amount of redundant garbage (and not be gaining much from it semantically, either).

Just do the math: given N interchangeably convertible types, we would need to define N individual protocols of the pattern protocol <…>Convertible { … } and come up with N unique, yet expressive method names to go with them.

And what if this N keeps getting bigger and bigger? And what if, instead of dealing with concrete types, you were working with generic types or methods/functions, and hence had no way to have the Swift compiler pick the right explicit protocol for a given generic type T?
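To illustrate the scaling problem, here is a sketch with just three conversion targets (protocol and method names are mine):

```swift
// Each conversion target needs its own bespoke protocol and method name;
// with N targets this grows linearly, and none of it composes generically.
protocol DoubleConvertible { func asDouble() -> Double }
protocol FloatConvertible  { func asFloat() -> Float }
protocol StringConvertible { func asString() -> String }

// And every participating type must repeat the dance per target:
extension Int: DoubleConvertible { func asDouble() -> Double { Double(self) } }
extension Int: FloatConvertible  { func asFloat() -> Float { Float(self) } }
extension Int: StringConvertible { func asString() -> String { String(self) } }
```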

This is where generic protocols save the day!

protocol ConvertibleInto<T> {
  func into() -> T
}

But wait, can't we do this already with a protocol with associated types, like so? …

protocol ConvertibleInto {
    associatedtype T
    func into() -> T
}

Well, it depends. While you could easily implement a String conversion for Foo (akin to CustomStringConvertible) in terms of …

struct Foo {
    let bar: Int
}

extension Foo: ConvertibleInto {
    typealias T = String
    func into() -> T {
        return …
    }
}

… you would get in trouble as soon as you were to decide that it would be nifty to also be able to convert instances of Foo to Data:

// error: redundant conformance of 'Foo' to protocol 'ConvertibleInto':
extension Foo: ConvertibleInto {
    // error: invalid redeclaration of 'T':
    typealias T = Data
    // error: invalid redeclaration of 'into()':
    func into() -> T {
        return …
    }
}

In today's Swift a protocol (regardless of whether it has associated types or not) can only be conformed to once by any single type. In order to achieve the polymorphic semantics of protocol ConvertibleInto, however, we would need a way to allow multiple conformances per type.

Generic protocols would allow for this:

extension Foo: ConvertibleInto<String> {
    func into() -> String {
        return …
    }
}

extension Foo: ConvertibleInto<Data> {
    func into() -> Data {
        return …
    }
}

In addition to a hypothetical ConvertibleInto<T> we would probably also want to have access to a corresponding ConstructibleFrom<T> protocol going the other way:

protocol ConstructibleFrom<T> {
    init(from value: T)
}

As such a hypothetical Real type might be constructible from both Float and Double, e.g.:

struct Real { /* ... */ }
extension Real: ConstructibleFrom<Float> { /* ... */ }
extension Real: ConstructibleFrom<Double> { /* ... */ }

(We will be using extension<…> as a hypothetical syntax for introducing generic arguments into an extension scope.)

Going even further, one might want to have the Swift compiler automatically derive a conformance T: ConvertibleInto<U> for every pair of types where U: ConstructibleFrom<T>, with a default implementation like so:

extension<T, U> T: ConvertibleInto<U> where U: ConstructibleFrom<T> {
    func into() -> U {
        return U(from: self)
    }
}

One might also want to have every type T auto-derive conformance to T: ConvertibleInto<T> like so:

extension<T> T: ConvertibleInto<T> {
    func into() -> T {
        return self
    }
}

With generic protocols at one's disposal one could —for example— unify all of …

  • UIColor's var cgColor: CGColor
  • UIImage's var cgImage: CGImage?
  • UIImage's var ciImage: CIImage?

… into the single universal ConvertibleInto<T> protocol, and individually conform to it like so:

extension UIImage: ConvertibleInto<CGImage> {
    func into() -> CGImage {
        return …
    }
}

… which would then allow one to nicely write …

func draw<T>(image: T) where T: ConvertibleInto<CGImage> {
    let cgImage: CGImage = image.into()
    // …
}

… having it accept images of any type that's convertible to CGImage (e.g. UIImage, NSImage, CGImage, …).

Generic overloading


With all this talk about multiple conformances of protocols, one might wonder: "wait, isn't that what Swift has function overloading for? We can already implement variants of a function based on argument and/or return types, so why do we need protocols for that?"

And of course there is some truth to this sentiment.

Let's assume we wanted to write an efficient and type-safe linear algebra framework for Swift. We would probably end up defining types for scalars, vectors, and matrices, like so:

struct Scalar<T> { /* ... */ }
struct Vector<T> { /* ... */ }
struct Matrix<T> { /* ... */ }

And it would not take long until one needed some way to perform arithmetic operations on them, such as multiplication:

extension Vector {
    // vector scaling:
    static func *(lhs: Self, rhs: Scalar<T>) -> Vector<T> { /* ... */ }
    // dot product:
    static func *(lhs: Self, rhs: Vector<T>) -> Scalar<T> { /* ... */ }

    // vector matrix product:
    static func *(lhs: Self, rhs: Matrix<T>) -> Vector<T> { /* ... */ }
}

Notice how each method's return type directly depends on the type of rhs.

This works as long as one is dealing exclusively with explicit types. But sooner or later one would want to be able to generalize over scalars, vectors, and matrices. (After all, from the point of view of algebra, they are just tensors of rank 0, 1, or 2 respectively.)
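Concretely, here is what the overload-based version looks like in today's Swift, specialized to Double for brevity (sketch, code mine):

```swift
struct Scalar { var value: Double }
struct Vector { var elements: [Double] }

extension Vector {
    // vector scaling:
    static func * (lhs: Vector, rhs: Scalar) -> Vector {
        Vector(elements: lhs.elements.map { $0 * rhs.value })
    }
    // dot product:
    static func * (lhs: Vector, rhs: Vector) -> Scalar {
        Scalar(value: zip(lhs.elements, rhs.elements).reduce(0) { $0 + $1.0 * $1.1 })
    }
}

// The overloads work on concrete types, but there is no protocol one
// could require in a generic context that captures "multipliable by
// Vector yielding Scalar".
let v = Vector(elements: [1, 2])
print((v * Scalar(value: 2)).elements)  // [2.0, 4.0]
print((v * v).value)                    // 5.0
```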

As it turns out there is no way to express overloading in today's Swift from the perspective of protocols. It's a dead spot in Swift's generics type system.

If one had generic protocols at one's disposal, however, one could express things like this:

protocol Multiplication<Rhs = Self> {
    associatedtype Output
    static func *(lhs: Self, rhs: Rhs) -> Output
}

// vector scaling:
extension Vector: Multiplication<Scalar<T>> {
    typealias Output = Vector<T>
    static func *(lhs: Self, rhs: Scalar<T>) -> Output { /* ... */ }
}

// dot product:
extension Vector: Multiplication<Vector<T>> {
    typealias Output = Scalar<T>
    static func *(lhs: Self, rhs: Vector<T>) -> Output { /* ... */ }
}

// vector matrix product:
extension Vector: Multiplication<Matrix<T>> {
    typealias Output = Vector<T>
    static func *(lhs: Self, rhs: Matrix<T>) -> Output { /* ... */ }
}

Now whenever one needs to be generic over a type U and require it to be multipliable with Vector<T>, one could express it through where Vector<T>: Multiplication<U>.

These are just two use-cases for generic protocols off the top of my head; there are many more.

I don't see how avoiding having to write Collection<.Element == Int> in favor of Collection<Int> is worth effectively shutting the door for any possibility of proper generic protocols ever making it into Swift.

Not being able to express overloading (and abstracting over it in generic contexts) in protocols is one of the biggest gaps in Swift today and a daily annoyance for anybody writing generics-heavy code in Swift.

Details hidden because this is a bit of a tangent, but still slightly relevant insofar as it impacts the syntax here

I have always understood the generic protocols as having semantics similar to the (intuitive to myself) semantics of the Generic<T>.Interface example. Just as Generic<Int> and Generic<String> are distinct types related by being parameterizations of the same generic base, Generic<Int>.Interface and Generic<String>.Interface are distinct protocols related by being (indirect) instantiations of the same generic base.

For a hypothetical "true" generic protocol MyGenericProto<T>, types would be able to conform separately to MyGenericProto<Int>, MyGenericProto<String>, etc. etc.

Beyond my intuition about what "generic protocols" mean, though, I have no idea about the ultimate soundness or feasibility of such a system. I myself have not really encountered a spot where I wanted to use a generic protocol, so using the "obvious" syntax for generic protocols as the syntax for same-type constraints is not that concerning for me.


Yeah, that basically sums up my feelings about this feature too.

Meta-process note

This "rethrowing protocol conformances" feature really needs to receive a proper review and acceptance. If its current not-really-official-Swift-but-in-main status is now impeding the evolution process, that should be rectified ASAP. I don't think it's very reasonable for rethrowing conformances to constrain the design of other features until it has been reviewed and its own design finalized (or an alternative solution adopted).


A thought that just came to me:
The Swift runtime already supports a type having multiple conformances to a protocol. Does (or can) the Swift runtime support looking up a conformance of a type to a protocol with its associated types bound to particular types, e.g. "find a/the conformance of T to Numeric with Magnitude == UInt, if one exists"?
If it can, wouldn't generic protocols merely be surface-language syntax for multiple conformances?

The issue I raised is not just about @rethrows; it applies to any protocol conformance that has an effect.

The sample code I wrote actually presents the problem just as easily with types that do not use @rethrows as it does with that attribute.

I think what you're describing is overlapping conditional conformances:

extension Array : Equatable where Element : Equatable {}
extension Array : Equatable where Element : SomeOtherProtocol {}

This is a very complicated feature which raises a lot of questions and probably makes type checking undecidable.


I think the way we would support effects propagation through opaque types is with rethrowing protocols or similar, and annotate the opaque type, e.g. some AsyncSequence throws. If the compiler infers effects from a concrete type and propagates them, that would break the ability to change the underlying concrete type, which is an important feature of opaque return types.


Since opaque types do not expose their underlying concrete type (by definition), I think the only general solution to your problem is some kind of effect declaration on the opaque type itself. For example, if AsyncSequence is @rethrows, you want to be able to write

-> some AsyncSequence<Element, nothrow>

or something like that. Otherwise, we don't have a concrete conformance to look at (we're not allowed to) in order to determine if the witness throws or not.


I don't think it necessarily does?
An example:

protocol Into { associatedtype Other }
// bad syntax because of the duplicate typealias,
// but imagine both of these conformances could exist:
extension Int8: Into { typealias Other = Int16 }
extension Int8: Into { typealias Other = UInt16 }
// generic protocols would just enforce that the above conformances were related in some way, I think.

func foo<T: Into>(_ value: T) where T.Other == Int16 {}

// we need not just some conformance of Int8 to Into,
// but specifically the one with `Other == Int16`
foo(8 as Int8)

Yeah I'm coming around to the idea that we may indeed want generic protocols eventually. Rust moves a lot faster than we do, so their generics system is a lot more capable than ours, and they provide some compelling examples of generic traits in practice.

I think that anybody dismissing that idea should ensure that their opinions are properly informed, by examining languages where this has actually been implemented, and using that as a basis for explaining why it is not right for Swift.

Definitively settling that discussion is a prerequisite to consider this proposal, IMO. We can't realistically consider it if there's even a chance we might add generic protocols in the future.

EDIT: Although to be clear: I don't think this thread is the right place for a detailed discussion about whether we'll ever want generic protocols. It's just something we need to decide before this syntax becomes viable.


Don't they expose the protocol conformance? For example, if a protocol requires a function foo that throws and has a return type of String, that requirement is surfaced to the opaque type. One way of approaching that would be: the manner in which that function satisfies the protocol's function requirement (as a subtype of it) is surfaced in the same way.

Therefore the type of the function on the opaque type mimics the type of the function on the non-opaque type. If the non-opaque type throws for a conformance, then the opaque type would throw; likewise, if the non-opaque type does not throw to satisfy the protocol, then the opaque type should not throw for its satisfaction.

Doing it that way splits the problem into two parts: one being the reflection of the conformance subtyping of the functions as witnesses to the protocol, and two being the generic effects of said conformance.

The gotcha is that a some return type cannot change its effects. If it ever threw, it may not become non-throwing without breaking, at the very least, the API contract (perhaps even the ABI), or vice versa.

But that gotcha exists no matter what. So unless we surface generic effects or some other solution in that space of determining the throwyness of a conformance I fear that this feature won't be able to be used meaningfully for any AsyncSequence work.

Could we punt and change the proposed declaration syntax to protocol AsyncSequence<associatedtype Element>? If a definite decision is made not to ever implement generic protocols, the associatedtype keyword could become optional.


It's a good idea, but we'd still have two of the three places where these very different concepts would share the same syntax: the site where you declare a conformance, and the site where you use the protocol.

I don't think it's obvious that sharing that syntax would be okay, or make the system simpler overall.

I also wonder about things like associated protocols, which would be huge, and how this might work if the associated protocol were also generic. I know these things can seem a bit abstract or "too advanced", but they have real, practical uses - like saying I have a protocol TestSuite with an associated protocol TestSuite.Stubs (i.e. each suite has its own set of stubs), and maybe that protocol can be parameterised to declare the various different TestEnvironments it supports.

I don't know. It needs a lot of careful thought, and as I mentioned before, some ideas about where we want to ultimately go and where the limits are. I just don't think it's obvious that we should start mixing up the syntax like this before we have that big picture.


What you're referring to as "generic protocols" are really more like "multi-Self" protocols, where there are multiple types involved in a conformance without a functional dependency between them, in contrast to the relationship from the Self type to associated types in a protocol conformance today.

Although Rust uses generics syntax for these, I don't think that's necessarily the best choice, because it implies that one type is more important to the relationship, and that is the exact opposite of what the feature means. Generic argument syntax, on the other hand, already implies a functional dependency for non-protocol types: given any instance of Array, for instance, you can recover its Element type from that instance, since there is no value that is both an Array<Int> and an Array<String>. By analogy, any generic value using a particular conformance has only one possible binding for its associated types, so it seems appropriate for primary associated types on a protocol to be notated that way as well.

We can adopt this syntax now, and still consider other ways to express multiple-parameter conformances. (One strawman might be to declare such a protocol as protocol Convertible(from: T, to: U), provide conformances with extension Convertible(from: Int32, to: Int64), and express constraints as <T, U> where Convertible(from: T, to: U).)