Improving the UI of generics

@Joe_Groff Do you see no chance to free up the .Type namespace in Swift?

Btw. @Nickolas_Pohilets here is the translation from Joe's notation to the other meta keyword I was using.

typealias Meta<T> = meta T

P.Protocol == (any P).Type == Meta<any P> == Meta<P> == meta P
    P.Type ==   any P.Type == any Meta<P> == any meta P

Why is there any in the last line?

Right now there are two kinds of metatypes that are merged into a single construct, which makes them really hard to work with and which also explains why there is no pure-Swift implementation of the type(of:) function.
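For concreteness, here is how the two kinds show up in today's Swift: `P.Protocol` is the concrete metatype of the protocol type itself (its only value is `P.self`), while `P.Type` is the existential metatype that can hold the metatype of any conforming type, and `type(of:)` bridges between values and the latter:

```swift
protocol P {}
struct S: P {}

// The concrete metatype of the protocol type itself.
let protocolMeta: P.Protocol = P.self
print(protocolMeta) // prints "P"

// The existential metatype, holding the metatype of a conforming type.
let existentialMeta: P.Type = S.self
print(existentialMeta) // prints "S"

// type(of:) dynamically produces the existential metatype.
let value: P = S()
let dynamicMeta = type(of: value)
print(dynamicMeta == S.self) // prints "true"
```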

Around the end of Swift 3 we were working on a proposal that tried to push a metatype revamp through, back when a proposal author didn't need to provide an implementation.

Here is that document:

Back then we sliced .Type and .Protocol into Type<T> and AnyType<T>, where the latter is an existential-like type, but for metatypes.

If we now rename Type<T> to Meta<T> and exchange the Any prefix for the any keyword, we get exactly the same types as above:

  • AnyType<P> == any Meta<P> == any P.Type
  • Type<P> == Meta<P> == (any P).Type

Using Joe's syntax version type(of:) would probably look something like this:

func type<T>(of value: T) -> any T.Type

And a 'potential' subtype(of:named:) like so:

func subtype<T>(of type: T.Type, named name: String) -> (any T.Type)?

What advantage do you perceive in Type<P> or Meta<P> over meta P? In that syntax, wouldn't your any Meta<P> just be any meta P?

None, that is an old proposal and I like your idea of the meta keyword a lot more. ;) The syntax is different, but the behavior remains exactly the same.


I thought I would be bumping an old thread, but apparently this one is still (kind of) alive!

I didn't get a chance to comment while this was fresh, so I'll leave my thoughts now. While the post is factually/technically accurate, of course, I feel that it's a bit too dismissive of existentials and almost implies that they are useless or a mistake, or that they're not in the language's future plans. I very much hope that isn't true, and that improving existentials is still on the roadmap somewhere.

I think it's important to argue the case for existentials. Sometimes the post (document?) draws a sharp, fundamental line, and clearly acknowledges that they are very different things, meant for different purposes; and at other times it directly compares them, as though they were interchangeable, and unsurprisingly finds existentials coming up short.


This isn't a brilliant example, IMO - as the rest of the post explains, existentials and generics are entirely different things. I wouldn't say that writing the function this way "loses" type information - it's a different thing entirely.

Maybe the difference isn't obvious enough in the syntax, or perhaps this is the first thing users would try to write and it wouldn't have the behaviour they expect. That's a notation question, and I wouldn't presume to know how others learn to code.

That's a bit of a loaded statement. It's true that existentials can't provide the same type-level guarantees as generic parameters, but that's because that's not what they do. As you said, they are value-level constraints/abstractions. That is the critical thing that makes existentials so useful in the first place. You could equally say that generics won't ever quite reach the flexibility of existentials.

Neither has inherently more "power" than the other.

I really dislike this idea. It's pitched as a solution to accessing associated types from existentials, but I think it is entirely the wrong solution to that problem.

The Collection indexing example proves it - all it does is force-cast. If you consider the various ways this could be used, you'll discover they all amount to force-downcasting. It's no different to saying:

extension Collection {
  subscript(idx: Any) -> Any { self[idx as! Index] }
}

... which you could do today. But I think we can all agree that it's awful.

So why not hoist the casting up a level? Why not have the caller guarantee that the index really is of type (dynamic type of existential 'c').Index? That could be done via conditional casting, or by tracking the provenance of returned values somehow. Then you could call the original Collection method directly and there would be no need for any of this "existential self-conformance" malarkey.

And as it just so happens, the very next point gives us a way to do that:

This is what we should focus on IMO, because it so precisely addresses the issue. If we had a way to talk about the specific type inside an existential, issues with associated types and uses of Self pretty much melt away. This is a big hole in the type system anyway: while you can box a value of any type (including a generic type) in an existential box, and transfer it between different boxes (sometimes), you can't actually, truly un-box the existential unless you know the specific type it contains (which defeats much of the purpose of using existentials in the first place).
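As an aside for later readers: you can approximate this un-boxing today by funneling the existential through a generic function, and since Swift 5.7 (SE-0352) an existential argument is even implicitly opened at such a call. A minimal sketch (the helper name is mine):

```swift
// Inside the generic function, C is bound to the box's dynamic type,
// so C.Index and C.Element are usable type-safely.
func withOpened<C: Collection>(_ c: C) {
    let idx = c.startIndex        // statically typed as C.Index
    if idx != c.endIndex {
        print(c[idx])
    }
}

let boxed: any Collection = [1, 2, 3]
// Since Swift 5.7, the existential is implicitly opened here:
withOpened(boxed)                 // prints "1"
```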

I still feel that the post could be kinder towards this approach, though. There are multiple possible interpretations for what "computations derived from a single existential value" could mean:

  1. Does it mean this approach wouldn't scale to multiple values?

    That's not true - we could support conditionally-downcasting other values to type X:

    let <X: Collection> openedX = x // X is now bound to the dynamic type of x
    var start = openedX.startIndex // type: X.Index
    if let openedOther = other as? X {
      // 'openedOther' is also of type X
      start = openedOther.startIndex // type-safe.
    }

    Or unboxing them to their own types, with constraints based on X:

    var objects: Collection = ...
    var openedObjects: <X: Collection> = objects
    var destination: Collection = ...
    if var rrc = destination as? <R> where R: RangeReplaceableCollection, R.Element == X.Element {
      rrc.append(contentsOf: openedObjects)
      destination = rrc
    }
  2. Does it mean that it wouldn't support writing func foo<T>(a: T, b: T) -> [T]? (i.e. binding multiple parameters to the same type).

    Because that seems obvious. Of course a value-level abstraction is not the right thing for expressing constraints across values. That's not what it's for. Just like opaque types have difficulty expressing constraints across different functions. You need a lexically-higher scope to define a single thing that the various abstractions can reference in their own constraints.
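For contrast, the constraint that signature expresses - "a and b have the same type" - is exactly what a generic parameter already does at the type level, which is why no value-level feature needs to cover it:

```swift
// The type-level "both arguments share one type T" constraint:
func makePair<T>(_ a: T, _ b: T) -> [T] {
    [a, b]
}

print(makePair(1, 2))   // prints "[1, 2]"
// makePair(1, "two")   // error: T cannot be both Int and String
```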

What's more - this idea of introducing a local type that we use for unboxing would be great for code that doesn't even use existentials, too. For example, it could allow us to up/downcast protocols with associated types (e.g. casting from Collection -> RandomAccessCollection).

extension Collection {
  func myAlgorithm() { print("Collection default") }
}
extension RandomAccessCollection {
  func myAlgorithm() { print("RAC default") }
}

func doSomething<C: Collection>(_ objects: C) {
  if let rac_objects = objects as? <R: RandomAccessCollection> {
    rac_objects.myAlgorithm() // "RAC default"
  } else {
    objects.myAlgorithm() // "Collection default"
  }
}

(with the compiler inferring that R.Element == C.Element, R.Index == C.Index, as it's a downcast)
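To make the motivation concrete, here is what happens in current Swift without such a downcast: inside the generic function, overload resolution is static, so the Collection overload is always chosen, even when the concrete argument is random-access. (Returning strings instead of printing, purely so the results are checkable.)

```swift
extension Collection {
    func myAlgorithm() -> String { "Collection default" }
}
extension RandomAccessCollection {
    func myAlgorithm() -> String { "RAC default" }
}

func doSomething<C: Collection>(_ objects: C) -> String {
    // C is only known to be a Collection, so this statically
    // dispatches to the Collection extension.
    objects.myAlgorithm()
}

print(doSomething([1, 2, 3]))  // prints "Collection default", even though Array is random-access
print([1, 2, 3].myAlgorithm()) // prints "RAC default" - the concrete type sees the better overload
```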

Anyway, those are my thoughts. I hope existentials haven't been forgotten about - there is clearly some design work to do, but I don't see anything fundamentally flawed.


No one's forgotten about existentials, don't worry. I don't think there's any contradiction in what I wrote and what you said—existentials do lose static information, and you'd have to use casts to recover it. We should certainly make it possible to write those casts, but we should also make it possible to express statically type-safe APIs that don't fundamentally rely on casting.


As discussed in this thread last year, using new syntax to refer to existentials as any TypeName and generics as some TypeName would make these two features more equally represented. Also it could help in understanding the differences and when to use which.


Absolutely - I even said "the post is factually/technically accurate". I just feel that in trying to make the case to the community about the value of opaque types (which it clearly succeeded in doing), it ends up reading a bit unfair towards existentials. That's just my impression, and I wanted to clarify whether existentials have a future in the language and to make the case that they should.

I'm very, very happy they haven't been forgotten about 🙂.

I've always been a massive fan of using the word "any" for existentials. I think it makes the whole model a lot simpler to teach, to learn, to understand and use.

There were some fears in the thread that SIMD's any(...) free-function might make it impractical to use that word for something else, but I really hope we can find a way around it. I can't think of another word that's as accurate and concise as any.


Anything new on this?


Wouldn't the fact that any ProtocolName is only used in a type position stop there from being conflict between it and the any(...) function? Or are we worried about humans being confused by the reuse?

Yes, I'm working on an implementation of general opaque result types, i.e. func foo() -> <T> T where T.U == Int.


Just reading this thread and thought I'd chime in on this. I've done a lot of C# programming over the last couple of years and the main conceptual difference between protocols in Swift and interfaces in C# is that in C# an interface is a reference type, effectively a specialised abstract class. So old concepts are re-used. "Conforming to an interface" = "inheriting the interface". "Interface as a type" vs "interface as a constraint" doesn't arise because the interface is essentially a class and its use as an "existential type" is just plain old inheritance.

I'm not saying things don't get more complicated under the hood or that the above holds perfectly when conforming structs to interfaces (I hardly ever use structs in C#) but this is how things are presented to the programmer and it works well. If you use an interface the way you would an abstract class you virtually never go wrong.

I think where Swift gets complicated by comparison is because protocols are a completely different thing compared to classes/structs and deviate from traditional ideas of inheritance (whilst retaining the same syntax). I'm sure there are reasons for this, just pointing out how it feels to the programmer.

Having said this, associated types in Swift protocols are fantastic compared to generic parameter types in C# interfaces. The amount of boilerplate in C# generics due to the absence of associated types is unbelievable. I wrote a comment in support of a proposal to add associated types to C# interfaces (terminology note: "existential types" means associated types in that proposal). Personally I have come to the view that associated types and generic type parameters complement each other well and I am pretty confident the C# team will add associated types (including the Self type) to C# interfaces after they finish their work on static interface members.


That doesn't really clarify much in my mind. Interfaces in C# don't seem to me any more like inheritance than protocols in Swift. In C# you can only inherit from a single base class, but you can conform to multiple interfaces (even with overlapping methods). Calling C#'s interface semantics "inheritance" doesn't help me understand the distinction at all.

As far as I can tell the answer to "why was Swift designed this way instead" is "so that it can be better optimized", but I guess I'm still not convinced that was a worthwhile tradeoff. It just feels like at this point you need to be an expert in type systems to make use of Swift generics with ease, and I never felt that way when working with C#. For sure there were things I couldn't always express in C# generics that maybe I wished I could, but it seemed like the simple cases were simple, and in Swift I don't feel like that's true today.

So far in my career I've used C++ templates, C# generics, and now Swift generics. I remember when learning C++ templates the syntax sometimes was confusing, but overall it was relatively simple to understand how they worked and how to use them. For C# I remember feeling limited at times in what I could do (what kinds of constraints I could use), but otherwise it felt very straightforward. In Swift I often feel like I have no clue what I'm doing, and it feels like even simple things are just overly difficult. I don't feel more productive. I feel like "if I'm not writing a library for a large audience then maybe I just shouldn't even bother because it's probably not worth the effort". To me that just feels like a miss...

Even so, I think the recent proposals to allow existentials to be used in more places will help a lot. Other proposals, though, just feel like they require so much more conceptual understanding to even get started, which, again, I never really felt when using C++ or C#.

If you're familiar with C# generics, then associated types are in most respects isomorphic to generic parameters on interfaces. If not for the limitations on existentials, which IMO have lasted for way too long, they can be used in pretty much all of the same situations. I would say that the design choice to use associated types was driven not so much by performance as by scalability and library evolution concerns; you should be able to add associated types to a protocol that aren't necessarily part of the primary interface, without needing to then specify that associated type every single place you use the protocol as a constraint, like you would with the generic parameters to a C# interface. For example, with a fully armed and operational Swift type system, you ought to be able to use Collection where .Element == Int, without having to also bind the Collection's generator, subsequence, index, and other accessory associated types, but Collection can still use those associated types to provide a stronger-typed relationship between those related parts of a collection implementation. Similarly, you can add associated types to an already-published protocol, with a default binding, and not break the API or ABI of existing clients of the protocol.
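For readers coming to this later: the `Collection where .Element == Int` ability described here has since landed in the language as primary associated types (SE-0346, Swift 5.7), which bind only the primary associated type and leave Index, SubSequence and the other accessory associated types abstract:

```swift
// `some Collection<Int>` constrains only Element; the collection's
// other associated types stay unbound, exactly as described above.
func total(_ values: some Collection<Int>) -> Int {
    values.reduce(0, +)
}

print(total([1, 2, 3]))    // prints "6"
print(total(Set([4, 5])))  // prints "9"
```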


That sounds very promising!

That doesn't really clarify much in my mind. Interfaces in C# don't seem to me any more like inheritance than protocols in Swift.

Sorry not to be helpful. The main point I was trying to make is that in C# interfaces are reference types like classes (see here) whereas I have no clear mental model for how Swift implements protocols (are they like classes?). And C# ensures all intuition about reference types carries over so that "interface conformance" functions like a limited form of multiple inheritance. And when an interface is used as a type it functions just like a base class.


As I have been working on an implementation of the syntax func foo() -> <T> T, one particular implication of that syntax has come to bother me quite a lot, and I have not seen it discussed in this thread. The syntax func foo() -> <T> T means exists a. (() -> a), not () -> (exists a. a) as it might lead one to believe.

If you don't know Haskell that's okay, because I'll give an explanation in plain English after the following code, but the difference between the former and the latter can be illustrated by this Haskell code (credit to my friend William Brandon, with whom I was discussing this thread, for the example):

{-# LANGUAGE GADTs #-}

data MPair where
    MPair :: forall a. (Show a, Monoid a) => a -> a -> MPair

foo :: Bool -> MPair
foo True  = MPair [10] [20 :: Int]  -- lists, since plain Integer is not a Monoid
foo False = MPair "hello" " world"

main :: IO ()
main = do
    -- GHC cannot pattern-match a GADT constructor in a `let`,
    -- so the existentials are opened with `case` instead:
    case foo True of
        MPair x1 y1 -> print (mconcat [x1, y1])
    case foo False of
        MPair x2 y2 -> print (mconcat [x2, y2])

The first thing to note is that the type of the output of foo depends on the value of the input. This is possible with Bool -> (exists a. (a, a)) but not exists a. (Bool -> (a, a)). So what is the difference between Bool -> (exists a. (a, a)) and protocol- or class-based subtyping? Well, if we tried to use subtyping to achieve the same effect, we would end up with something like Bool -> (exists a. a, exists b. b), i.e. we would not have the same type constraint on the two tuple elements and could therefore not call mconcat.

Because of the above, I have a new syntax proposal: named opaque return types would be written like func foo<some T>() -> T where T.Elem == Int. If we mix opaque return types and generic parameters, we end up with something like func foo<T, some U>(_ t: T) -> U.

For parity with opaque return types, func foo<T>() could be sugar for func foo<any T>(). In other words, I would like to use any as a keyword, but for an entirely different purpose than discussed in this thread. As an alternative to what any was used for in the above discussion, I would propose dyn (like Rust) or dynamic if Swift prefers whole words to abbreviations. If we wanted a symmetric syntax to func foo() -> some P, it would turn into func foo(_ t: any T) and not func foo(_ t: some T) as discussed above.


I'm not sure I'm understanding your discussion about Haskell correctly, but I agree with this from another angle. My concern is about the use of some for generics, and the discussion previously held here.

I think it makes much more sense to use any for generics, some for reverse generics, and use other keyword for existential types.

I have an idea that I quite like so far but I could be convinced otherwise - I would love to hear what people think about it. I know that the idea is inspired by many ideas of others that I've read here on the forum, but I don't remember seeing exactly this as I'm proposing it. However, it is also possible that without knowing it I'm proudly presenting someone else's idea as if it were my own.

The idea:

What if we expand the use of the generic <T: Constraint> syntax to be usable in every (or maybe almost every) situation where a normal type name can be used? The syntax would have the same meaning in the new context as it does in its current usage, namely that the type in question will be chosen by the caller (subject to certain constraints).

Simplest example:

// Current syntax
func discard <Value> (_ value: Value)

// New syntax
func discard (_ value: <Value>)

The Value type is introduced at the same time as being used in the type signature.

The placeholder type names that are introduced in this way are accessible in the whole function signature and within the body of the function just like with the current generic syntax:

// Current syntax
func first <C: Collection> (of collection: C) -> C.Element

// New syntax
func first (of collection: <C: Collection>) -> C.Element

Any type names wrapped in angle brackets must be unique within the scope. This, for example, is an error:

func assign (_ newValue: <Value>, to destination: inout <Value>) // Error - invalid redeclaration of `Value`

Exactly one usage of Value must be wrapped in angle brackets, and everywhere else it is referenced by name like any other type. Generic constraints can be applied either within the angle brackets or by way of a where clause.

If this type declaration construct appears in the return type that does not mean that it is a reverse generic. It is still a regular generic type, in the sense that the caller chooses the return type.

All of these signatures are equivalent:

// Current syntax
func echo <Value> (_ value: Value) -> Value

// New syntax
func echo (_ value: Value) -> <Value>
func echo (_ value: <Value>) -> Value

The order in which the types are declared within the function signature doesn't matter, in the sense that the declared types can be referenced in earlier parameters:

// Old syntax
func feed <Recipient: Eater> (_ food: Recipient.Food, to recipient: Recipient) -> Recipient.FormOfThanks

// New syntax
func feed (_ food: Recipient.Food, to recipient: <Recipient: Eater>) -> Recipient.FormOfThanks

I find the reduction of angle-bracket-blindness in the second relative to the first fairly significant.

It seems reasonable to me to allow this syntax to be nested in a type expression:

// Old syntax
func dropLatterHalf <T> (of array: [T]) -> [T]

// New syntax
func dropLatterHalf (of array: [<T>]) -> [T]
func dropLatterHalf (of array: [T]) -> [<T>]

Given that <T> means a type that will be chosen by the caller, how do we interpret this?:

let foo: <T> = 7

This is effectively the same as this:

typealias T = Int
let foo = 7

in the sense that after using <T> as the type of foo we can then reference T for the rest of the scope:

let foo: <T> = 7
let maximumInteger = T.max // This is `Int.max`

(I can't quite put my finger on it at the moment, but I have a feeling that something about this use-case could prove extremely useful for writing, and especially maintaining, unit tests).

If there is a constraint included in the type declaration then it is enforced at compilation time as always:

let a: <T: Numeric> = 1.4 // Ok
let b: <T: Numeric> = "string" // Error

let c: <T: Numeric>
switch something {
case .oneThing: c = 1.2
case .anotherThing: c = 1.9 // Ok - both are `Double`
}

let d: <T: Numeric>
switch something {
case .oneThing: d = 1.5
case .anotherThing: d = Int(7) // Error: mismatched types
}

This would allow computed properties to have generic return types:

var anyKindOfSevenYouWant: <T: ExpressibleByIntegerLiteral> {
    .init(integerLiteral: 7)
}

This syntax would naturally allow us to unwrap existentials. For example:

let existential: any Equatable = ...
let otherExistential: any Equatable = ...
let value: <T: Equatable> = existential
if let otherValue = otherExistential as? T {
    if value == otherValue {
        // Do something
    }
}
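For comparison, a version of this unwrap that type-checks in current Swift (5.7+), using a generic helper in place of the proposed <T: Equatable> binding (the helper name is mine):

```swift
// Open `lhs` generically, then conditionally cast `rhs` to the
// same dynamic type T before comparing.
func isEqual<T: Equatable>(_ lhs: T, _ rhs: any Equatable) -> Bool {
    guard let rhs = rhs as? T else { return false }
    return lhs == rhs
}

let existentialSeven: any Equatable = 7
let otherSeven: any Equatable = 7
// The first argument is implicitly opened, binding T to Int:
print(isEqual(existentialSeven, otherSeven)) // prints "true"
```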

I'm thinking where clauses would be allowed on any declaration that contains a type placeholder declaration:

let existential: any Equatable = ...
let value: <T> = existential where T: Equatable

I suppose that in many cases the generic constraint on the type declaration can be implicit:

let existential: any Equatable = ...
let value: <T> = existential // T is known to conform to `Equatable`

Here's another thought (and this one's a little bit out there) - could using one of these within the type declaration of a stored property of a type be interpreted as a new generic parameter of the type?

struct Queue {

    private(set) var elements: [<Element>]
}

would be equal to:

struct Queue <Element> {

    private(set) var elements: [Element]
}

it could also be done like this:

struct Queue {

    private var _privateDictBecauseWhoKnowsWhy: [Int: <Element>]

    var elements: [Element] { ... }
}

Either way, the Queue type would be usable as a normal generic type (e.g., Queue<Int>).

The result type could then be defined:

enum Result {
    case success (<Success>)
    case failure (<Failure: Error>)
}

I suppose the proper order of generic type parameters for a type could be determined simply by the order in which they appear in the type declaration.
This has the order A then B:

struct Foo {
    var a: <A>
    var b: <B: Collection>
}

let _: Foo<Int, Array<Bool>> // Ok
let _: Foo<Array<Bool>, Int> // Error, the Collection must come second

Lastly, perhaps this would also be the right syntax for extending any type (if that's actually a good idea in the first place):

extension <T> {
    func somethingGenericallyUseful () -> Self
}