Improving the UI of generics

I thought I would be bumping an old thread, but apparently this one is still (kind of) alive!

I didn't get a chance to comment while this was fresh, so I'll leave my thoughts now. While the post is factually/technically accurate, of course, I feel that it's a bit too dismissive of existentials and almost implies that they are useless or a mistake, or that they're not in the language's future plans. I very much hope that isn't true, and that improving existentials is still on the roadmap somewhere.

I think it's important to argue the case for existentials. Sometimes the post (document?) draws a sharp, fundamental line between existentials and generics, clearly acknowledging that they are very different things meant for different purposes; at other times it compares them directly, as though they were interchangeable, and unsurprisingly finds existentials coming up short.

Indeed.

This isn't a brilliant example, IMO - as the rest of the post explains, existentials and generics are entirely different things. I wouldn't say that writing the function this way "loses" type information - it's doing something else entirely.

Maybe the difference isn't obvious enough in the syntax, or perhaps this is the first thing users would try to write and it wouldn't have the behaviour they expect. That's a notation question, and I wouldn't presume to know how others learn to code.

That's a bit of a loaded statement. It's true that existentials can't provide the same type-level guarantees as generic parameters, but that's because that's not what they do. As you said, they are value-level constraints/abstractions. That is the critical thing that makes existentials so useful in the first place. You could equally say that generics won't ever quite reach the flexibility of existentials.

Neither has inherently more "power" than the other.
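To make the distinction concrete, here's a minimal sketch of my own (the `Shape` hierarchy and both `totalArea` helpers are hypothetical, not from the post) showing the same operation written at the value level and at the type level:

```swift
protocol Shape { var area: Double { get } }
struct Circle: Shape {
  var radius: Double
  var area: Double { radius * radius * .pi }
}
struct Square: Shape {
  var side: Double
  var area: Double { side * side }
}

// Value-level abstraction: each element is a box that may hold any
// conforming type, so the array can be heterogeneous.
func totalAreaExistential(_ shapes: [Shape]) -> Double {
  shapes.reduce(0) { $0 + $1.area }
}

// Type-level abstraction: S is fixed once for the whole call, so the
// compiler guarantees every element is the same concrete Shape.
func totalAreaGeneric<S: Shape>(_ shapes: [S]) -> Double {
  shapes.reduce(0) { $0 + $1.area }
}

let mixed: [Shape] = [Circle(radius: 1), Square(side: 2)]
let area = totalAreaExistential(mixed) // fine: heterogeneous input
// totalAreaGeneric(mixed) would not compile: no single S fits both elements.
```

The existential version accepts input the generic version must reject, and the generic version makes guarantees the existential version can't - which is exactly why comparing them on "power" is the wrong framing.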

I really dislike this idea. It's pitched as a solution to accessing associated types from existentials, but I think it is entirely the wrong solution to that problem.

The Collection indexing example proves it - all it does is force-cast. If you consider the various ways this could be used, you'll discover they all amount to force-downcasting. It's no different to saying:

extension Collection {
  subscript(idx: Any) -> Any { self[idx as! Index] }
}

... which you could do today. But I think we can all agree that it's awful.

So why not hoist the casting up a level? Why not have the caller guarantee that the index really is of type (dynamic type of existential 'c').Index? That could be done via conditional casting, or by tracking the provenance of returned values somehow. Then you could call the original Collection method directly and there would be no need for any of this "existential self-conformance" malarkey.
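A version of that conditional-casting approach can even be sketched in today's Swift: inside a protocol extension, Index is already bound to the dynamic type's real index type, so the cast can be checked rather than forced. (The `safe` subscript below is a hypothetical helper of my own, not something from the post.)

```swift
extension Collection {
  // Conditionally cast the untyped index to this collection's actual
  // Index type; return nil instead of trapping on a mismatch.
  subscript(safe untypedIndex: Any) -> Element? {
    guard let idx = untypedIndex as? Index else { return nil }
    guard indices.contains(idx) else { return nil }
    return self[idx]
  }
}

let letters = ["a", "b", "c"]
letters[safe: 1]       // Optional("b")
letters[safe: "nope"]  // nil: String is not Array<String>.Index
```

The difference from the `as!` version above is that a mismatched index is surfaced to the caller as a recoverable nil rather than a crash - the cast has been hoisted into the caller's control flow.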

And as it just so happens, the very next point gives us a way to do that:

This is what we should focus on IMO, because it so precisely addresses the issue. If we had a way to talk about the specific type inside an existential, issues with associated types and uses of Self pretty much melt away. This is a big hole in the type-system anyway: while you can box a value of any type (including a generic type) in an existential box, and transfer it between different boxes (sometimes), you can't actually, truly un-box the existential unless you know the specific type it contains (which defeats much of the purpose of using existentials in the first place).
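Protocol extensions are the closest tool we have today: inside one, Self names the concrete dynamic type, which is enough to express the conditional-downcast pattern against a single opened value. A commonly-used sketch (my own wording, not from the post):

```swift
extension Equatable {
  // Inside the extension, Self is bound to the concrete dynamic type,
  // so we can conditionally unbox `other` to it before comparing.
  func isEqual(to other: Any) -> Bool {
    guard let other = other as? Self else { return false }
    return self == other
  }
}

1.isEqual(to: 1)    // true
1.isEqual(to: "1")  // false: the dynamic types differ
```

What the proposed `<X: Collection>` syntax would add is the ability to give that opened type a name outside a protocol extension, in ordinary local scope.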

I still feel that the post could be kinder towards this approach, though. There are multiple possible interpretations for what "computations derived from a single existential value" could mean:

  1. Does it mean this approach wouldn't scale to multiple values?

    That's not true - we could support conditionally-downcasting other values to type X:

    let <X: Collection> openedX = x // X is now bound to the dynamic type of x
    var start = openedX.startIndex // type: X.Index
    if let openedOther = other as? X {
      // 'openedOther' is also of type X
      start = openedOther.startIndex // type-safe.
    }
    

    Or unboxing them to their own types, with constraints based on X:

    var objects: Collection = ...
    var openedObjects: <X: Collection> = objects
    
    var destination: Collection = ...
    if var rrc = destination as? <R> where R: RangeReplaceableCollection, R.Element == X.Element {
      rrc.append(contentsOf: openedObjects)
      destination = rrc
    }
    
  2. Does it mean that it wouldn't support writing func foo<T>(a: T, b: T) -> [T] (i.e. binding multiple parameters to the same type)?

    Because that seems obvious. Of course a value-level abstraction is not the right thing for expressing constraints across values. That's not what it's for. Just like opaque types have difficulty expressing constraints across different functions. You need a lexically-higher scope to define a single thing that the various abstractions can reference in their own constraints.
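For completeness, here's the contrast as a sketch (both helpers are hypothetical, my own): only the generic signature can promise the compiler that two arguments share one concrete type.

```swift
// Type-level: T is fixed once for the whole call, so a and b are
// guaranteed to be the same concrete type and `==` is well-typed.
func matches<T: Equatable>(_ a: T, _ b: T) -> Bool { a == b }

// Value-level: each box carries its own dynamic type, with no static
// link between them; we can only probe specific types at runtime.
func matchesBoxes(_ a: Any, _ b: Any) -> Bool {
  if let a = a as? String, let b = b as? String { return a == b }
  if let a = a as? Int, let b = b as? Int { return a == b }
  return false
}

matches("x", "x")     // true
// matches("x", 1)    // rejected at compile time: no single T fits
matchesBoxes("x", 1)  // false, but only discovered at runtime
```

That's not a defect of existentials - it's the lexically-higher scope (the generic parameter list) doing exactly the job described above.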

What's more - this idea of introducing a local type that we use for unboxing would be great for code that doesn't even use existentials, too. For example, it could allow us to up/downcast protocols with associated types (e.g. casting from Collection -> RandomAccessCollection).

extension Collection {
  func myAlgorithm() { print("Collection default") }
}
extension RandomAccessCollection {
  func myAlgorithm() { print("RAC default") }
}

func doSomething<C: Collection>(_ objects: C) {
  if let rac_objects = objects as? <R: RandomAccessCollection> {
    rac_objects.myAlgorithm() // "RAC default"
  } else {
    objects.myAlgorithm() // "Collection default"
  }
}

(with the compiler inferring that R.Element == C.Element and R.Index == C.Index, as it's a downcast)


Anyway, those are my thoughts. I hope existentials haven't been forgotten about - there is clearly some design work to do, but I don't see anything fundamentally flawed.
