If that's not a generic protocol, what else is? Can you point me to some reading so I can follow your argument?
Just recently we got this direction approved into the generics manifesto, which adds some more parameterization to protocols and would cover many (but not all) of the places where I would otherwise want a parameterized (generic) protocol. It does require some proxy types and some boilerplate, but at least it will solve a few issues in my code:
I wonder if Self<.Assoc == T> would be allowed as well at some point.
A "generic protocol" as Rust implements it, and many people argue for it, is a protocol with multiple independent conforming types, by contrast with associated types, which model a functional dependency between the Self
type and the associated types (meaning that for any conforming Self
type, you can infer there's one associated type). The most common examples of these I've seen are conversion-style protocols, where you want to be able to say that there's a relationship between arbitrary pairs of types, like (Int, Float)
, (Int, Double)
, etc. With only associated types, you wouldn't be able to express this without multiple conformances on Int
.
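For illustration, a hypothetical spelling along those lines might look like the sketch below. This is not valid Swift today, and ConvertibleTo is just a made-up name; the point is only that one Self type gets several independent conformances.

// Hypothetical, invalid-today syntax: a parameterized ("generic") protocol that a
// single type can conform to multiple times, once per Target.
protocol ConvertibleTo<Target> {
    func converted() -> Target
}

extension Int: ConvertibleTo<Float>  { func converted() -> Float  { Float(self) } }
extension Int: ConvertibleTo<Double> { func converted() -> Double { Double(self) } }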
In your example, it looks like existentials with associated type constraints really would be sufficient:
protocol P {
    associatedtype A
    associatedtype B
    init(a: A, b: B)
}

extension P {
    typealias Next<T> = any P<.A == T, .B == Self>

    func next<U>(value: U) -> Next<U> {
        .init(a: value, b: self)
    }
}
I have a couple of comments on this:
First, a minor one: I'm reading these keywords as prefix operators, so I would expect the last one to be meta any P, because it is a [metatype of]-[existential of]-protocol.
Second, a major one: while I like the idea of existentials abstracting over generic types, the idea of an associated existential seems fundamentally wrong to me. Associated types associate types with types. Existential types wrap non-type entities as types by abstraction.
Given
protocol P { associatedtype A }
struct S: P { typealias A = Int }
struct Z<T: P> { ... }
- Array<S> is a type
- any Array is a type
- Array is not, at least not a rank-1 type
- any P is a type (where P is a protocol)
- P is not
- S.A is a type
- P.A is not a type
- T is a type
- T.A is a type
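For concreteness, here is how today's compiler enforces a couple of these distinctions (a minimal sketch re-declaring P and S so it stands alone; the exact diagnostics may vary by Swift version):

protocol P { associatedtype A }
struct S: P { typealias A = Int }

let x: S.A = 1                  // OK: S.A is a type (Int)
// let y: P.A = 1               // error: associated type 'A' can only be used with a concrete type or generic parameter
func f<T: P>(_ value: T.A) {}   // OK: inside the generic context, T and T.A are types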
Also, for the same reason, the generic syntax Type<P> cannot be used to express P.Type. Currently all generic parameters are types, and P is not a type. You can have Type<any P>, but that's a P.Protocol.
You're right, this is a much better ordering. I had it backwards.
The idea here is that each type has an "existential" type implicitly associated with it, similar to how each type has a metatype associated with it via .Type. For existentials and non-generic types this would just be an identity association, but for bound generic types (e.g. Array<Int>) and protocol metatypes this would reference the related existential type. The first post I made in this sub-thread on this topic (which used an earlier version of the syntax) contains some details that weren't stated in the post you quoted.
I hope this helps to clarify what is intended by the term "associated existential".
So the association would be between Array<Int> and any Array? And between meta P and any P? I can imagine how the latter would be useful, but the first one really confuses me.
The association between Array<Int> and any Array can be used to make an encoding of higher-kinded types both more robust and more convenient. I'm not sure if there are any other use cases for it; that's the one where I stumbled across it.
In any case, this is a relatively minor aspect of the language changes discussed in the related sub-thread. Changes along the lines discussed there would be nice to have for many much more practical reasons.
With the any modifier, there's an opportunity to revise the current metatype syntax to be more obvious. P.Protocol, the type of P.self, would be (any P).Type, whereas the type of all T.self where T: P would be any P.Type, if we say that P.Type is a generic constraint for a metatype conforming to P.
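For reference, here is a minimal sketch of how the current spellings behave, which is what the proposed (any P).Type / any P.Type renaming would map onto (names here are illustrative):

protocol P {}
struct S: P {}

let protocolMetatype: P.Protocol = P.self   // the type of 'P.self' itself
let conformerMetatype: P.Type = S.self      // the existential metatype: any T.self where T: P
// Under the proposed spelling these would read '(any P).Type' and 'any P.Type' respectively.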
@Joe_Groff Do you see no chance to free up the .Type namespace in Swift?
Btw. @Nickolas_Pohilets here is the translation from Joe's notation to the other meta keyword I was using.
typealias Meta<T> = meta T
P.Protocol == (any P).Type == Meta<any P> == Meta<P> == meta P
P.Type == any P.Type == any Meta<P> == any meta P
Why is there any in the last line?
Right now there are two kinds of metatypes which are merged into one type, which makes it really hard to work with and which also explains why there is no pure Swift implementation of the type(of:) function.
Around the end of Swift 3 we were working on a proposal to try to push a metatype revamp through, back when you didn't need to provide an implementation as a proposal author.
Here is that document: swift-evolution/0126-refactor-metatypes.md at refactor_existential_metatypes · DevAndArtist/swift-evolution · GitHub
Back then we sliced .Type and .Protocol into Type<T> and AnyType<T>, where the latter is an existential-like type, but for metatypes.
If we now rename Type<T> to Meta<T> and exchange the Any prefix for the any keyword, we'll get the exact same types as above:
- AnyType<P> == any Meta<P> == any P.Type
- Type<P> == Meta<P> == (any P).Type
Using Joe's version of the syntax, type(of:) would probably look something like this:
func type<T>(of value: T) -> any T.Type
And a 'potential' subtype(of:named:) like so:
func subtype<T>(of type: T.Type, named name: String) -> (any T.Type)?
What advantage do you perceive in Type<P> or Meta<P> over meta P? In that syntax, wouldn't your any Meta<P> just be any meta P?
None, that is an old proposal, and I like your idea of the meta keyword a lot more. ;) The syntax is different, but the behavior remains exactly the same.
I thought I would be bumping an old thread, but apparently this one is still (kind of) alive!
I didn't get a chance to comment while this was fresh, so I'll leave my thoughts now. While the post is factually/technically accurate, of course, I feel that it's a bit too dismissive of existentials and almost implies that they are useless or mistakes, or that they're not in the language's future plans. I very much hope that isn't true, and that improving existentials is still on the roadmap somewhere.
I think it's important to argue the case for existentials. Sometimes the post (document?) draws a sharp, fundamental line between existentials and generics, and clearly acknowledges that they are very different things meant for different purposes; at other times it directly compares them, as though they were interchangeable, and unsurprisingly finds existentials coming up short.
Indeed.
This isn't a brilliant example, IMO - as the rest of the post explains, existentials and generics are entirely different things. I wouldn't say that writing the function this way "loses" type information - it's a different thing entirely.
Maybe the difference isn't obvious enough in the syntax, or perhaps this is the first thing users would try to write and it wouldn't have the behaviour they expect. That's a notation question, and I wouldn't presume to know how others learn to code.
That's a bit of a loaded statement. It's true that existentials can't provide the same type-level guarantees as generic parameters, but that's because that's not what they do. As you said, they are value-level constraints/abstractions. That is the critical thing that makes existentials so useful in the first place. You could equally say that generics won't ever quite reach the flexibility of existentials.
Neither has inherently more "power" than the other.
I really dislike this idea. It's pitched as a solution to accessing associated types from existentials, but I think it is entirely the wrong solution to that problem.
The Collection indexing example proves it - all it does is force-cast. If you consider the various ways this could be used, you'll discover they all amount to force-downcasting. It's no different to saying:
extension Collection {
    subscript(idx: Any) -> Any { self[idx as! Index] }
}
... which you could do today. But I think we can all agree that it's awful.
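To make the objection concrete, here is roughly what calling that Any-based subscript looks like (a sketch that assumes the extension above is in scope):

let numbers = [10, 20, 30]

_ = numbers[1 as Any]        // happens to work: Array<Int>.Index is Int
// _ = numbers["1" as Any]   // also compiles, but traps at runtime on the force-cast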
So why not hoist the casting up a level? Why not have the caller guarantee that the index really is of type (dynamic type of existential 'c').Index? That could be done via conditional casting, or by tracking the provenance of returned values somehow. Then you could call the original Collection method directly, and there would be no need for any of this "existential self-conformance" malarkey.
And as it just so happens, the very next point gives us a way to do that:
This is what we should focus on IMO, because it so precisely addresses the issue. If we had a way to talk about the specific type inside an existential, issues with associated types and uses of Self pretty much melt away. This is a big hole in the type system anyway: while you can box a value of any type (including a generic type) in an existential box, and transfer it between different boxes (sometimes), you can't actually, truly un-box the existential unless you know the specific type it contains (which defeats much of the purpose of using existentials in the first place).
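As a concrete illustration of that hole (a sketch using plain Any; the same applies to protocol existentials):

let boxed: Any = [1, 2, 3]

// Unboxing only works if you already know (or guess) the concrete type:
if let ints = boxed as? [Int] {
    print(ints.count)   // 3
}
// There is no way to say "give me the value back at whatever type it actually has".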
I still feel that the post could be kinder towards this approach, though. There are multiple possible interpretations for what "computations derived from a single existential value" could mean:
- Does it mean this approach wouldn't scale to multiple values? That's not true - we could support conditionally downcasting other values to type X:

      let <X: Collection> openedX = x    // X is now bound to the dynamic type of x
      var start = openedX.startIndex     // type: X.Index
      if let openedOther = other as? X { // 'openedOther' is also of type X
          start = openedOther.startIndex // type-safe.
      }
Or unboxing them to their own types, with constraints based on X:
      var objects: Collection = ...
      var openedObjects: <X: Collection> = objects
      var destination: Collection = ...
      if var rrc = destination as? <R> where R: RangeReplaceableCollection, R.Element == X.Element {
          rrc.append(contentsOf: openedObjects)
          destination = rrc
      }
- Does it mean that it wouldn't support writing func foo<T>(a: T, b: T) -> [T]? (i.e. binding multiple parameters to the same type) Because that seems obvious. Of course a value-level abstraction is not the right thing for expressing constraints across values. That's not what it's for. Just like opaque types have difficulty expressing constraints across different functions. You need a lexically-higher scope to define a single thing that the various abstractions can reference in their own constraints.
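A small sketch of that distinction, using a made-up Shape protocol:

protocol Shape {}
struct Circle: Shape {}
struct Square: Shape {}

// Type-level abstraction: both parameters are guaranteed to share one concrete type.
func pair<T: Shape>(_ a: T, _ b: T) -> [T] { [a, b] }

// Value-level abstraction: each parameter is independently "some conforming type".
func mixedPair(_ a: Shape, _ b: Shape) -> [Shape] { [a, b] }

// pair(Circle(), Square())            // error: no single T fits both arguments
let _ = mixedPair(Circle(), Square())  // fine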
What's more - this idea of introducing a local type that we use for unboxing would be great for code that doesn't even use existentials, too. For example, it could allow us to up/downcast protocols with associated types (e.g. casting from Collection -> RandomAccessCollection).
extension Collection {
    func myAlgorithm() { print("Collection default") }
}

extension RandomAccessCollection {
    func myAlgorithm() { print("RAC default") }
}

func doSomething<C: Collection>(_ objects: C) {
    if let rac_objects = objects as? <R: RandomAccessCollection> {
        rac_objects.myAlgorithm() // "RAC default"
    } else {
        objects.myAlgorithm() // "Collection default"
    }
}
(with the compiler inferring that R.Element == C.Element and R.Index == C.Index, as it's a downcast)
Anyway, those are my thoughts. I hope existentials haven't been forgotten about - there is clearly some design work to do, but I don't see anything fundamentally flawed.
No one's forgotten about existentials, don't worry. I don't think there's any contradiction in what I wrote and what you said—existentials do lose static information, and you'd have to use casts to recover it. We should certainly make it possible to write those casts, but we should also make it possible to express statically type-safe APIs that don't fundamentally rely on casting.
As discussed in this thread last year, using new syntax to refer to existentials as any TypeName and to generics as some TypeName would make these two features more equally represented. It could also help in understanding the differences and when to use which.
Absolutely - I even said "the post is factually/technically accurate". I just feel that in trying to make the case to the community about the value of opaque types (which it clearly succeeded in doing), it ends up reading a bit unfair towards existentials. That's just my impression, and I wanted to clarify whether existentials have a future in the language and to make the case that they should.
I'm very, very happy they haven't been forgotten about.
I've always been a massive fan of using the word "any" for existentials. I think it makes the whole model a lot simpler to teach, to learn, to understand and use.
There were some fears in the thread that SIMD's any(...) free function might make it impractical to use that word for something else, but I really hope we can find a way around it. I can't think of another word that's as accurate and concise as any.
Anything new on this?
Wouldn't the fact that any ProtocolName is only used in type position prevent a conflict between it and the any(...) function? Or are we worried about humans being confused by the reuse?
Yes, I'm working on an implementation of general opaque result types, i.e. func foo() -> <T> T where T.U == Int.
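For comparison, a minimal sketch of the opaque-result-type spelling that already shipped (the protocol and types here are made up); the named form above additionally lets the signature state the where clause, which the plain some spelling does not express:

protocol Q { associatedtype U }
struct Impl: Q { typealias U = Int }

// Shipping syntax: the caller sees only "some type conforming to Q".
func foo() -> some Q { Impl() }

// The pitched syntax, func foo() -> <T> T where T.U == Int, would additionally
// let callers rely on T.U being Int.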