SE-0244: Opaque Result Types (reopened)

Your complaint is valid. The tone of your first comment was not. We can debate whether a half-baked feature should be added, but it's not valid to claim that there's no path to fully baking the feature.

5 Likes

Then it should become a proposal again when it's not half-baked anymore, and not now. Then I won't be able to say that it's the same as before, because it won't be.

I see where you’re coming from here, and I also think it would be preferable to solve the naming issue. That said, I think this example handily demonstrates that the need will be less common than you expect.

This API seems to suggest that makeShape can represent any shape as a concrete type. If there is such a universal concrete shape type, an opaque return type is probably the wrong abstraction. makeShape should return the concrete type, possibly wrapped in a struct to hide internal details (a zero-cost abstraction as long as all proxying accessors and methods are inlineable).
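As a sketch of that shape of API (all type names here are hypothetical, not from the proposal): a concrete makeShape whose return type is a thin wrapper struct hiding the universal shape type.

```swift
import Foundation

protocol Shape {
    func area() -> Double
}

// Hypothetical universal concrete shape type.
struct RegularPolygon: Shape {
    var sides: Int
    var sideLength: Double
    func area() -> Double {
        // Standard regular-polygon area formula.
        let n = Double(sides)
        return n * sideLength * sideLength / (4 * tan(.pi / n))
    }
}

// Public-facing wrapper that hides the representation. In a real
// library the forwarding members would be @inlinable (and the stored
// property @usableFromInline) so the wrapper compiles away entirely.
struct AnyShapeBox: Shape {
    private var base: RegularPolygon
    init(_ base: RegularPolygon) { self.base = base }
    func area() -> Double { base.area() }
}

// Concrete return type instead of `some Shape`.
func makeShape() -> AnyShapeBox {
    AnyShapeBox(RegularPolygon(sides: 4, sideLength: 1))
}
```

The wrapper gives the library room to change the internal representation while still committing to a single nameable return type.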

On the other hand, let’s consider this alternative API:

func makeCircle() -> some Shape { ... }
func makeRegularPolygon(sides: Int) -> some Shape { ... }
func makeSquare() -> some Shape { return makeRegularPolygon(sides: 4) }
func makeHexagon() -> some Shape { return makeRegularPolygon(sides: 6) }

In this situation, you might argue that you “know” squares and hexagons are both instances of an underlying polygon type. However, it’s not clear that this information should be exposed to the client. If you’re using opaque types to express that circles and polygons are not (necessarily) the same, it’s consistent and logical to also express that squares and hexagons are not necessarily the same, even if they happen to be implemented the same today. One day you might want to implement squares as a kind of rectangle instead.
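Concretely (with hypothetical types), the opaque return type is exactly what makes that later change non-breaking:

```swift
protocol Shape {}
struct RegularPolygon: Shape { var sides: Int }
struct Rectangle: Shape { var width, height: Double }

// Originally: func makeSquare() -> some Shape { RegularPolygon(sides: 4) }
// Later the implementation switches to a rectangle; because the return
// type is opaque, clients never depended on the underlying type, so
// this is not a source-breaking change:
func makeSquare() -> some Shape {
    Rectangle(width: 1, height: 1)
}
```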

I suspect this ties into @Joe_Groff’s finding that this problem doesn’t arise much in Rust: in many cases, same-type-constraints on opaque returns are probably a code smell. That said, reasoning about appropriate use of opaque types is unpleasantly subtle, which brings me back to my concerns about how to teach this stuff.

1 Like

One possibility I see that doesn’t involve new syntax is to have the compiler “flatten” opaque types so that

func f() -> some P { ... }

and

func g() -> some P { return f() }

would both have a return type of “the opaque return type of f.” It seems like this would solve one of the main concerns motivating the desire for naming opaque types (in fact, I’m having trouble thinking of an example that wouldn’t be satisfied by this). I don’t know if this would cause other, undesirable consequences though.

EDIT: Should have thought about this a bit more before posting. A feature allowing simple wrapper functions as above really should expose in the API that g returns the same type as f, otherwise changing the body of g to return h() for some func h() -> some P could be a breaking change for clients, which would be extremely non-obvious particularly when the body of g is more complex.
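A concrete sketch of the hazard (hypothetical names): under the proposal each function gets its own fresh opaque type, so the change below is invisible in g's signature even though clients relying on type identity would break.

```swift
protocol P { var value: Int { get } }
struct S1: P { let value = 1 }
struct S2: P { let value = 2 }

func f() -> some P { S1() }
func h() -> some P { S2() }

// g's signature says nothing about whose return value it forwards, so
// swapping the body from f() to h() silently changes the underlying
// type with no visible signature change:
func g() -> some P { f() }
```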

What is your evaluation of the proposal?

Overall +1, definitely addresses a number of pain points already, and the future direction bits also seem like a good plan.

Is the problem being addressed significant enough to warrant a change to Swift?

Yes, I definitely think it will help in API design, esp. where one would be forced into unwieldy large generic types nowadays for the sake of preserving the types "all the way." Looking forward to having these at our disposal <3

Does this proposal fit well with the feel and direction of Swift?

Yes, the spelling and overall "feel" seem to fit Swift IMHO.

If you have used other languages or libraries with a similar feature, how do you feel that this proposal compares to those?

So I've had to spend quite some time hammering out the notion of what "opaque types" mean that I've grown used to from Scala 2 and 3 (Dotty). It's worth pointing out that the feature is a new addition over there in Scala-land (https://dotty.epfl.ch/docs/reference/other-new-features/opaques.html); however, the feature and the rationale for introducing it to the language are a bit different: there it is only the opaque type (alias) aspect, the reasoning being the replacement of extends AnyVal value types, which may not always "wrap" but sometimes might anyway. In comparison, the proposal here and its long-term goal are much wider in scope.

That said, I think all the naming / wording used in the proposal feels right, and does not really conflict with what I've known from Scala-land -- the realization that the scope is much wider here makes it quite clear.

How much effort did you put into your review? A glance, a quick reading, or an in-depth study?

I've been following the discussion thread(s) for a while and skimmed the proposals a bit before, and then had a very thorough session pondering "oh no, but what if..." over the writeup.

I could not see any holes in the reasoning so far, though I would perhaps wish for a better error message in the case of f4, which reports "protocol type P does not conform to P." I "get" why, but hopefully the actual error message could be a bit more informative about why that is an issue (that the value could be various Ps rather than one fixed one).
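For context, here is a sketch of the kind of code that trips that diagnostic (not necessarily the proposal's exact f4):

```swift
protocol P {}
struct S: P {}

// Returning a value of protocol (existential) type where an opaque
// type is expected is rejected: `some P` must stand for one fixed
// conforming type, but a value typed as `P` could hold various Ps.
//
// func f4() -> some P {
//     let x: P = S()
//     return x   // error: protocol type 'P' does not conform to 'P'
// }

// Returning the concrete value directly is fine:
func f4Fixed() -> some P { S() }
```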

It wouldn't be a good idea to do this implicitly, since we really don't want to have to type check the body of g() in order to determine what the function's own type is. With @michelf's proposed #returnType(of:) feature, it would be straightforward to ask for the same type explicitly if you want it:

func f() -> some P { ... }
func g() -> #returnType(of: f()) { return f() }

And I think that would also address @Vogel's concerns about being able to forward return types through wrapper functions. As @jayton noted, I still think it's a code smell if you're using opaque types and end up doing this a lot, since opaque types are intended to enable more fluent type-level composition where the individual types involved are not very significant. It's good nonetheless to have an escape hatch when you absolutely need it.

2 Likes

I agree that in some cases you don't want to offer a guarantee that makeSquare and makeHexagon will return the same type, but whether you should offer this guarantee is a decision you should make when writing makeSquare and makeHexagon, not when you write makeRegularPolygon.

In essence, the wrong person is in charge of making the decision. I don't think it's right for the author of an upstream function to decide that all downstream clients must absolutely create a new opaque type of their own if they in turn return the value. I actually expect that most of the time the author of the upstream function will have no such intention anyway; it'll just happen as a side effect.

3 Likes

I'm happy to see a clearer vision communicated of how this all fits together. Thanks for writing that up @Joe_Groff. However, I still think it would be best to first add reverse-generics syntax with named types before looking into anonymous return types.

Reverse generics will fit more naturally in the language as it is today, while opaque types create more challenges due to their anonymous nature (and people still wanting to be able to refer to them). Having reverse generics in the language first will give us more information down the line about how people actually use this feature, which we can then use for fleshing out opaque types and other improvements to the UI of generics.

Reverse generics has named types, but it still has to solve the question: How do we refer to the reverse generic type? Earlier I suggested to use FunctionName.TypeName, like for example:

f.T

This feels far more natural to me than something like #returnType(of:...), and I think this is the problem with starting with opaque types. We're trying to find solutions for problems that are actually created by the anonymity of the types. If a foundation of reverse generics would already exist today, the solutions we'd come up with for opaque types might be very different.

So, for me it's a -1.

8 Likes

+1
I read the proposal and the UI of Generics post. I am excited for the overall direction of this when combined with Reverse generics and I think that there is an interesting space where 'both' features are useful.

1 Like

As before, I generally support the careful thought of people who understand the problem far better than I do, and ultimately defer to their judgement with an uncertain +1. As before, I tend to like the some T syntax (especially in relation to any T). I find that terminology useful as I puzzle my way through this.

As before, I have the nagging feeling that something doesn’t quite sit right — and a second nagging feeling that it’s only my short lookahead on the whole generics adventure that gives me this feeling. I hope you don't mind me asking a bunch of muddled questions about where this is heading….


The big “Improving the UI of Generics” post was tremendously helpful. (Thank you, Joe!) This example in particular (paraphrased) was compelling:

func bar(x: Collection) {
  // not type-safe, since the existential Comparable might not match the existential Collection's Index type
  var start = x.startIndex
  // somebody could do this and change `start`'s dynamic type:
  // start = y.startIndex
  var firstValue = x[start] // error
}

That snapped into focus for me why existentials lead to pitfalls with associated types, and why the “same type” guarantee of opaque matters in practice for tasks that aren’t contrived or esoteric.

This example also raises many more questions! I wonder, for example, about this code (using the hypothetical any/some syntax), which if I understand correctly would not compile:

func nthOfEach(
  _ n: Int,
  from heterogeneousCollections: [any Collection]
) -> Any {
  return heterogeneousCollections.map { collection in  // collection is `any Collection`
    let nthIndex = collection.startIndex.advanced(by: n)

    // error: inferred static type of nthIndex is something like `any Collection.Index`,
    // so compiler can’t guarantee it indexes `collection` … correct?
    return collection[nthIndex]
  }
}

Would this then also not compile, for the same reasons?

  return heterogeneousCollections.map { collection in
    return collection[collection.startIndex.advanced(by: n)]
  }

…or would it be reasonable for the compiler to infer types something like this so that it did compile?

  return heterogeneousCollections.map { (collection: some Collection) in
    return collection[collection.startIndex.advanced(by: n)]
  }

Could the compiler then infer the type of nthIndex to make even the first example compile?

  return heterogeneousCollections.map { (collection: some Collection) in
    let nthIndex: some Collection.Index<…inferred constraints to match collection…> =
      collection.startIndex.advanced(by: n)
    return collection[nthIndex]
  }

Should it make that inference even if nthIndex were a var?


I ask these questions for two reasons. First, the focus just on opaque result types seems to be missing the big picture: still a dead end in the type system, but now just shifted back one step in the great chain of value passing. Despite the manifesto’s big picture, I’m nervous that this proposal is hill-climbing instead of global optimization.

Second, even with the big picture of the manifesto, my last two examples seem to fit into the intuitive spirit of opaque types but not the particular “reverse generics” model. They do seem like opaque some T types should fit: there is a particular collection type, it has a particular index type, we don’t know what those types are, but they are still stable and mutually consistent. But does the “reverse generics” model still fit? Consider:

  for collection in heterogeneousCollections {
    print(collection[collection.startIndex.advanced(by: n)])
  }

This is correct code, and it would be nice if it type checked without fuss or muss — but collection needs to take on a different some Collection type for each iteration of the loop. It would be intuitively reasonable for the language to do this — but I can’t imagine what it looks like under the hood for the compiler!

These examples require “path-dependent types” to work. There are some examples in the old enhanced existentials draft (although it does not use the term path-dependent types).

Also, fwiw Scala’s type system supports this feature. I don’t know Scala in depth so I’m not sure how closely it relates to what we might have in Swift but you might find it interesting to look at how Scala handles this.

This would require a map overload in a conditional extension constrained to where Element was an existential, and for which we could name the constraints applied to the existential so they could be used as constraints on the opaque transform parameter type. I’m not sure how we might express that or how useful it would be, but it’s an interesting direction to think about.

That's not an example that shows why opaque types would be needed. You really need only the most basic version of currently existing generics to achieve this.

func bar<X: Collection>(x: X) {
    // type-safe, since this is just typed as the Collection's Index type
    var start = x.startIndex
    // somebody could do this without changing `start`'s dynamic type, as long as y is typed as X:
    // start = y.startIndex
    var firstValue = x[start] // no error
}

Same thing for your nthOfEach:

func nthOfEach<HeterogeneousCollection: Collection>(
    _ n: Int,
    from heterogeneousCollections: [HeterogeneousCollection]
) -> Any {
    return heterogeneousCollections.map { collection in  // collection is `HeterogeneousCollection`
      let nthIndex = collection.startIndex.advanced(by: n)

      // no error: inferred static type of nthIndex is HeterogeneousCollection.Index,
      // so compiler can guarantee it indexes `collection` … correct!
      return collection[nthIndex]
    }
}

You misunderstood the example. The type of heterogeneousCollections is [any Collection], i.e. every element is a collection of a potentially different type. There is no single type substitution for HeterogeneousCollection that matches.

5 Likes

I have (belatedly) generated a new toolchain with the latest implementation of the feature, which also supports the ABI resilience aspects of the proposal, and uses the proposed some P syntax:

11 Likes

This would be my in-an-ideal-world choice for how existentials eventually work in Swift.

1 Like

Ah okay, I see, that's why you called it heterogeneous.
Then of course opaque types aren't just not needed in that example, they wouldn't even be useful.

The example I gave is already correct with the existing implementation of map; my intuition says a robust type system of the hypothetical future should be able to verify that without needing an additional map overload.

The question is not “how would you implement this,” but rather “can we make a type system that naturally supports this?”

Yes, this is more along the lines of what I was thinking! The blog post you linked to, and what other resources I was able to find, do make Scala’s path-dependent types sound similar to the passage you mention in the enhanced existentials manifesto.

However, it seems like Scala’s path-dependent types create a new type per value of the enclosing type, e.g. in the example from the post you linked, starTrek.Character is a type unique to a single Franchise instance that represents Star Trek. That’s more robust than Swift’s current type checking. Swift’s static type system still lets you mix and match, say, Set<String>.Indexes from different sets; it reports the mismatch as a runtime error. Path-dependent types might help plug that type system hole! But it’s another bridge beyond my wish, which is bringing existential types up to parity with Swift’s current concrete type regime.

Getting my example to compile with [any Collection] just as it currently does with e.g. [Set<String>] seems like it requires something much less robust than Scala’s path-dependent types — probably something more like what’s in that existentials manifesto.

Here is said example again, clarified a bit (and also index manipulation fixed):

let heterogeneousCollections: [any Collection] = [
  7...11,
  [1, 2, 3, 6, 11, 23, 47],
  ["red", "fish", "blue", "fish"] as Set
]
let n = 2

for collection in heterogeneousCollections {
  let nthIndex = collection.index(collection.startIndex, offsetBy: n)
  print(collection[nthIndex])
}

The intuition I’m trying to chase down here is that what SE-0244 does only for return types feels sort of like what we want in the example above — and the code above seems like the kind of situation the current proposal is naturally going to lead toward.

My steadfastly naive, user-level intuition about what some T means currently runs something like this: “This thing’s type is some specific subtype of T. We aren’t allowed to know what specific subtype that is, but it is one single, specific type. We can assume that specificity when there are Self or associated type constraints. The code will compile as if the concrete type were specified, except I can only use the members exposed by T.”

Following that naive intuition makes me think maybe I could make this explicit type annotation in the code above:

for collection: some Collection in heterogeneousCollections {

I realize the implementation of that looks nothing like opaque result types, but at a user level, this feels similar. And in the general spirit of Swift, I’m looking for a heuristically friendly model that allows a progressive disclosure of all the type complexity in the Pandora’s box of generics, so people can code in Swift without reading up on type theory.

Bringing it back to the topic at hand: I generally support the proposal, but have nagging reservations about whether we're heading in a direction that sets up a friendly heuristic. I think often of someone's criticism of Swift as a “crescendo of special cases stopping just short of generality” — unfair, but there's a kernel of truth in it. My +1 would go from tentative to comfortable if I had confidence that some T won't feel like yet another special case in the brighter future world of robust Swift generics.

map takes a transform of type (Element) -> Result. In your example, Element == any Collection. In thinking about this again, the transform you want to pass is actually of a higher-ranked type. some Collection stands in for a generic parameter, so using hypothetical syntax for higher-ranked types, your closure has type <C: Collection>(C) -> Result.

The way we make a type system that naturally supports this is to make this higher-ranked function type a subtype of (any Collection) -> Result so it can be used where (any Collection) -> Result is required. The language would probably create a thunk that opens the existential and passes it to the higher-ranked closure.
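A hand-written version of such a thunk can be sketched with ordinary generics, assuming syntax where existentials are spelled `any Collection` and are opened when passed to a generic function:

```swift
// Hand-written "thunk": opens the existential and forwards it to a
// generic body, which plays the role of the higher-ranked closure.
func openAndApply(_ c: any Collection, offset n: Int) -> Any {
    func body<C: Collection>(_ c: C) -> Any {
        // Inside here, Index and the collection are guaranteed to
        // agree, which is exactly what the existential couldn't promise.
        c[c.index(c.startIndex, offsetBy: n)]
    }
    return body(c)  // the existential is opened at this call
}

let heterogeneous: [any Collection] = [
    7...11,
    [1, 2, 3, 6, 11, 23, 47],
    ["red", "fish", "blue", "fish"] as Set
]

// `map` only ever sees `(any Collection) -> Any`; the thunk hides the
// generic machinery from it.
let picks = heterogeneous.map { openAndApply($0, offset: 2) }
```

The language-level feature would amount to generating this thunk automatically whenever a higher-ranked closure is passed where `(any Collection) -> Result` is expected.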

It isn't clear whether higher-ranked types will be added to Swift or not. If they are, it isn't clear whether this kind of subtype relationship would exist. But that's ok, I think path-dependent types address the use cases you have in mind very well.

It's actually quite similar to the future direction of opaque parameter types (i.e. using some in parameter positions). This would make the body of your loop generic over the Collection constraint, and again the compiler would need to open the existential for each element.

This feels like it introduces a lot of complexity over just providing the any Collection to the body of the loop and using path-dependent types. The only reason I can think of that it might be useful is that if you could say for inout collection: some Collection the language could guarantee you don't accidentally replace the element with a new element of a different type. I don't think that would justify a language feature like this.

Do path-dependent types feel like opening a Pandora's box? They allow you to work with the associated types of a specific instance of an existential. This gives you type-safe access to a lot of the functionality of the value even in the presence of unconstrained associated types (such as Index). The primary thing it doesn't do is provide a way to work across instances of the same existential type (such as copying elements from one any Collection to another).

What would it take to provide this confidence? Opaque result types as defined in the proposal are definitely a special case, but Improving the UI of Generics is a solid step towards that brighter future IMO. It rounds out the generics model for functions and provides a layer of sugar that is symmetric with the proposed re-skinning of existentials. These are natural duals so it makes perfect sense to do this.

The usage of existentials didn't receive too much attention in Improving the UI of Generics. That document primarily focused on how the types would be spelled. It sounds like maybe you are asking for a similar document that lays out a roadmap for how existentials with Self and associated type constraints will actually be used. Would something like this be useful in helping you get to a more comfortable +1? That sounds like a pretty reasonable request to me.

Proposal Accepted

The core team agrees that this proposal lays important groundwork that can be built on further in ways laid out in the author's Generics UI Improvements overview. This write-up helped clarify the context of the proposal both for reviewers and the core team itself.

Several reviewers felt that being able to name opaque types is an important feature for many use cases. This is clearly a useful feature and would be a good next step, but the core team thinks that the feature as proposed consists of a reasonable "minimum viable product" to land now, and then can be added to through subsequent proposals.

Thank you to everyone who participated in this review!

Ben Cohen
Review Manager

14 Likes