Opaque result types

Do I understand this correctly? Basically you want a theoretical existential type with a where clause, but without the opaque keyword, to represent an opaque type? Then, when existential support is improved in the compiler, the same type (with a where clause and no opaque keyword) would become a true constrained existential that can hold any conforming type instead of always the same concrete type.

typealias StringCollection = Collection where Element == String 
typealias OpaqueStringCollection = opaque Collection where Element == String // or just `opaque StringCollection`

In Swift 5 both would behave the same, but in Swift 6 the limitations that make the existential behave like an opaque type would be removed.

I'm not sure this will work out well. Are existentials supposed to work with generics the way the proposed opaque types do? If not, and we allow the above in Swift 5, there is a potential risk that people will start using existentials in generics as opaque types, which would make it impossible to lift that restriction in Swift 6. Please correct me if I'm wrong; I'm no expert on this topic.

Edit: I can answer my own question. No, existentials won't work with generics the way opaque types do.

Here is the snippet from above that proves it.

var c = makeMeACollection(Int.self)
c.append(17)         // okay: it's a RangeReplaceableCollection with Element == Int
c[c.startIndex] = 42 // okay: it's a MutableCollection with Element == Int
print(c.reversed())  // okay: all Collection/Sequence operations are available

func foo<C: Collection>(_ : C) { }
foo(c)               // okay: unlike existentials, opaque types work with generics

This is exactly the case. Based on my limited understanding of the subject, of course, and the limited time I've spent on a topic I discovered only an hour ago. Deep apologies to compiler experts if I'm totally out of bounds.

I'm not sure if this will work out well.

Me neither. I just feel that opaque types are a fantastic tool, but I'm wondering if it is a good idea to throw them at all users if opaque T is a subtype of T, and we can use covariance (not at the ABI level, ok, but at least at the API level).

EDIT: again some apologies for using the "subtype" and "covariance" words, maybe in a wrong way, because I'm in the learning phase of those powerful concepts.

Well then I get your point. ;) As I said above, this topic is really interesting to explore no matter how much expertise each one of us has in this area.

All right. Thanks for spotting where the subtyping relation breaks in the current state of the pitch :-)

Cool, this is exactly what I had in mind. How much additional implementation work would be involved in adding something like this? It would lift what could possibly end up being a frustrating limitation.

@Douglas_Gregor This is an exciting proposal! I'm happy to see more work being done to address leaking of implementation details.

You allude to it in the Source compatibility section, but I hope that the Swift team will seriously consider allowing source-breaking changes that might fall out of this proposal in order to stabilize APIs in the standard library or synthesized code.

One example that immediately comes to mind is CaseIterable:

protocol CaseIterable {
  associatedtype AllCases: Collection where Self.AllCases.Element == Self
  static var allCases: AllCases { get }
}

When the compiler synthesizes the implementation of allCases in a conforming type, today it must synthesize a signature with a concrete type (at the current time, [Self]). The major drawback here is that if someone ever opts in to the compiler-synthesized implementation of allCases, they can never replace it with a different type later: if a caller refers to that array type specifically, the change is potentially source-breaking.
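To make that concrete, here's a small sketch (the enum is hypothetical) of how a caller can come to depend on the synthesized [Self] type:

```swift
// Hypothetical enum; the compiler synthesizes `allCases` as `[Direction]`.
enum Direction: CaseIterable {
    case north, south, east, west
}

// A caller is free to name the concrete array type. If a later version of
// the library changed AllCases to some other Collection, this annotation
// would stop compiling, a source-breaking change.
let cases: [Direction] = Direction.allCases
print(cases.count) // 4
```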

If we update the compiler to synthesize this instead:

enum SomeEnum: CaseIterable {
  static var allCases: opaque Collection where _.Element == Self { ... }
}

Then that will be a source-breaking change in the short-term, but it eliminates a major restriction in the ability of the type author to improve it later on.

Since there are probably many places in the standard library that would benefit from this, I hope that we won't prevent those changes from being made for the long-term health of those APIs.


On the surface, opaque types are quite similar to existential types: in each case, the specific concrete type is unknown to the static type system, and can be manipulated only through the stated capabilities (e.g., protocol and superclass constraints). The primary difference is that the concrete type behind an opaque type is constant at run-time, while an existential's type can change.

My initial feedback is that the keyword opaque does not really convey the "constant" nature of an opaque type. As that is a key difference between opaque types and existentials it would be better if syntax could be found that communicates that more clearly.
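For what it's worth, the run-time difference from the quoted paragraph is easy to see with existentials that compile today (the types below are made up for illustration):

```swift
protocol Animal { var sound: String { get } }
struct Dog: Animal { let sound = "woof" }
struct Cat: Animal { let sound = "meow" }

// An existential box can hold different concrete types over its lifetime:
var pet: Animal = Dog()
print(type(of: pet)) // Dog
pet = Cat()          // allowed: the same variable now holds a Cat
print(type(of: pet)) // Cat

// An opaque result type, by contrast, always stands for one fixed
// (hidden) concrete type chosen by the function's implementation.
```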


The storage can be writable. The underlying concrete type will still be determined by the getter's return statements, and will be used as the type of the value provided to the setter. I added some examples to the proposal here.

Yes, see my reply to Matthew Johnson.

Fixed in the document, thanks!

No, this is called out in restrictions on opaque result types.


They won't use existential boxes because the type metadata and associated conformances are fixed (not dynamic), and reachable via accessor functions. The opaque result type will be represented by an ArchetypeType in the type system. Type metadata for the ArchetypeType can be retrieved by calling an accessor; similarly for any protocol conformance requirement listed in the opaque result type (e.g., there will be an accessor to call to get the conformance of the ArchetypeType to the protocol P for opaque P).

When the opaque result type is non-resilient, or we are in the same resilience domain, we don't need to call the accessors to get the type metadata or protocol conformances, because we know the underlying concrete type.


We could generalize this rule to be the common super type of all of the return expressions, but doing so could very easily put us in a place where we need to type-check all of the expressions together. For example, you mentioned literal types:

func foo() -> opaque Numeric {
  if Bool.random() {
    return UInt(5)
  }
  return 1 + 2
}

As written, the proposal would reject this code because UInt != Int. If we took a "common type" rule (here it's not even a super type), we would type-check the second return statement as producing an Int because that's the only consistent solution.
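A quick illustration of the literal-typing subtlety (plain Swift, no opaque types involved): an integer literal has no fixed type of its own, so its type depends on surrounding context, which is exactly why a "common type" rule would force the return expressions to be type-checked together.

```swift
// With no other context, integer literals default to Int:
let a = 1 + 2
print(type(of: a)) // Int

// Contextual type information changes what the same literals mean:
let b: UInt = 1 + 2
print(type(of: b)) // UInt
```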

I'm inclined to keep the simpler-to-implement rule in place, and then evaluate the "common super type" rule once the other pieces are in place and we have some usage experience.

It falls out of the model and I see no reason to restrict it.



Yes, that's a reasonable way to think about this feature.

Huh, interesting! My main complaint with this approach is that it always forces you to write out the full type name, which can get ugly. Here's a silly little example:

func foo<C: Collection>(_ c: C) -> opaque Collection where _.Element == String {
  return c.lazy.map { String(describing: $0) }.filter { Bool.random() }
}

The opaque result type lets me avoid having to write out that ugly type. I should add this motivation to the proposal!
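For reference, here is roughly what that "ugly type" looks like when printed (the exact spelling of the lazy wrapper types varies by Swift version; this sketch uses a deterministic filter so it runs):

```swift
let c = [1, 2, 3]
let transformed = c.lazy.map { String(describing: $0) }.filter { !$0.isEmpty }

// Prints a deeply nested lazy wrapper type along the lines of
// LazyFilterSequence<LazyMapSequence<Array<Int>, String>>, which is
// exactly the name an opaque result type would let the author hide.
print(type(of: transformed))
```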

I also don't like the idea that I have to come up with a name for an opaque type, which is often going to be an UpperCamelCased version of the oneFunctionThatReturnsThatValue. I think it causes a different kind of API surface-area expansion, and makes me wonder whether it's really any better than defining a public, resilient struct that wraps the returned type.

From the design perspective, I think opaque result types is a simpler feature. We don't have to deal with the type-identity issues that plague the design of generalized existentials, e.g., the problem of how one can refer to the associated types of a generalized existential, "open" a generalized existential to give a name to the dynamic type it holds, etc.

I also expect that the demand for generalized existentials will be reduced by opaque result types, because opaque result types subsume the use cases for generalized existentials that involve hiding the result type of the operation. And they do so in a manner that works far better with the generics system, because generalized existentials still don't address the issue that an existential doesn't conform to its own protocol.
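The "existential doesn't conform to its own protocol" issue mentioned above, shown with a made-up protocol (this is Swift 5 behavior at the time of this discussion):

```swift
protocol Named { var name: String { get } }
struct User: Named { let name = "Ada" }

func describe<T: Named>(_ value: T) -> String { value.name }

// Concrete types satisfy the generic constraint:
print(describe(User())) // Ada

// The existential does not; this is the restriction that opaque result
// types sidestep, because they always stand for one concrete type:
// let anyNamed: Named = User()
// describe(anyNamed) // error: protocol type 'Named' cannot conform to 'Named'
```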

opaque constants are fine; there's an example now.

I think that should be an error. Opaque types are a stand-in for a concrete-but-unknown type; composing them should mean that you're composing the concrete types, which doesn't make sense.


Well, in that particular example they're not finalized, since there is no default concrete type for any of the opaque types, nor is there any return nearby (assuming P and Q are not value types or final classes). Isn't that reason enough to allow composing them, as with existentials in general?

Let's assume we have a type that looks like this:

struct MyStruct {
  var collection: opaque Collection where Element == String { ... }
}

How well will that play with key-paths?

// Is this valid?
let keyPath: KeyPath<MyStruct, opaque Collection where Element == String> = \MyStruct.collection

In the last situation I'd rather prefer a typealias solution to make life easier:

struct MyStruct {
  typealias OpaqueStringCollection = opaque Collection where Element == String
  var collection: OpaqueStringCollection { ... }
}

let keyPath: KeyPath<MyStruct, OpaqueStringCollection> = \MyStruct.collection

For the "opaque typealias" case, maybe opaque belongs in the decl modifiers? That avoids the "two equal signs with different meaning" issue. The grammar could be:

opaque typealias <Name> [generic constraints on opaque type] = [concrete type]


opaque typealias IntCollection: Collection
   where IntCollection.Element == Int = [Int]

Maybe this is a crazy idea of mine, but could this potentially remove the need for the opaque keyword in some places?

The grammar you pitched does not make sense to reuse for existentials, at least from my point of view. That makes it unambiguous for opaque types.

// The following 3 versions cannot exist in the same scope because of the name collision

// Version 1: Implicitly `opaque` with a default concrete type
typealias IntCollection : Collection where IntCollection.Element == Int = [Int]

// Version 2: Implicitly `opaque` meant for re-usage (has no concrete type)
typealias IntCollection : Collection where IntCollection.Element == Int

// Version 3: True existential
typealias IntCollection = Collection where IntCollection.Element == Int

What does it mean for a typealias to have a default concrete type in this case?

That is needed for the example above asked by @anandabits. We need a way to say that a specific set of opaque types is the same.

// Slightly modified
protocol Proto {
  associatedtype SomeCollection : Collection
  func someValue() -> SomeCollection
  func someOtherValue() -> SomeCollection
}

struct T : Proto {
  typealias SomeCollection : Collection where SomeCollection.Element == Int = [Int]
  func someValue() -> SomeCollection { ... }
  func someOtherValue() -> SomeCollection { ... }
}

// This would also be valid
let t = T()
var value = t.someValue()
value = t.someOtherValue()

I don't think it's a significant amount of implementation work on top of the rest of this feature. Since writing the above I'm a bit less excited about this direction... an opaque typealias seems like it doesn't provide all that much benefit over defining a wrapper struct, and you still don't get to avoid writing the type.


I think the core team would be willing to introduce some source incompatibilities in the short term for, e.g., allCases, or specific standard library APIs. My main concern about the standard library is the ABI: I don't know if this feature can be implemented and adopted quickly enough to make it in for the ABI stability guideline.

I haven't done the work of auditing the standard library to find all of the places where opaque result types would make sense.


The wrapper struct forces you to box and unbox values every time you cross an API boundary. That has a syntactic cost and could also have a runtime cost if the opaque type is used as part of a collection or function type.

opaque typealias Foo = Bar

func makeFoos() -> [Foo] { 
   return functionThatReturnsLargeArrayOfBars()
}

// vs 

struct Foo {
   private let bar: Bar
}

func makeFoos() -> [Foo] { 
   let bars = functionThatReturnsLargeArrayOfBars()
   // now we have to map
   return bars.map(Foo.init)
}

The runtime cost could possibly be avoided if we're careful to wrap Bar right away internally but then the cost of the wrapper struct approach increases quite a bit and the concern of type abstraction permeates the implementation of our library.

I guess I don't understand what an opaque typealias that is not "finalized" would be. Your earlier examples don't state the concrete type underlying the opaque typealias. Where is that type provided?