[Discussion] Easing the learning curve for introducing generic parameters

I'm not sure this comparison is all that convincing—the entire premise of this post is that there's a gulf in users' understanding between type- and value-level abstraction, and at the value level the concept of a function as a "black box" is taught in grade school. Not so for type-level abstraction.

I'm all for empirical evidence, but I think we should also be wary about introducing potentially confusing constructs. Source compatibility means it is exceedingly difficult to take back decisions which create new valid code, so I'd rather err on the side of caution, and wait for evidence that "some types in parameter and return positions in the same declaration" is a highly desired use case that is not sufficiently addressed by guiding users towards explicit generic parameters.

Rust's impl was made to be hybrid sugar for generics and reverse generics. I don't see anything wrong with that, since mere sugar doesn't have to be semantically consistent.

However, now you are arguing to use some for generics because some is natural. And in the last post I argued that some for generics is wrong, since it breaks closure/function semantics. What about this point? Do you think it's natural?

Since I found the reverse generic argument, I'm really confused about some in argument position for generics. Surely it won't happen in real code, but if you wanted to add sugar for reverse generic arguments, which keyword would you choose? Reverse generic arguments are what deserve the some spelling, aren't they?

The point I’m making is that it is consistent in semantics.

How would that work? A function that chooses a specific type for an argument and doesn’t tell you what it is? How would you then pass in a value of that type? That it “surely won’t happen” is your evidence that unifying the two concepts (generic constraints and opaque types) is semantically consistent. This goes back to my point earlier:

The empirical evidence isn't unobtainable. As @ensan-hcl points out, Rust has adopted a similar approach—to my knowledge, it's worked out without confusion, but admittedly I am no expert on Rust's evolution.


Exactly the same as this closure. It would exist once structural opaque result types are introduced.

This doesn’t answer the relevant question: How would that work? It doesn’t matter if you spell it as a closure or as a function: what does it mean to have an opaque argument? Is your user to guess what argument they have to supply? To what end?

Semantically, the spelling in my mind can only mean a generic argument.

We have not established the use of any for anything yet, as far as I know, although a different use for it has been discussed. And I'm not saying that any is the right keyword to use here (although I would be fine with it). I am saying that I do not think some is a sensible or intuitive choice here.

We have established the use of Any* since the earliest days of Swift, that much is incontrovertible.

I didn’t think the choice of some was the most excellent choice for opaque types either—precisely because I could see this neverending “is it some or is it any?” question popping up—but now that it has been chosen, I’m arguing that unifying the spelling of constraints on the two sides of the function arrow is the right thing to do—and that any beef with some applies equally to both. Moreover, it fits with intuition in the manner I have described above, which I won’t repeat ad nauseam.


We should not use the same keyword to mean two different things, especially in close proximity. That is not a complaint that can be applied equally to both, because the use and meaning of some in return position is established. Therefore we should avoid giving it a different meaning in argument position. (And yes, I will continue to insist that it is a different meaning.)

The replied-to comment quotes several examples. Contextual static requirements would be fully available on opaque types because their witnesses are accessible even if the underlying type is unknown. If structural opaque types are accepted with this meaning for some in parameter position, I agree we should not allow it on function declarations. This is a conflict that would have to be resolved between these two proposals.


…And I’m arguing that you’re incorrect in this, in that the two concepts have the same meaning, with the difference given by which side of the function arrow it’s on.

I agree with you in the notion that different things should be spelled differently: the argument here is that they are fundamentally the same—or rather more precisely, that they are sufficiently distinguished by position relative to the function arrow.


A reverse generic argument type is some specific type decided by the callee, and opaque to the caller. The user of the function/closure passes a value via protocol requirements: perhaps static members or literals. The motivation for using a reverse generic argument is to hide the exact type from the caller, and to ensure API consistency through development. (Almost the same as the motivation for a reverse generic result.)

I must admit reverse generic argument types won't be very useful in practice. My point is that, useful or not, reverse-generic-argument-like behavior already exists. If you don't believe it, just try this code in Swift 5.5. (This code idea is originally from @xAlien95.)

let value: some BinaryInteger = 42
let closure = value.isMultiple
// How do we call closure? Its type is (some BinaryInteger) -> Bool.
let intValue = 2
closure(intValue) // error
closure(2)        // true
closure(.zero)    // false

You can argue that this is not actual reverse generic argument, but at least such behavior is already in Swift.


Who chooses the type (caller or callee) is a fundamental difference.


I understand both sides here. I think of opaque types as having their underlying type inferred from the value provided, so it makes sense to me that using some in parameter position and in return position has the underlying type inferred from different places. I also understand that the spelling some P looks like the same type regardless of where you write it, and it has already been established to mean "reverse generic", which is why I mentioned this in the post:

I don't want this thread to turn into a syntax bikeshed of type parameter inference from parameter declarations - that can be done on a dedicated pitch for that feature. The purpose of this thread is to gather ideas for directions for making generic programming more approachable. It sounds to me like type parameter inference from parameter declarations is a direction that folks are interested in, and we can explore whether we need a different syntax later.

Thank you all for the feedback so far, it's definitely been noted and I'll make sure the points are thought out and addressed when a full proposal for this feature is written.


An interesting point re contextual static requirements. Note that the counterpart is a return value of type specified by the caller, which it is already possible to spell:

func f<T: Collection>() -> T { … }

(Typically we don’t make callers write as Foo and instead have them pass in the desired return type as an argument, but it is supported.)
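To make that concrete, a quick sketch (the function names here are my own; RangeReplaceableCollection is used only because it guarantees an init() requirement, which plain Collection does not):

```swift
// Caller-chosen return type: the callee is generic over T, and the caller
// decides what T is, either by coercion or by passing the type explicitly.
func makeEmpty<T: RangeReplaceableCollection>() -> T {
    T()
}

func makeEmpty<T: RangeReplaceableCollection>(of type: T.Type) -> T {
    T()
}

let xs = makeEmpty() as [Int]        // caller chooses T = [Int]
let s = makeEmpty(of: String.self)   // caller chooses T = String
```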

The use case presented above doesn’t argue against the overall point that the position of the function arrow is sufficient to denote who controls the choosing of the type. As noted earlier, generalizing the other way would allow us to write opaque return types such as:

func f() -> <T: Collection> where T.Element: Codable { … }

Both callee-chosen argument types and caller-chosen return types have much more narrow use cases than the other way around (that is, plain old generics and opaque types). I think they would be adequately served by this long-form notation, which we already have for caller-chosen return types and can be extended in the manner shown above to callee-chosen argument types if sufficient use cases arise.

With the intuitive reading of some that I’ve outlined in an earlier post, I’d still expect that to mean a generic constraint on the left side of a function arrow and an opaque type on the right side. To me, it’s no different than how covariance and contravariance work with function argument and return types.


…which, as I’ve argued, is amply denoted by the function arrow, in the same way that covariance versus contravariance are, not to mention who chooses the value.


—Not to beat a dead horse, but I wonder if I’m wrong in this:

If the pitched syntax generalizing some to generic constraints is adopted, then who chooses the conforming type here:

protocol P { }
func f(_: some P) -> some P { }

…exactly parallels who chooses the subtype here:

class C { }
func f(_: C) -> C { }

+1 for me on type parameter inference via some – it feels like a natural evolution to the generics system, and a path towards what I see as the holy grail of Swift generics UI:

-> some AsyncSequence<Int>

This would mean getting rid of many of the Any... type erasers such as AnyPublisher or swift-parsing's AnyParser, which were created to preserve abstraction across API / module boundaries and for general developer experience.
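As a sketch of what that buys (using Sequence rather than AsyncSequence to keep the example synchronous; this constrained-opaque spelling was only pitched at the time, and later shipped in Swift 5.7 as primary associated types):

```swift
// With a constrained opaque result, no hand-written eraser is needed:
// callers know the Element is Int, but the concrete sequence type stays hidden.
func evens(upTo limit: Int) -> some Sequence<Int> {
    stride(from: 0, to: limit, by: 2)
}

// The status quo this replaces: wrapping in an eraser type by hand.
func evensErased(upTo limit: Int) -> AnySequence<Int> {
    AnySequence(stride(from: 0, to: limit, by: 2))
}
```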

Unlike for many others, evidently, having a some P as a parameter and a some P as a return value be two completely different types feels very normal and instinctive to me – I've always seen functions as taking control of my "type context" and doing whatever they want with it. @xwu's code snippet above is very salient in this regard.

I feel less sure, however, about some generics for stored properties of a type. This may be misguided, but a type's generics have always felt to me intrinsically part of its identity; much less so a function's generics. That may be because an (algebraic) data type's type is a product of its properties' types, which gives them, for me, a much more important place in the type's definition.

Some details for those who are interested

Let's say we have a struct called Pair:

struct Pair<A, B> {
    let first: A
    let second: B
}

In type theory, the type of a struct is the product of the types of its properties, aka:

 Pair<A, B> = A * B 

That's why to me a type's generics feel inherent to the type itself, and deserve a place in the type's head declaration next to its name.

Warning - even more type theory...

Technically, a function's input and output types are as intrinsic to the function as generics are to a struct (which could bring down my whole argument about some generics in the input and output being different types).

Indeed, the "type" of a function in type theory is the output type to the power of its input type:

 (A) -> B = B^A 

Practically though, I mentally don't tie a function to its generics as much as I do for a struct.

Meanwhile, a (pure) function feels more like a computation on an input, spitting back out some output. Its generics matter mentally less to me. That's why some generics on parameter declarations feel very natural to me, but some generics on stored property declarations much less so.

So glad you are working on this area! My own wish would be to manifest the information flow and the workings of the type-checker with physical metaphors. That's more a question than an answer, though. If the program is a landscape, type-checking takes us on a journey through it. There can be a fork in the road, and we'll need bird's-eye views too, as well as chunking mechanisms, and ways of seeing the same thing from multiple perspectives at once.

(Much of this is not new to anyone, so apologies for stating the obvious.) The norm is code that does not type check, so the type system must encompass that, somehow, maybe even with probabilistic guesses.

Anyway, I would try to draw the picture in your head when you have to deal with some code that doesn't check out. Then externalize that picture onto the screen so that you can dive deeper into any part, and manipulate any thing you can see. Maybe even change focus so that you can change what is most salient at any given time.

One thing that would be great is if there wasn't such a brick wall between the existential and generic worlds. Of course we'd rather people just write the correct thing, but we could do a lot more to make common situations easier.

  1. A function like

    func doSomething(items: Collection)

    Should just be a generic function. There is literally no reason for value-level abstractions for a function's input arguments (their underlying types are fixed the moment they are passed in to a function call).

    That means it should be possible to access the associated types from items, including getting indexes and storing them in local variables, etc - just like I would in a generic function.

    It should also be possible to call generic functions constrained to <T: Collection> from within doSomething, passing our existential items as a parameter.

    func genericFn<T: Collection>(_: T) { ... }
    func doSomething(items: Collection) {
      genericFn(items) // Currently an error, but doesn't need to be.
    }
  2. Allow opening existentials

    When you do hit the bridge between value and type-level abstractions, it's very bumpy. Whilst we have a way to erase concrete/generic types as existentials (the compiler will even do it implicitly), going the other way isn't even possible.

    We've talked about this for years - the idea of opening an existential as a local generic parameter, so there will actually be something you can do when faced with a Collection does not conform to Collection error.

    If the compiler did it implicitly, as it does for erasing, point 1 would fall out naturally:

    func doSomething(items: Collection) {
      openExistential(items) { <T: Collection>(bound_items: T) in
        // ... use bound_items, with T bound to the dynamic type ...
      }
    }
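For what it's worth, something very close to this was later adopted (in Swift 5.7, via the implicitly-opened-existentials proposal, as I understand it): passing an existential to a generic parameter opens it automatically, with no explicit openExistential call needed:

```swift
func genericFn<T: Collection>(_ items: T) -> Int {
    items.count
}

let items: any Collection = [1, 2, 3]
// The existential is implicitly opened; T is bound to the dynamic type [Int].
let n = genericFn(items)
```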

inout arguments are more delicate, because an argument which is an inout Collection may indeed be reassigned with an underlying value of a different type. It's easy enough to spot - usage requires creating a variable explicitly typed to be an existential:

func doSomething(items: inout Collection) { ... }

var someCollection = "hello"
doSomething(items: &someCollection) // Error

var someCollection: Collection = "hello"
doSomething(items: &someCollection) // Now it's okay.

It might be conceivable to change this in a new language mode, so inout Collection also works like an anonymous generic type. With implicit existential opening, both examples above would work, but doSomething would fail to compile in that new mode if it actually changes the underlying type of items. In that case, it would have to write its parameter inout any Collection (or something), which I think is fair. It's quite rare to see code which uses inout existentials at all.

This would basically remove value-level abstractions for function parameters, save for inout parameters where it is actually meaningful. The difference between value- and type- level abstraction isn't actually relevant in these contexts, so why not just remove that difference? :grinning_face_with_smiling_eyes:

It may not be as intellectually pleasing as keeping a harsh wall between existentials and generics, but it could improve usability.


Being able to write AsyncSequence<String> (and other such things) will definitely be more than just useful, I have a feeling it will be critical as Swift grows. How does this interact with ABI?


Can a developer wishing ABI stability for an API in a framework change the type they are returning later on?

Does that mean there is a cost of an existential heap allocation for that container?

If those two line items are resolved as "they can change without breaking ABI" and "it is not a heap allocation" then this is a game changer. I feel that it will make a good swath of new API written leverage this, because it gives the flexibility of classes with the speed and safety of structures!
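To the best of my understanding of how opaque result types work (a sketch, not an authoritative answer to the two questions above): there is no existential box, because each function has a single fixed underlying type; and for libraries built with library evolution, that underlying type is an implementation detail the author can change in a later version without breaking ABI, since clients never name it. The contrast in spellings:

```swift
// Opaque: one fixed (but hidden) underlying type; no existential box.
// Under library evolution, the author may swap the underlying type later.
func opaqueValues() -> some Sequence<Int> {
    [1, 2, 3]
}

// Existential: a box with dynamic dispatch; values too large for the
// box's inline buffer are heap-allocated.
func erasedValues() -> any Sequence<Int> {
    [1, 2, 3]
}
```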
