[Discussion] Easing the learning curve for introducing generic parameters

I never said we couldn't do both, I was only expanding on Karl's observation that existentials are still going to be the path of least resistance as long as they have the nicer syntax.


The only thing Swift really needs is better type inference, so people can focus on writing expressions and worry about types later. It seems silly that after all the years the language has been around, there is still a significant burden caused by the need to be explicit in generic syntax. In a reasonable language like Haskell, there is rarely a need to mention a type at all, so writing code feels pleasant. Why hasn't there been any progress on this? The only thing that got implemented is better inference in closures, and that's pretty minor given the time that has passed since the language's inception.

@hborla I had to sleep on this for a bit, so apologies for the somewhat delayed reply. Thank you for writing this document down; as you say, it's a continuation of sorts of @Joe_Groff's earlier document.

To start, I fundamentally agree with the problem statement. All the aspects of your post with respect to tooling are, in my view, essential.

We need to do more to diagnose and guide users to use these features correctly. Some learners will learn best by a careful exposition of the rules, and we can certainly work on improving the educational notes and TSPL sections for those folks, but others learn best by rolling up their sleeves and doing, and it's in the careful implementation of helpful diagnostics and fix-its that we can best reach those users.

Fundamentally, I think any time a user stumbles on a set of diagnostic messages they can't resolve, to the point that they find themselves having to ask on these forums or elsewhere, we have an opportunity for improvement.

Now, as to the proposed language changes: the portion about using some T as a shorthand for a generic function parameter has been well explored before and, as I have said at those times, makes a lot of sense in my view. That is because many have found the explanation of opaque types as "reverse generics" intuitive, and expanding this syntax adheres to the well-worn principle that similar things should look similar and different things different. It also naturally opens the door to going the other direction and generalizing the long-form generic syntax for more complicated constraints on opaque types (i.e., something like func() -> <T: Collection> where T.Element: Codable).

(Needless to repeat here, I also agree that the bare spelling of existentials is an attractive nuisance and that the way to fix it is to adopt the long mooted any T spelling, in line with what Rust has done, and that this could be done in concert with generalizing the some T spelling above to create that nice balance which steers users the right way.)
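To make the contrast concrete, here is a sketch of the two spellings being discussed, written with the any/some parameter syntax Swift later adopted (Shape and Circle are stand-in types for illustration, not SwiftUI's):

```swift
protocol Shape {
    func area() -> Double
}

struct Circle: Shape {
    var radius: Double
    func area() -> Double { .pi * radius * radius }
}

// Existential parameter: heterogeneous values, dynamic dispatch;
// the `any` spelling makes that cost explicit at the use site.
func totalArea(_ shapes: [any Shape]) -> Double {
    shapes.reduce(0) { $0 + $1.area() }
}

// Opaque/generic parameter: a single statically known conforming type per call.
func describe(_ shape: some Shape) -> Double {
    shape.area()
}
```

The symmetry here is the "nice balance" referred to above: any flags the dynamic box, some flags the statically resolved type.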

But as to the additional portions you outline here, I have to say that I am similarly wary of these, as I was of the other pitch about lightweight syntax for protocol associated types. You very rightly point out a major caveat to the some Shape example:

This is a design "smell," as it were, that two things are being made to look alike which are dissimilar. I struggled to see why that would be the case but I think I can explain it now:

struct Button {
  var shape: some Shape
  var shape2: some Shape { ... }
}
// ...sugar for...
struct Button<S: Shape> {
  var shape: S
  var shape2: some Shape { ... }
}

// is analogous to blurring the difference between:

struct S<T: Shape> {
  func f(_: T) { ... }
}
// ...and...
struct S {
  func f<T: Shape>(_: T) { ... }
  // which we agree could have a nice shorthand as:
  // func f(_: some Shape) { ... }
}

I do not think that blurring these lines is advisable. This is an opportunity for the user to learn why they might want to use one or the other, and then to be able to apply that knowledge in scenarios where the compiler can't so easily help correct them; just silently accepting that the user wrote one thing but meant another is a missed opportunity and does them no favors in the long run. For the same reason that I am wary about adopting <T> syntax to mean "generic constraint or associated type," so too am I wary of adopting some T syntax to mean "generic in some way."

I think my overall take is this:

The distinctions between opaque types, generic constraints, associated types, existentials and their ilk are meaningful ones, not there merely to frustrate the uninitiated. Therefore, our task as I see it is to design Swift in a way that discloses these differences where they arise and to guide users to the correct usage.

Where the distinction doesn't matter, I do not think that the compiler should be changed to accept just any plausible syntax as "do what I mean--you (the compiler, or human reader of my code) figure it out," because it is not helpful for user learning so that they can make the right choice in those scenarios where the distinctions are meaningful and the compiler can't just figure it out. I think such a "do what I mean" approach actually runs counter to the principle of progressive disclosure and delays rather than catalyzes mastery of the language.

(By contrast, if the distinction that we want the user to learn actually never matters and the compiler can always figure out what the user means, then my premise is wrong, and we should get rid of the distinction in the language entirely. But, as I say, my premise is that there's a useful reason why Swift has generic constraints and opaque types and existentials and associated types.)

Instead, I propose that if the compiler can figure out unambiguously that the user means to do X when they write Y, there should be a fix-it and a helpful message. Circling back concretely to the struct Button { ... some Shape } example: don't make it sugar for a generic Button type; emit a fix-it so that the user can learn the difference.
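For instance, rather than silently desugaring, the compiler could offer fix-its presenting the explicit alternatives. A sketch (the type names and the stand-in Shape protocol here are hypothetical, purely for illustration):

```swift
protocol Shape {}
struct Circle: Shape {}

// Fix-it option 1: "did you mean to make the struct generic?"
struct GenericButton<S: Shape> {
    var shape: S
}

// Fix-it option 2: "did you mean to store an existential?"
struct ExistentialButton {
    var shape: any Shape
}

let g = GenericButton(shape: Circle())     // GenericButton<Circle>
let e = ExistentialButton(shape: Circle()) // dynamic box holding a Circle
```

Either rewrite is one keystroke away, and the user sees the two genuinely different designs side by side instead of getting one of them picked silently.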


I believe existing responses mostly capture my feeling here so I'll keep it brief.

I agree with @Karl here that the fundamental issue is existentials receiving the "blessed" bare-name syntax. If we think that the language should guide users towards generics first (or at least put generics on equal footing with existentials), then IMO that's an argument that elevates the bare-name existential syntax to the level of "active harm" that would begin to justify what would presumably be a massive source break.

This would be a harder bar to clear, obviously, but IMO we should explore this direction first and only then consider alternatives if it's decided that the break would be too burdensome for developers.


Existing use of some means “the function chooses the type”, but this proposed new use means “the caller chooses the type”. Consider:

func f(input: some RangeReplaceableCollection) -> some RangeReplaceableCollection { ... }

The caller can pass in any conforming type it wants, but can't control the type it gets back. Yet the syntax is identical. And because it's RangeReplaceableCollection (which has an init() requirement), we really can write a function that returns any conforming type of the caller's choosing:

func f<In: RangeReplaceableCollection, Out: RangeReplaceableCollection>(input: In) -> Out {
    return Out()
}
I don't think we should allow the same syntax to have such different meanings.
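To see the caller-chooses-both behavior concretely, here is a self-contained sketch of that generic declaration in use:

```swift
// Both type parameters are under the caller's control; the body can only
// produce a value via the protocol's init() requirement.
func f<In: RangeReplaceableCollection, Out: RangeReplaceableCollection>(input: In) -> Out {
    return Out()
}

// The caller chooses *both* In and Out:
let asString: String = f(input: [1, 2, 3])  // Out == String
let asArray: [Int] = f(input: "abc")        // Out == [Int]
```

Under the pitched shorthand, this function and the `some ... -> some ...` spelling above would read almost identically while meaning very different things about who controls the return type.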


I strongly disagree with using some for generics. some in argument position and generics are dissimilar, but the syntax change will make them similar.

As discussed in the Structural opaque result types thread, (some P) -> (some P) would have the following behavior.

// take opaque Numeric value and return opaque Numeric value.
let foo: (some Numeric) -> some Numeric = { (value: Double) in value * 2 }

let doubleValue = 4.2
foo(doubleValue)  // error (since some Numeric is opaque)
foo(4)            // OK (due to ExpressibleByIntegerLiteral conformance)
foo(.zero)        // OK (due to static member requirement)

However, if we adopt some for generics, the behaviors of closures and functions diverge.

func foo(value: some Numeric) -> some Numeric { return value * 2 }
let doubleValue = 4.2
foo(doubleValue)  // OK (since value is generic)
foo(4)            // OK (since value is considered as Int)
foo(.zero)        // error (since there is no way to infer generic type parameter)

It would be really confusing for many developers. What is wrong here is using some for generics. some in argument position should also be opaque in functions.

If we want to add sugar, then we should use any for generics.

// two functions behave the same
func foo(value: any Numeric) -> some Numeric { return value * 2 }
func foo<T: Numeric>(value: T) -> some Numeric { return value * 2 }

Actually, Rust explained its impl Trait as a hybrid of a generic any and a reverse-generic some in RFC 1951. I think this is a logical direction.
As for existential types, we should search for some other prefix, like exist.


This is a good point.

IMO, if we extend some types to parameter positions (which, broadly, I think I am in favor of), I believe it should be an error to use a some type in both parameter and result position. At that point, we should guide the user toward making the function explicitly generic to avoid any possible confusion.

See, on the contrary, I think some is appropriate even from this angle (besides the whole opaque types being reverse generics angle):

In plain language, the function takes an argument of some type and returns a value of some type. If I insert some money into a machine, I choose both the quantity (value) and denomination (type) of the bills I insert, and if the machine returns some amount of money, it (not I) chooses both the quantity and denomination of the bills.

Who chooses is determined by which side of the function arrow it’s on—both for the value and the type—and some here unifies the concepts in denoting that a specific choice has to be made (statically) as to the underlying type, as opposed to existentials.

In the absence of empirical evidence, I am not convinced that users will have trouble appreciating the locus of control here; it’s not as though users struggle to understand who controls the argument value versus the return value.


I would say it takes any type conforming to the protocol, and returns some specific type conforming to the protocol.


Sure, and you could also say that it returns any one type conforming to the protocol—from your perspective as the caller, you don’t get to choose, so it’s “any.” But in Swift we’ve established the use of Any* for existential boxes, so it would not be appropriate for this distinct concept.

The question, rather, is whether some is workable, and I argue that it is, and that it’s intuitive.


I'm not sure this comparison is all that convincing—the entire premise of this post is that there's a gulf in users' understanding between type- and value-level abstraction, and at the value level the concept of a function as a "black box" is taught in grade school. Not so for type-level abstraction.

I'm all for empirical evidence, but I think we should also be wary about introducing potentially confusing constructs. Source compatibility means it is exceedingly difficult to take back decisions which create new valid code, so I'd rather err on the side of caution, and wait for evidence that "some types in parameter and return positions in the same declaration" is a highly desired use case that is not sufficiently addressed by guiding users towards explicit generic parameters.

Rust's impl Trait was designed as hybrid sugar for generics and reverse generics. I don't see anything wrong with that, since mere sugar doesn't have to be consistent in semantics.

However, now you are arguing that we should use some for generics because some is natural. And in my last post I argued that some for generics is wrong, since it breaks the correspondence between closure and function semantics. What about that point? Do you think it's natural?

Since I discovered the reverse generic argument, I'm really confused about some in argument position standing for generics. Surely it won't happen in practice, but if you wanted to add sugar for a reverse generic argument, what prefix would you choose? A reverse generic argument is what truly deserves some, isn't it?

The point I’m making is that it is consistent in semantics.

How would that work? A function that chooses a specific type for an argument and doesn’t tell you what it is? How would you then pass in a value of that type? That it “surely won’t happen” is your evidence that unifying the two concepts (generic constraints and opaque types) is semantically consistent. This goes back to my point earlier:

The empirical evidence isn’t unobtainable. As @ensan-hcl points out, Rust has adopted a similar approach—to my knowledge, it’s worked out without confusion, but admittedly I am not an expert on Rust’s evolution.


Exactly the same as this closure. It would exist once structural opaque result types are introduced.

This doesn’t answer the relevant question: How would that work? It doesn’t matter if you spell it as a closure or as a function: what does it mean to have an opaque argument? Is your user to guess what argument they have to supply? To what end?

Semantically, the spelling in my mind can only mean a generic argument.

We have not established the use of any for anything yet, as far as I know, although a different use for it has been discussed. And I'm not saying that any is the right keyword to use here (although I would be fine with it). I am saying that I do not think some is a sensible or intuitive choice here.

We have established the use of Any* since the earliest days of Swift, that much is incontrovertible.

I didn’t think the choice of some was the most excellent choice for opaque types either—precisely because I could see this neverending “is it some or is it any?” question popping up—but now that it has been chosen, I’m arguing that unifying the spelling of constraints on the two sides of the function arrow is the right thing to do—and that any beef with some applies equally to both. Moreover, it fits with intuition in the manner I have described above, which I won’t repeat ad nauseam.


We should not use the same keyword to mean two different things, especially in close proximity. That is not a complaint that can be applied equally to both, because the use and meaning of some in return position is established. Therefore we should avoid giving it a different meaning in argument position. (And yes, I will continue to insist that it is a different meaning.)

The replied-to comment quotes several examples. Contextual static requirements would be fully available on opaque types, because their witnesses are accessible even if the underlying type is unknown. If the structural opaque types proposal is accepted with this meaning for some in parameter position, I agree we should not allow it on function declarations. This is a conflict that would have to be resolved between the two proposals.
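Indeed, the static-requirement behavior is already observable with opaque result types today; a minimal sketch:

```swift
// A function returning an opaque Numeric value:
func makeValue() -> some Numeric { 42 }

let v = makeValue()
// .zero is a static requirement of AdditiveArithmetic (refined by Numeric);
// its witness is reachable through the opaque type even though the concrete
// type is hidden from the caller.
let w = v + .zero
```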


…And I’m arguing that you’re incorrect in this, in that the two concepts have the same meaning, with the difference given by which side of the function arrow it’s on.

I agree with you in the notion that different things should be spelled differently: the argument here is that they are fundamentally the same—or rather more precisely, that they are sufficiently distinguished by position relative to the function arrow.
