SE-0393: Value and Type Parameter Packs

Hello Swift community,

The review of SE-0393: Value and Type Parameter Packs begins now and runs through April 3, 2023.

For those who've followed along during the pitch phase for this feature, you'll note that the proposed syntax has changed based on implementation experience. While it is within the scope of review to consider an alternative syntax, one can only be adopted if it's implementable without leading to serious ambiguities—please see the Alternatives Considered section for more discussion.

We very much hope to hear your thoughts and suggestions on aspects of the design other than the proposed syntax as well. A toolchain that will allow you to experiment with an in-progress implementation of the proposed feature will be forthcoming soon, and feedback on the code completion experience in particular is welcome.

Reviews are an important part of the Swift evolution process. All review feedback should be either on this forum thread or, if you would like to keep your feedback private, directly to the review manager via the forum messaging feature. When contacting the review manager directly, please keep the proposal link at the top of the message.

What goes into a review?

The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of Swift. When writing your review, here are some questions you might want to answer:

  • What is your evaluation of the proposal?
  • Is the problem being addressed significant enough to warrant a change to Swift?
  • Does this proposal fit well with the feel and direction of Swift?
  • If you have used other languages or libraries with a similar feature, how do you feel that this proposal compares to those?
  • How much effort did you put into your review? A glance, a quick reading, or an in-depth study?

More information about the Swift evolution process is available at

Thank you,

Xiaodi Wu
Review Manager


Taking off my reviewer hat—

I do have some questions about the decision regarding overload resolution:

From the philosophical perspective, it seems more intuitive that an overload with a specified number of parameters is "more specific," so at first glance the ambiguity here surprises me.

From the practical perspective, I'm not sure I see a way to resolve the ambiguity at the call site (there doesn't seem to be a spelling, such as an `as` coercion, which can tell the compiler which one to call, for example), which is extra salient given the following:

The proposal itself states that one example use case for parameter packs is allowing us to write an overload of func < for tuple comparison. In fact, if shipped with declarations as written in the proposal, this would lead to source-breaking ambiguity for arities from 1 through 6, unless we also use @available or @_disfavoredOverload on the existing functions in the standard library, or unless we spell the new func < with six mandatory parameters followed by one parameter pack.

All of this is unwieldy and/or prone to error and—even if we can accomplish this correctly within the standard library for the specific function in question—it is emblematic of the experience that other library authors will have when they update their own functions to use parameter packs.

Moreover, let's suppose a function frobnicate can be written that takes arbitrarily many arguments, but that there's a specific optimized implementation possible when the number of arguments is, say, three. A natural way of expressing this would be to write an overload frobnicate(_:_:_:) and another overload that takes a pack; the latter might even check dynamically that the number of arguments is 3 and call the optimized implementation in turn (if/when we have a spelling for that). It seems to me there would be no easy way to express this if both overloads are ranked equally.
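To make the scenario concrete, here is a minimal sketch of the two-overload pattern described above. Everything here is invented for illustration: frobnicate, its summing body, and the "optimized" path (which is just a stand-in); the accumulation-via-discarded-tuple idiom is a workaround for pack expansion being limited to expression-list positions.

```swift
// Concrete-arity overload: stands in for a specialized fast path.
func frobnicate(_ a: Int, _ b: Int, _ c: Int) -> Int {
    // Imagine an optimized three-argument implementation here.
    return a + b + c
}

// Pack-based overload: handles any number of integer-like arguments.
func frobnicate<each T: BinaryInteger>(_ values: repeat each T) -> Int {
    var sum = 0
    func accumulate(_ x: some BinaryInteger) { sum += Int(x) }
    // Expand into a discarded tuple to visit every pack element.
    _ = (repeat accumulate(each values))
    return sum
}

// A four-argument call can only match the pack overload:
let wide = frobnicate(1, 2, 3, 4)  // 10
// Whether frobnicate(1, 2, 3) resolves to the concrete overload or is
// ambiguous is exactly the ranking question under discussion.
```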

Unless there's some overriding implementation-level concern here, I think the ergonomics would be superior if the overload with non-pack parameters is ranked as "more specific," all other things being equal.


I like the keywords this ended up with. I notice that many of the examples where repeat is used with a function pattern type put parentheses around the function type, like repeat ((each Argument) -> Return), but I didn't see it explicitly noted whether this is necessary; does repeat have lower precedence than -> so that repeat (each Argument) -> Return would also be valid?


The parentheses around function pattern types aren't necessary, e.g. this is valid:

func variadic<each T>(
  closure: repeat (each T) -> Int,
  input: repeat each T
) {
  let results = (repeat (each closure)(each input))
}

I agree, and moreover I think we might want to consider a stricter approach here: disallow overloading with non-variadic (available) overloads altogether, to avoid having to invent new ranking rules and to avoid possible corner cases with the features we currently have (e.g., allowing generic parameters to be defaulted).

This feature is general enough to be able to support "specialization" via where clause requirements too, for example:

func overloaded<each T>(_: repeat each T) where each T == (Int, String) {
  // ...
}

func overloaded<each T, each U>(_: repeat each T) where each T == (repeat each U, Int, String) {
  // ...
}

That should satisfy the need to optimize certain call scenarios.

Edit: To support adding variadic overloads to existing resilient APIs, we could require that all of the existing methods be made unavailable.

Overloading should ideally never be used for "specialization", though. In Swift it's unfit for that purpose, because the selection won't occur through generic abstraction, and people used to a more C++-like model end up with questions about why this doesn't work the way they want, and frustrate themselves building into a dead-end that will never completely work the way they want. If we have a clean slate here to potentially prevent overloading from the get go, I think it's worth considering that as a possibility.


Right, I think it's worth it to try and avoid this kind of scenario:

func overloaded<T>(_: T = 42) {}
func overloaded<each T>(_: repeat each T) {}


In practice, it's hard to actually prevent overloading with declaration-time diagnostics — if everything is declared in one scope, sure, but often that's not how overloading happens. We can declare overloads to be ambiguous on use, though.

Functions with concrete arity are more specialized than functions with variadic arity under the traditional rule of "can you call one with the arguments of the other?". I think we haven't embraced that rule with perfect consistency, though, so we don't have to use it here.


That's true. But we do have other existing situations in the language where overloading is soft-prevented but can't be completely avoided, such as with properties, where you can't directly write struct Foo { var x: Int; var x: String }, but you can't prevent someone from extending a protocol Foo conforms to with a var x. So we could do a best effort to discourage obviously ambiguous overloads, and make a call site that matches multiple potential overloads ambiguous, as you said.


This is, of course, no help to the user of such API who suddenly finds both functions unusable, so to @xwu’s point I do think we need to consider the disambiguation strategy in these cases.


One nice thing about preventing obvious overlapping overloads in a scope is that it would leave the door open to something like Becca's :: operator to qualify names, since potential overlaps would need to have at least been declared on different protocols or different modules' extensions, so you could write x.FooModule::overloaded(...) or x.BarModule::overloaded(...) to disambiguate.


This would be sufficient only if we prevent all obvious and subtle sources of ambiguity within a module, which may not be entirely trivial. And even if we do an excellent job of it, there may be oddities related to this that users really won't like.

Consider, for example, Pavel's example of default values—such a rule could prevent library authors from adding a default value to a parameter because defaulting it could cause a diagnosable possibility of ambiguity:

func foo<each T: BinaryInteger>(_: repeat each T) { }
func foo(bar: String = "42", baz: String, _: Int) { }
// `baz` cannot get a new default value *if* `bar` already has one,
// and vice versa; but either one having a default value is fine...

Actually, there's more! There is no ambiguity here:

// Both overloads in the same module:
func bar<each T: Error>(_: repeat each T) { }
func bar(_: String) { }

...until another module retroactively conforms String to Error—and then there's no module qualifying syntax that can clarify the ambiguity at the use site :frowning:

This means that a retroactive conformance in library A can make a correctly non-overlapping set of overloads in library B uncallable by end user C.


A strict "no overloads" rule would say that you can't have overloads, period, even if they're "obviously" not overlapping. A slightly less strict "no overlapping overloads" rule would have to consider all types to potentially conform to any protocol, yeah. We have the latter rule in place already for conditional conformances, for example, where you can't declare both Foo: P where AssocType == String and Foo: P where AssocType: Error for the same reason.

Yeah, retrofitting existing API is one "legitimate" use case for overloading in my eyes, so accommodating it is important. However, in practice even our existing overloading model hasn't been so great at this, since API authors have frequently had to deploy undocumented hacks to manually order existing overloads when they try to use overloading to introduce new default arguments, etc.

Will I be able to compactly declare a homogeneous tuple of N elements with this? (Where N is, say, 1000.)


With this proposal, you can write generic code that abstracts over the arity of a tuple using parameter packs. You can constrain the elements of that tuple to all have the same type with a same-element requirement on a type parameter pack. You cannot write code that operates over a tuple of a specific length using parameter packs, because this proposal only supports abstract lengths for parameter packs. The justification for this decision can be found in the proposal.
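For illustration, a minimal sketch of the arity-abstraction half of this answer (makeTuple is a hypothetical name, and the same-element constraint spelling is omitted for brevity):

```swift
// The tuple's length tracks however many arguments the caller passes;
// there is no way under this proposal to fix the length at, say, 1000.
func makeTuple<each T>(_ values: repeat each T) -> (repeat each T) {
    return (repeat each values)
}

let triple = makeTuple(1, 2, 3)  // has type (Int, Int, Int)
```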


I see examples using Optional<each T>, and I would like to know whether this is equivalent to each T?; if it isn't, I foresee this being a common point of confusion. Could someone explain the difference if they aren't the same?


Using the optional sugar with each requires parentheses, e.g. (each T)?, just like other contextual keywords such as (some P)? and (any P)?. Optional<each T> is the same as (each T)?.


So is each T? something that's not Optional<each T> or is it an error?


each T? is an error, because it is interpreted as each (T?) and T? is not a parameter pack. We can consider interpreting each T? as (each T)?, but I think we should consider that holistically for all contextual keywords that can be applied to types in a separate proposal.
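A small sketch tying the accepted spellings together (wrapAll is a hypothetical name invented here):

```swift
// (each T)? and Optional<each T> denote the same pattern type.
func wrapAll<each T>(_ values: repeat each T) -> (repeat (each T)?) {
    return (repeat Optional(each values))
}

let wrapped = wrapAll(1, "two")  // (Optional(1), Optional("two"))
// By contrast, `each T?` is rejected: it parses as `each (T?)`,
// and `T?` is not a parameter pack.
```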


Assuming (arguendo) that overloading shouldn't be used for this sort of 'specialization' except in retrofitting, this would still pose issues for doing some things the ‘right’ way I think: Would a method func f<each T>(_: repeat each T) declared on a concrete type satisfy a protocol requirement func f<T>(_: T)?

  • If so, it would be somewhat odd (and, in my reckoning, without precedent in Swift) for a method to be recognized as "more general" for the purposes of satisfying protocol requirements but not for the purposes of overloading. I guess this could be a supportable state of affairs if we also roll in a soft ban on overloading with parameter packs, though—it would basically define away the latter scenario.

  • If not, then we're in a bit of a pickle. For this would mean that a type that semantically meets all the requirements of a protocol P would be forbidden from usefully conforming to the protocol despite having a semantically satisfactory implementation. If we go with the proposal as-is, the type could be made to conform by writing a manual "trampoline", but then an end user would never be able to call either function with the required arity due to ambiguity; and if we go with a soft ban on overloads, then the type couldn't be made to conform at all (except retroactively in another module, blech).