Just to clarify—is the double ... expansion here shorthand for a case where we have, say, expanded a pack T... into a stored property of type (T...) and then later want to expand the stored property back into a pack somewhere (giving us (T...)...)?
If so, then doesn't (T...)... just mean the same thing as T..., i.e., expand the pack directly? That seems to work in all of the {Int}, {(Int, String)}, and {Int, String} cases, unless I've misunderstood the operation we're talking about? (Maybe that's just a rephrasing of what you've already said...).
Without single-element tuples we'd also have cases like the following:
struct G<T...> {
    var ts: (T...)
}
G<(Int, String)>(ts: (0, "")).ts
G<Int, String>(ts: 0, "").ts // both 'ts' have type '(Int, String)'
but I don't immediately know if that's problematic.
Yeah, it's the abstraction behind T... that makes the level of tuple-ness to apply the operation obvious. John's point (IIUC) is that, strictly speaking, if every type is also the single-element tuple of itself, the concrete case where you do (Int, String)... is ambiguous, because that could mean unpacking either the pair (Int, String) or the singleton tuple ((Int, String)). But we could make tuple operations like that unavailable on concrete types that are obviously not tuples (besides being their own singleton tuple). Technically, we already do this with the .0 member access. In pre-1.0 betas of Swift you used to be able to apply .0 to everything, and there were fun name lookup ambiguities on 2+-tuples because of it, until we simply made it unavailable on scalars (and spelt the whole-value "property" .self instead to make it unambiguous).
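To illustrate with today's Swift (nothing variadic here, just the concrete-tuple behavior described above):

let pair = (1, "two")
let first = pair.0       // element access works only on actual tuples
let whole = pair.self    // '.self' names the whole value

let scalar = 42
// scalar.0              // error: value of type 'Int' has no member '0'
let same = scalar.self   // the unambiguous whole-value spelling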
I think that's not quite right. Packs need to be able to contain tuples as elements, and so you can have packs that contain nothing but a single tuple as an element, and operating on that in a variadic-generic context is semantically different from operating on the underlying element that's a tuple. The result is that, if we're not going to introduce single-element tuples, we get this class of "things that look like tuples" in variadic generic contexts, like (T...), which is not necessarily a tuple (if T is a singleton pack of a non-tuple type) but is treated like one in the variadic-generic context. Crucially, tuple operations on such values/types have to be rewritten atomically with substitution, because they need to have the effect that they would have if single-element tuples existed and must not see through to the underlying type (which might be a tuple) if the pack happens to be singleton.
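A sketch of that situation, in the strawman T... notation from this thread (not something that compiles today; the Holder name is just for illustration):

struct Holder<T...> {
    var value: (T...)
}
// T == {Int, String}:   'value' is the two-element tuple (Int, String)
// T == {(Int, String)}: 'value' should behave as a one-element "tuple" whose single
//                        element is (Int, String), but without single-element tuples
//                        its type collapses to (Int, String) itself
// T == {Int}:           the type of 'value' collapses to plain Int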
Sorry, I was playing a few notational games there; bad math habits. (T...) is not a valid expression, but it is a valid tuple type. If tuple is a value of type (T...), then tuple... produces a sequence of values of the same length as the type pack T, with the ith element in the sequence being taken from the ith element of tuple and thus having the same type as the ith element of the type pack T. If you expand that in a tuple literal (i.e. within parentheses) with nothing else in the literal (i.e. (tuple...)), then you have an expression of type (T...) element-wise identical to tuple.
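A worked example of that, still in the strawman notation:

// Suppose T == {Int, String, Bool} and tuple == (1, "a", true).
// Then tuple...  produces the value sequence  1, "a", true
// and (tuple...) is again a value of type (Int, String, Bool), element-wise equal to tuple.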
If E is an expression that references a pack, then E... produces a sequence of values of the same length as that pack, with the ith element in the sequence being the ith result of evaluating E substituting in the ith element of the pack for all the pack references within E. If you write that in a tuple literal and then immediately expand that tuple, like (E...)..., you'll get the same sequence as you would get from E....
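For instance, with a hypothetical pack xs == {1, "a"} and some function f:

// Let E be the expression f(xs), which references the pack xs.
// E...       evaluates to the sequence  f(1), f("a")
// (E...)     is the tuple               (f(1), f("a"))
// (E...)...  expands that tuple back out to f(1), f("a"), the same sequence as E...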
The issue I'm discussing with {(Int, String)} arises if you think of substitution as something you can apply in separate phases. If I have tuple: (T...) in a variadic generic context, and I write tuple..., it is important that that produces the T-structured sequence I discussed above. Without single-element tuples, the type of tuple when T == {(Int, String)} becomes (Int, String), and applying the ... operator to that yields a sequence of two values, not a sequence of one tuple value; but that isn't right. So, for example, when we are implementing the substitution of the ... tuple-expansion operator on a tuple of type (T...), we can't unconditionally assume that the type after substitution is a tuple (because maybe T == {Int}), and we can't correct for that bad assumption by checking whether the type after substitution is a tuple and ignoring the ... when it isn't (because maybe T == {(Int, Float)}, where the substituted type is a tuple but the expansion should still yield a single element); we have to check specifically whether substitution eliminated the outermost level of potential tuple-ness that we had, and only if so can we ignore the ... and use the single sequence element we've got. I think that works, but hopefully you can see why I'm worried about it at an implementation level.
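Spelling the three cases out for tuple: (T...) and the expansion tuple... (strawman notation; the singleton-tuple pack is the problematic one):

// T == {Int, String}:   the type of 'tuple' is (Int, String); tuple... should yield 2 values. Fine.
// T == {Int}:           the type collapses to Int (not a tuple); tuple... should yield 1 value,
//                        so a naive "expand the tuple" step has nothing to expand.
// T == {(Int, String)}: the type collapses to (Int, String), which really is a tuple, yet tuple...
//                        should yield 1 value (the pair), not 2. A phase-separated substitution
//                        that only sees "(Int, String)" cannot tell this case apart from the first.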
I think what I'm imagining is that the tuple-expansion ... acts less dynamically than what you're describing, and instead performs a structural transformation at compile time.
IOW, the tuple-expansion ... would, at compile time, look at its argument and turn it into the appropriate sequence of values. If we have (T...)..., that's simple—the tuple-expansion of a pack directly expanded into a tuple is just the pack expansion itself. So we'd generate code to expand the sequence of Ts into whatever context was receiving the result of the tuple-expansion directly.
So in the T={(Int, String)} case, at the point when the tuple... expansion actually happens we'd have already type-checked it to produce T..., or (Int, String) concretely. And when T={Int} we'd similarly produce T... (i.e., Int), without having to special-case the situation where we've eliminated a level of tuple-ness.
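In other words, the rewrite would be driven by the unsubstituted pack structure, roughly:

// Structural rewrite before substitution:
//   (T...)...  ==>  T...   (expand the pack directly)
//   tuple...   ==>  one element per element of the pack T
// Substitution is then uniform:
//   T == {Int, String}    -> the two values
//   T == {(Int, String)}  -> the single pair value
//   T == {Int}            -> the single Int value
// No special case for "substitution removed a level of tuple-ness" is needed.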