That's a very good point, Alexis, and it makes sense to me. I'll update the
proposal with that in mind and revise my examples.
On Thu, Jan 26, 2017 at 7:06 PM, Alexis <abeingessner@apple.com> wrote:
On Jan 25, 2017, at 8:15 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
Srđan, I'm afraid I don't understand your discussion. Can you simplify it
for me by explaining your proposed solution in terms of Alexis's examples
below?
// Example 1: user supplied default is IntegerLiteralConvertible
func foo<T=Int64>(t: T) { ... }
foo(22)
// ^
// |
// What type gets inferred here?
I believe that it is essential that the answer here be `Int` and not
`Int64`.
My reasoning is: a user's code *must not* change because a library *adds*
a default in a newer version. (As mentioned in several design docs, most
recently the new ABI manifesto, defaults in Swift are safe to add without
breaking source compatibility.)
I don’t agree: adding a default to an *existing* type parameter should be
a strict source-breaking change (unless the chosen type can avoid all other
defaulting rules, see the end of this email).
Type Parameter Defaults, as I know them, are a tool for avoiding breakage
when a *new* type parameter is introduced. That is, they allow you to
perform the following transformation safe in the knowledge that it won’t
break clients:
func foo(input: X)
func foo<T=X>(input: T)
For this to work, you need to make the <T=X> default have dominance over
the other default rules.
Specifically you want this code to keep working identically:
// before
func foo(input: Int64)
foo(0) // Int64
// after
func foo<T=Int64>(input: T)
foo(0) // Int64
This is in direct conflict with making the following keep working
identically:
// before
func foo<T>(input: T)
foo(0) // Int
// after
func foo<T=Int64>(input: T)
foo(0) // Int
You have to choose which of these API evolution patterns is most
important, because you can’t make both work. To me, the first one is
obviously the most important, because that’s the whole point of the
feature. The reason to do the second one is to try to make a common/correct
case more ergonomic and/or the default. But unlike function argument
defaults, type parameters can already have inferred values.
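To spell out that difference (a quick sketch, with hypothetical functions f
and g):

// An argument default never competes with inference:
func f(x: Int = 0) { ... }
f() // x is 0; there is nothing to infer

// A type parameter can get its value from inference alone:
func g<T>(x: T) { ... }
g(0) // T is inferred as Int, without any default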
Note that source breakage when adding defaults can be avoided as long as
the chosen default isn’t:
* XLiteralConvertible (pseudo-exception: if the default is also the
XLiteralType it’s fine, but that type is user configurable)
* A supertype of another type (T?, T!, SuperClass, Protocol, (…,
someLabel: T, ...), [SuperType], [SuperType1:SuperType2], (SuperType) ->
SubType, and probably more in the future)
Concretely this means it’s fine to retroactively make an existing generic
parameter default to MyFinalClass, MyStruct, MyEnum, and
collections/functions/unlabeled-tuples thereof. Arguably,
Int/String/Bool/Array/etc are fine, but there’s a niche situation where
using them can cause user breakage due to changing XLiteralType.
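For instance, the XLiteralType caveat exists because a module can already
rebind its literal types today (sketch):

// In the client module:
typealias IntegerLiteralType = Int32
let x = 0 // x: Int32, because unconstrained integer literals
          // default to the module's IntegerLiteralType

A library default of Int would then disagree with the client’s own literal
defaulting.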
In practice I expect this will be robust enough to avoid breakage — I
expect most defaults will be MyStruct/MyEnum, or an XLiteralType. Even if
it’s not, you need to end up in a situation where inference can actually
kick in and find an ambiguity *and* where the difference matters. (e.g.
SubClass vs SuperClass isn’t a big deal in most cases)
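As a sketch of the supertype hazard (hypothetical classes, written in the
proposed default syntax):

class Animal {}
class Dog: Animal {}

// version 1
func feed<T>(_ t: T) { ... }
feed(Dog()) // T == Dog

// version 2
func feed<T = Animal>(_ t: T) { ... }
feed(Dog()) // T == Dog by inference, or Animal by the default?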
Here, if version 1 of a library has `func foo<T>(t: T) { ... }`, then
`foo(22)` must infer `T` to be `Int`. That's just the rule in Swift, and it
would be severely source-breaking to change that. Therefore, if version 2
of that library has `func foo<T=Int64>(t: T) { ... }`, then `foo(22)` must
still infer `T` to be `Int`.
Does your proposed solution have the same effect?
// Example 2: user supplied default isn't IntegerLiteralConvertible
func bar<T=Character>(t: T) { ... }
bar(22)
// ^
// |
// What type gets inferred here?
By the same reasoning as above, this ought to be `Int`. What would the
answer be in your proposed solution?
On Wed, Jan 25, 2017 at 2:07 PM, Srđan Rašić <srdan.rasic@gmail.com> wrote:
That's a good example, Alexis. I do agree that generic arguments are
inferred in a lot of cases; my point was that they should not be inferred
in "type declarations". I'm not sure what the right terminology is here,
but I mean the following places:
(I) Variable/Constant declaration
let x: X
(II) Property declaration
struct T {
    let x: X
}
(III) Function declaration
func a(x: X) -> X
(IV) Enumeration case declaration
enum E {
    case x(X)
}
(V) Where clauses
extension E where A == X {}
In those cases `X` should always mean `X<Int>` if it was defined as
`struct X<T = Int>`. That's all my rule says. Sorry for not being clear in
the last email :)
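In other words, given a hypothetical `struct X<T = Int>`, the rule would
read:

struct X<T = Int> { ... }

let x: X             // (I) always X<Int>
func a(x: X) -> X    // (III) always (X<Int>) -> X<Int>
enum E { case x(X) } // (IV) always X<Int>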
As for the other cases, mostly those where an instance is created,
inference should be applied.
Let's go through your examples. Given
struct BigInt<Storage: BinaryInteger = Int>: Integer {
    var storage: Array<Storage> = []
}
func process<T: BinaryInteger>(_ input: BigInt<T>) -> BigInt<T> { ... }
what happens with `let val1 = process(BigInt())`? I think this is
actually the same problem as what happens in the case of `let x = BigInt()`.
In such cases my rule does not apply, as we don't have a full type
declaration. In `let x = BigInt()` the type is not specified at all, while
in `func process<T: BinaryInteger>(_ input: BigInt<T>) -> BigInt<T> { ... }`
the type is explicitly weakened, or "undefaulted" if you will.
We should introduce a new rule for such cases, and allowing the
`Storage=Int` default to participate in such expressions would make sense.
As you said, it also solves the second example: let val2 = process(0).
I guess this would be the problem we thought we were solving initially, and
in that case I think the solution should be what Doug suggested: if you
can’t infer a particular type, fill in a default.
Of course, if the default conflicts with the generic constraint, it would
not be filled in and it would throw an error.
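A sketch of that conflict case (hypothetical constraint; suppose the
default had instead been Storage = UInt):

func strictProcess<T: SignedInteger>(_ input: BigInt<T>) -> BigInt<T> { ... }
// strictProcess(BigInt()) // error: the default UInt does not satisfy
//                         // T: SignedInteger, so it is not filled in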
For the sake of completeness,
func fastProcess(_ input: BigInt<Int64>) -> BigInt<Int64> { ... }
let val3 = fastProcess(BigInt())
would certainly infer the type from context as my rule does not apply to
initializers. It would infer BigInt<Int64>.
As for your last example, I guess we can't do anything about that and
that's ok.
On Wed, Jan 25, 2017 at 7:50 PM, Alexis <abeingessner@apple.com> wrote:
Yes, I agree with Xiaodi here. I don’t think this particular example is
particularly compelling. Especially because it’s not following the full
evolution of the APIs and usage, which is critical for understanding how
defaults should work.
Let's look at the evolution of an API and its consumers with the example
of a BigInt:
struct BigInt: Integer {
    var storage: Array<Int> = []
}
which a consumer is using like:
func process(_ input: BigInt) -> BigInt { ... }
let val1 = process(BigInt())
let val2 = process(0)
Ok that's all fairly straightforward. Now we decide that BigInt should
expose its storage type for power-users:
struct BigInt<Storage: BinaryInteger = Int>: Integer {
    var storage: Array<Storage> = []
}
Let's make sure our consumer still works:
func process(_ input: BigInt) -> BigInt { ... }
let val1 = process(BigInt())
let val2 = process(0)
Ok BigInt in process’s definition now means BigInt<Int>, so this still all
works fine. Perfect!
But then the developer of the process function catches wind of this new
power user feature, and wants to support it.
So they too become generic:
func process<T: BinaryInteger>(_ input: BigInt<T>) -> BigInt<T> { ... }
The usage sites are now more complicated, and whether they should compile
is unclear:
let val1 = process(BigInt())
let val2 = process(0)
For val1 you can take a hard stance with your rule: BigInt() means
BigInt<Int>(), and that will work. But for val2 this rule doesn't work,
because no one has written BigInt unqualified. However, if you say that the
`Storage=Int` default is allowed to participate in this expression, then we
can still find the old behaviour by defaulting to it when we discover
Storage is ambiguous.
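Spelled out, the proposed resolution for val2 would go roughly like this:

let val2 = process(0)
// 1. `0` must become a BigInt<T>, since BigInt is an Integer and hence
//    expressible by an integer literal.
// 2. Nothing else constrains T, so inference alone is ambiguous.
// 3. Fall back to the declared default, Storage = Int: val2 is BigInt<Int>.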
We can also consider another power-user function:
func fastProcess(_ input: BigInt<Int64>) -> BigInt<Int64> { ... }
let val3 = fastProcess(BigInt())
Again, we must decide the interpretation of this. If we take the
interpretation that BigInt() has an inferred type, then the type checker
should discover that BigInt<Int64> is the correct result. If, however, we
take the stance that BigInt() means BigInt<Int>(), then we'll get a
type-checking error which our users will consider ridiculous: *of course*
they wanted a BigInt<Int64> here!
We do however have the problem that this won’t work:
let temp = BigInt()
fastProcess(temp) // ERROR — expected BigInt<Int64>, found BigInt<Int>
But that’s just as true for normal ints:
let temp = 0
takesAnInt64(temp) // ERROR — expected Int64, found Int
Such is the limit of Swift’s inference scheme.
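The usual workaround applies equally to both: state the type explicitly.

let temp: BigInt<Int64> = BigInt()
fastProcess(temp) // OK, temp is BigInt<Int64> by annotation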