Default Generic Arguments

I was recently watching a talk by Bjarne Stroustrup (https://youtu.be/2egL4y_VpYg?t=26m34s) in which he says that when new things are added to a language, people tend to want very loud syntax, just because it’s new and people want it to be obvious and fool-proof. That’s why C++ syntax is such a mess.

Personally, I’m fine with leaving the angle brackets off if there is a default value. If the expected type is not the one inferred from the initialiser, you will hit a compiler error and can then explicitly say which type the argument should have. It’s not an unreasonable cognitive load.
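
A sketch of what I mean in the proposal’s syntax (hypothetical; default generic arguments are not in Swift today, and Matrix is a made-up example type):

struct Matrix<Element: Numeric = Double> {}
func takesFloats(_ m: Matrix<Float>) {}

let a: Matrix = Matrix()      // brackets omitted: defaults give Matrix<Double>
takesFloats(a)                // error: expected Matrix<Float>, found Matrix<Double>
takesFloats(Matrix<Float>())  // fix: spell out the argument explicitly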

- Karl

···

On 24 Jan 2017, at 05:10, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

While it looks nicer without the angle brackets, that suggestion is unresponsive to David's point that we need some way to distinguish defaulted generic arguments from inferred generic arguments.

Consider:

let a: Optional = 1 // Optional<Int>

enum FloatPreferringOptional<T = Float> {
  case some(T)
  case none
}

let b: FloatPreferringOptional = 1
// Does this give you a FloatPreferringOptional<Int>?

If the answer to the above question is "yes, T is inferred as Int" then we need some way to express "give me the default for T, which is Float." If the answer to the above question is "no" then we need some way to express "don't give me the default; rather, infer type T from the right hand side."

On Mon, Jan 23, 2017 at 6:30 PM, Matthew Johnson via swift-evolution <swift-evolution@swift.org> wrote:
This proposal looks good to me. I have been looking forward to more flexible generic arguments for a while.

I agree with previous commenters who prefer the option to leave off the angle brackets when all parameters have defaults.

The proposal specifically mentions that the syntax is inspired by that of function arguments. This is good, but I wonder if maybe we should draw further inspiration from function arguments and also add parameter labels for generic arguments. Both feel like low-hanging fruit in the generics area (correct me if I’m wrong about that), and it would be great to see both enhancements make it into Swift 4.
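
For instance, something like this (purely hypothetical syntax, by analogy with function argument labels; Cache, Key, and Value are made-up names):

struct Cache<Key: Hashable = String, Value = Data> {}

let c = Cache<Value: Int>()  // label the one argument you care about;
                             // Key quietly falls back to its default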

On Jan 23, 2017, at 9:55 AM, Srđan Rašić via swift-evolution <swift-evolution@swift.org> wrote:

Hi Everyone,

I've opened a PR (https://github.com/apple/swift-evolution/pull/591) proposing default generic arguments, which I think would be a nice addition to the language. They are also mentioned in the "Generics Manifesto".

The proposal focuses on generic types. Generic functions are not covered by the proposal, and I don't think we need default generic arguments in generic functions: all the types are always part of the function signature, so the compiler can always infer them. One corner case is functions that use default argument values, where support for default generic arguments could be useful.
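
A sketch of that corner case (hypothetical syntax; `describe` is a made-up function, and note the default argument value only type-checks when T is String):

func describe<T: CustomStringConvertible = String>(_ value: T = "<none>") {
  print(value)
}

describe(42)  // T inferred as Int from the argument
describe()    // no argument to infer from; the default T = String kicks in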

It would be great to hear your opinions and suggestions so I can refine the proposal.

It’s worth noting that the question of “how do these defaults interact with other defaults” is an issue that has left this feature dead in the water in the Rust language, despite being accepted for inclusion two years ago. See “Interaction of user-defined and integral fallbacks with inference” on the Rust Internals forum (https://internals.rust-lang.org/t/interaction-of-user-defined-and-integral-fallbacks-with-inference/2496) for some discussion of the issues at hand.

For those who don’t want to click that link, or who are having trouble translating the syntax/terms to Swift, the heart of Niko’s post is the following (note: functions are used here for expedience; you can imagine these are `init`s for a generic type if you wish):

// Example 1: user supplied default is IntegerLiteralConvertible

func foo<T=Int64>(t: T) { ... }

foo<_>(22)
// ^
// |
// What type gets inferred here?

// Example 2: user supplied default isn't IntegerLiteralConvertible

func bar<T=Character>(t: T) { ... }

bar<_>(22)
// ^
// |
// What type gets inferred here?

There are 4 strategies:

(Note: I use “integer literal” here for simplicity; in general it's “any kind of literal, and its associated LiteralType”. So this reasoning also applies to FloatLiteralType, StringLiteralType, BooleanLiteralType, etc.)

* Unify all: always unify the variables with all defaults. This is the conservative choice in that it gives an error if there is any doubt.

* Prefer literal: always prefer IntegerLiteralType (Int). This is the maximally backwards compatible choice, but I think it leads to very surprising outcomes.

* Prefer user: always prefer the user-defined choice. This is simple from one point of view, but does lead to a potentially counterintuitive result for example 2.

* Do What I Mean (DWIM): Prefer the user-defined default, except in the case where the variable is unified with an integer literal and the user-defined default isn't IntegerLiteralConvertible. This is complex to state but leads to sensible results on both examples. (Basically: prefer user, but fall back to IntegerLiteralType if the user default doesn’t actually make sense.)

Strategy | Example 1 | Example 2 |
-------------- | --------- | --------- |
Unify all | Error | Error |
Prefer literal | Int | Int |
Prefer user | Int64 | Error |
DWIM | Int64 | Int |

Personally, I’ve always favoured DWIM. Especially in Swift where IntegerLiteralType inference is so frequently used (you don’t want adding a default to cause code to stop compiling!). In practice I don’t expect there to be many cases where this ambiguity actually kicks in, as it requires the user-specified default to be a LiteralConvertible type that isn't the relevant LiteralType, and for the type variable to affect an actual Literal. So <T=String>(x: T) never causes problems, but <T=StaticString>(x: T) does.
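
The literal mechanics behind this are observable in Swift today (runnable as-is): the same literal takes on different types depending on context, falling back to the LiteralType when nothing constrains it:

let a = "hi"                // String, the StringLiteralType fallback
let b: StaticString = "hi"  // same literal, different type on request
let c = 22                  // Int, the IntegerLiteralType fallback
let d: Int64 = 22           // Int64 when the context asks for it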

As for the matter of “what if I want the other one” — you clearly know the actual type you want; just name it explicitly.

···

On Jan 24, 2017, at 2:59 AM, Srđan Rašić via swift-evolution <swift-evolution@swift.org> wrote:

> If the answer to the above question is "yes, T is inferred as Int" then we need some way to express "give me the default for T, which is Float."

I don't think that we need that. It would introduce a new level of explicitness, "I want the default, but I don't care what the default is", that is not really useful. If you don't care what the default type is, you probably also don't care that you are defaulting. If you do care what the default type is, you would explicitly specify it as `X<Float>`.

> If the answer to the above question is "no" then we need some way to express "don't give me the default; rather, infer type T from the right hand side."

That would be preferred behavior. Infer from the context if possible, use default otherwise.
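
Applied to Xiaodi's enum above, that preference would look like this (hypothetical; assumes the proposed syntax):

let b: FloatPreferringOptional = .some(1)  // context available: T inferred as Int
let c: FloatPreferringOptional = .none     // nothing to infer: T defaults to Float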


We could just remove that parameter from the type inference system... if the default value is Int and the user passes in a String, that'd be an error, unless the user also sets that parameter to String.

I'd envisioned using default parameters as more "compile-time configuration options" than something for the type system to actually have to deal with. IIRC, I was trying to come up with a way to write just one arbitrary-sized integer struct where I *could* specify the width of its internal calculations, but would usually just use a default value.
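
Something along these lines, in the proposal's syntax (hypothetical; ArbitraryInt and Word are made-up names):

struct ArbitraryInt<Word: FixedWidthInteger = UInt64> {
  var digits: [Word] = []
}

let everyday = ArbitraryInt()      // ArbitraryInt<UInt64>, via the default
let small = ArbitraryInt<UInt8>()  // power user configures the width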

- Dave Sweeris

···


I like this approach as a first pass. It leaves room for other, more forgiving strategies later and is relatively easy to explain.

···


We are probably taking the wrong direction here, trying to solve a problem that does not need solving. We are discussing how to infer generic arguments in type declarations, while we should not do that at all.

Let me repeat Doug's examples:

struct X<T = Int> { }

func f1() -> X<Double> { return X() }

func f2() -> X<Int> { return X() }
func f2() -> X<Double> { return X() }

func f3<T>(_: T) -> X<T> { return X() }

let x1: X = f1() // okay: x1 has type X<Double>?
let x2: X = f2() // ambiguous?
let x3a: X = f3(1.5) // okay: x3a has type X<Double>?
let x3b: X = f3(1) // okay: x3b has type X<Int>?

Thinking about what the generic argument of X should be inferred to for x1, x2 and x3 is pointless. If one omits generic arguments in the variable declaration, one is accepting the defaults. In other words, doing `let x: X = ...` should always be treated as doing `let x: X<Int> = ...`, regardless of what we have on the right hand side. No inference should happen in this case; it would mean inferring an already-specified type.

Why? Consider what happens if we define x as a property:

struct Test {
  let x: X

  init() {
    x = f()
  }
}

It would make no sense for the initialization in the initializer to specialize the generic argument of the property, so for the sake of consistency we should not do it for variables/constants either.

Given that, we can solve Doug's example as:

let x1: X = f1() // error: cannot assign X<Double> to X<Int>

let x2: X = f2() // ok: using X<Int> overload
let x3a: X = f3(1.5) // error like in x1
let x3b: X = f3(1) // ok because rhs is inferred as X<Int>

I think this is the only valid way to go, and it really simplifies things: both the understanding of how the feature works and the implementation.

What do you think?

···


That does not comport with the definition of "default." I would disagree with that treatment. Nor does it seem consistent with current syntax. If I have a type Foo<T>, then inference works when someone writes `let a: Foo = ...`. If I add a default to my type Foo<T=Bar>, this should be a resilient change for my users.
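
Concretely (the v2 line uses the proposal's hypothetical syntax):

// v1: struct Foo<T> { init(_ t: T) {} }
let a: Foo = Foo(0)  // today: inference fills in Foo<Int>

// v2: struct Foo<T = Bar> { init(_ t: T) {} }
// The same `let a: Foo = Foo(0)` must still mean Foo<Int>, not Foo<Bar>,
// for the change to be resilient.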

···


As I replied above, this doesn't work IMO because omitted generic arguments are inferred, and that can't change without being hugely source-breaking.

I think it's absolutely essential that adding a default to my library doesn't change the behavior of code that uses my library. That's currently the case, afaict, for all default arguments, and so I think it's essential here.

···


Yes, I agree with Xiaodi here. I don’t think this particular example is particularly compelling, especially because it’s not following the full evolution of the APIs and usage, which is critical for understanding how defaults should work.

Let's look at the evolution of an API and its consumers with the example of a BigInt:

struct BigInt: Integer {
  var storage: Array<Int> = []
}

which a consumer is using like:

func process(_ input: BigInt) -> BigInt { ... }
let val1 = process(BigInt())
let val2 = process(0)

Ok that's all fairly straightforward. Now we decide that BigInt should expose its storage type for power-users:

struct BigInt<Storage: BinaryInteger = Int>: Integer {
  var storage: Array<Storage> = []
}

Let's make sure our consumer still works:

func process(_ input: BigInt) -> BigInt { ... }
let val1 = process(BigInt())
let val2 = process(0)

Ok BigInt in process’s definition now means BigInt<Int>, so this still all works fine. Perfect!

But then the developer of the process function catches wind of this new power user feature, and wants to support it.
So they too become generic:

func process<T: BinaryInteger>(_ input: BigInt<T>) -> BigInt<T> { ... }

The usage sites are now more complicated, and whether they should compile is unclear:

let val1 = process(BigInt())
let val2 = process(0)

For val1 you can take a hard stance with your rule: BigInt() means BigInt<Int>(), and that will work. But for val2 this rule doesn't work, because no one has written BigInt unqualified. However if you say that the `Storage=Int` default is allowed to participate in this expression, then we can still find the old behaviour by defaulting to it when we discover Storage is ambiguous.

We can also consider another power-user function:

func fastProcess(_ input: BigInt<Int64>) -> BigInt<Int64> { ... }
let val3 = fastProcess(BigInt())

Again, we must decide the interpretation of this. If we take the interpretation that BigInt() has an inferred type, then the type checker should discover that BigInt<Int64> is the correct result. If, however, we take the stance that BigInt() means BigInt<Int>(), then we'll get a type-checking error which our users will consider ridiculous: *of course* they wanted a BigInt<Int64> here!

We do however have the problem that this won’t work:

let temp = BigInt()
fastProcess(temp) // ERROR — expected BigInt<Int64>, found BigInt<Int>

But that’s just as true for normal ints:

let temp = 0
takesAnInt64(temp) // ERROR — expected Int64, found Int

Such is the limit of Swift’s inference scheme.

There is a difference, though, in that default values can't be inferred... Simply adding a default value for a function argument can't change the behavior of anything because, prior to adding it, any code that didn't provide that value wouldn't have compiled.
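
The function-argument analogue is easy to check in today's Swift (runnable; greet is a made-up example):

func greet(name: String = "world") {
  print("Hello, \(name)!")
}

greet()               // only compiles once the default exists
greet(name: "Swift")  // pre-existing calls are unaffected by adding it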

- Dave Sweeris

···


That's a good example Alexis. I do agree that generic arguments are inferred in a lot of cases; my point was that they should not be inferred in "type declarations". I'm not sure what the right terminology is here, but I mean the following places:

(I) Variable/Constant declaration

  let x: X

(II) Property declaration

  struct T {
    let x: X
  }

(III) Function declaration

  func a(x: X) -> X

(IV) Enumeration case declaration

  enum E {
    case x(X)
  }

(V) Where clauses

  extension E where A == X {}

In those cases `X` should always mean `X<Int>` if it was defined as `struct X<T = Int>`. That's all my rule says. Sorry for not being clear in the last email :)

As for the other cases, mostly those where an instance is created, inference should be applied.

Let's go through your examples. Given

struct BigInt<Storage: BinaryInteger = Int>: Integer {
  var storage: Array<Storage> = []
}

func process<T: BinaryInteger>(_ input: BigInt<T>) -> BigInt<T> { ... }

what happens with `let val1 = process(BigInt())`? I think this is actually the same problem as what happens in the case of `let x = BigInt()`.

In such cases my rule does not apply, as we don't have a full type declaration. In `let x = BigInt()` the type is not declared at all, while in `func process<T: BinaryInteger>(_ input: BigInt<T>) -> BigInt<T> { ... }` the type is explicitly weakened, or "undefaulted" if you will.

We should introduce a new rule for such cases, and allowing the `Storage=Int` default to participate in such expressions would make sense. As you said, it also solves the second example: `let val2 = process(0)`.

I guess this is the problem we thought we were solving initially, and in that case I think the solution should be what Doug suggested: if you can’t infer a particular type, fill in a default.

Of course, if the default conflicts with the generic constraint, it would not be filled in and an error would be thrown.

For the sake of completeness,

func fastProcess(_ input: BigInt<Int64>) -> BigInt<Int64> { ... }
let val3 = fastProcess(BigInt())

would certainly infer the type from context, as my rule does not apply to initializers. It would infer BigInt<Int64>.

As for your last example, I guess we can't do anything about that and that's ok.

···


Srđan, I'm afraid I don't understand your discussion. Can you simplify it for me by explaining your proposed solution in terms of Alexis's examples below?

// Example 1: user supplied default is IntegerLiteralConvertible

func foo<T=Int64>(t: T) { ... }

foo(22)
//  ^
//  |
//  What type gets inferred here?

I believe that it is essential that the answer here be `Int` and not `Int64`.

My reasoning is: a user's code *must not* change because a library *adds* a default in a newer version. (As mentioned in several design docs, most recently the new ABI manifesto, defaults in Swift are safe to add without breaking source compatibility.)

Here, if version 1 of a library has `func foo<T>(t: T) { ... }`, then `foo(22)` must infer `T` to be `Int`. That's just the rule in Swift, and it would be severely source-breaking to change that. Therefore, if version 2 of that library has `func foo<T=Int64>(t: T) { ... }`, then `foo(22)` must still infer `T` to be `Int`.

Does your proposed solution have the same effect?
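
Today's rule, without any default, is easy to verify (runnable as-is; the label is dropped so the call matches the example):

func foo<T>(_ t: T) { print(T.self) }
foo(22)  // prints "Int": an unconstrained T adopts the literal's default type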

// Example 2: user supplied default isn't IntegerLiteralConvertible

func bar<T=Character>(t: T) { ... }

bar(22)
//  ^
//  |
//  What type gets inferred here?

By the same reasoning as above, this ought to be `Int`. What would the answer be in your proposed solution?

···



I don’t agree: adding a default to an existing type parameter should be a strict source-breaking change (unless the chosen type can avoid all other defaulting rules, see the end of this email).

Type Parameter Defaults, as I know them, are a tool for avoiding breakage when a new type parameter is introduced. That is, they allow you to perform the following transformation safe in the knowledge that it won’t break clients:

func foo(input: X)
func foo<T=X>(input: T)

For this to work, you need to make the <T=X> default have dominance over the other default rules.

Specifically you want this code to keep working identically:

// before
func foo(input: Int64)
foo(0) // Int64

// after
func foo<T=Int64>(input: T)
foo(0) // Int64

This is in direct conflict with making the following keep working identically:

// before
func foo<T>(input: T)
foo(0) // Int

// after
func foo<T=Int64>(input: T)
foo(0) // Int

You have to choose which of these API evolution patterns is most important, because you can’t make both work. To me, the first one is obviously the most important, because that’s the whole point of the feature. The reason to do the second one is to try to make a common/correct case more ergonomic and/or the default. But unlike function argument defaults, type parameters can already have inferred values.

Note that source breakage when adding defaults can be avoided as long as the chosen default isn’t:

* XLiteralConvertible (pseudo-exception: if the default is also the XLiteralType it’s fine, but that type is user configurable)
* A supertype of another type (T?, T!, SuperClass, Protocol, (…, someLabel: T, ...), [SuperType], [SuperType1:SuperType2], (SuperType) -> SubType, and probably more in the future)

Concretely this means it’s fine to retroactively make an existing generic parameter default to MyFinalClass, MyStruct, MyEnum, and collections/functions/unlabeled-tuples thereof. Arguably, Int/String/Bool/Array/etc are fine, but there’s a niche situation where using them can cause user breakage due to changing XLiteralType.

In practice I expect this will be robust enough to avoid breakage — I expect most defaults will be MyStruct/MyEnum, or an XLiteralType. Even if it’s not, you need to end up in a situation where inference can actually kick in and find an ambiguity *and* where the difference matters. (e.g. SubClass vs SuperClass isn’t a big deal in most cases)
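
The "user configurable" caveat is checkable in Swift today (runnable as-is): shadowing the module-level IntegerLiteralType typealias changes what unconstrained integer literals become:

typealias IntegerLiteralType = Int32

let x = 22
print(type(of: x))  // Int32, not Int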

···


Thanks for your questions Xiaodi, I might have missed some scenarios. I've had some rethinking and here are the conclusions. It's a slight refinement of the idea I had in the previous mail. Bear with me :)

If we have

    func process<T>(_ t: T) {}

    process(5)

the compiler will infer T as Int.

Now we introduce a default argument

    func process<T = Int64>(_ t: T) {}

and in order to keep source compatibility, if we do

    process(5)

we must again infer T as Int. That means that inference should have
priority over default arguments. That is in accordance with the rule we
defined earlier: if you can’t infer a particular type, fill in a default.
We are able to infer a particular type, Int, so we do that.

However, say we had

    struct Storage<T>: Integer {
        init(_ t: T)
    }

    let s = Storage(5)

and we wanted to introduce a default argument:

    struct Storage<T = Int64>: Integer {
        init(_ t: T)
    }

What happens with `s`? This is essentially the same problem as before,
so the solution should also be the same - we must infer Int.

Would that be confusing for a developer? Maybe, but what is the
alternative? Using the default would make no sense because then

    let s = Storage("ops")

would fail to compile. So inference in such cases is a must. A similar
problem would be observed with inheritance and/or protocol adoption.

    protocol P {}
    struct R: P {}

    struct Storage<T = P>: Integer {
        init(_ t: T)
    }

    let s = Storage(R())

Is T inferred as R or as P? To keep source compatibility, we must infer R.

In other words, I agree with you Xiaodi.

Now to my argument about type declarations. Say we now do

    let s: Storage = Storage(R())

In that case T must be inferred as P because a type annotation must not be
affected by inference. Storage on the left must be treated as Storage<P>.
If that were not the case, consider what would happen if you upgraded a
local variable to a property.

    class T {
        let s: Storage

        init() {
            s = Storage(R())
        }
    }

What is inferred for T must not change when making such an upgrade, and
allowing type inference in the initializer to affect the property type
would make no sense.

Thus, there has to be a rule that says:

    (I) In type declarations, no inference happens. By omitting generic
    arguments one accepts the defaults.

And to repeat our second rule:

    (II) When instantiating a generic type or calling a generic function,
    by omitting generic arguments one lets the compiler specify them,
    following the principle: infer a particular type if possible, fill in
    the default otherwise.

Let's go through some more examples.

Declaring a function like

    func clear(_ storage: Storage)

assumes `storage` to be of type Storage<P> because of rule (I).

Declaring a constant like

    let s = Storage(R())

will infer `s` as Storage<R> because of rule (II), but

    let s: Storage = Storage(R())

would be considered identical to

    let s: Storage<P> = Storage(R())

because T is defaulted to P and rule (I) is applied to the left-hand side;
rule (II) then applies to the right-hand side, inferring the type from the
left-hand side.

We must also consider:

    let s = Storage("123")

This is simple. With rule (II) we infer Storage<String>. However, if we do

    let s: Storage = Storage("123")

the compiler must emit an error: cannot assign Storage<String> to Storage<P>.

Next, consider the following type

    struct Storage<T = Int64>: Integer {
        init()
    }

If we do

    let s = Storage()

we should apply rule (II). In this case, no particular type can be
inferred, so we will fill in the default, meaning that `s` would be
Storage<Int64>.

What about generic function calls? Let's use the earlier example.

    protocol P {}
    struct R: P {}

    struct Storage<T = P>: Integer {
        init(_ t: T)
    }

    func clear<T>(_ storage: Storage<T>)

If we do

    clear(Storage())

we should use rule (II) to get the type. Here we don't have anything to
infer from, so we fill in the default: T = P, making the argument a
Storage<P>.

Doing

    clear(Storage(R()))

would use rule (II) to infer T as R.

However, consider a generic function with defaults:

    func clear<T = R>(_ storage: Storage<T>)

Now we have a default R in the function and a default P in the type. What
if we now do

    clear(Storage())

Should T be specialized to P or to R? This is a conflict of defaults. I'd
say it should be resolved in favour of the function. In a way, a function
that operates on type X could be considered an extension of that type and
has more specific knowledge of the use case for its arguments, so the
function's preference should be accepted.

So, trying to apply rule (II): there is nothing to infer, so we try to
fill in a default. We have two defaults - the function says the default is
R, the struct says the default is P. As we are resolving this in favour of
the function, we choose R.

Final example,

    clear(Storage(R()))

There are no conflicts here. By applying rule (II) we directly infer T as
R and use it regardless of the defaults.
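
To put rules (I) and (II) side by side on the same type (a recap of the
examples above; `reset` is just a hypothetical signature):

    let a = Storage(R())            // rule (II): T inferred as R
    let b: Storage = Storage(R())   // rule (I): annotation means Storage<P>
    let c: Storage = Storage("123") // error: cannot assign Storage<String>
                                    //        to Storage<P>
    func reset(_ s: Storage)        // rule (I): parameter is Storage<P>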

Sorry for the long email, but hopefully it's now more understandable.
Looking forward to your feedback.

···

On Thu, Jan 26, 2017 at 2:15 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

Srdan, I'm afraid I don't understand your discussion. Can you simplify it
for me by explaining your proposed solution in terms of Alexis's examples
below?

// Example 1: user supplied default is IntegerLiteralConvertible

func foo<T=Int64>(t: T) { ... }

foo(22)
//  ^
//  |
//  What type gets inferred here?

I believe that it is essential that the answer here be `Int` and not
`Int64`.

My reasoning is: a user's code *must not* change because a library *adds*
a default in a newer version. (As mentioned in several design docs, most
recently the new ABI manifesto, defaults in Swift are safe to add without
breaking source compatibility.)

Here, if version 1 of a library has `func foo<T>(t: T) { ... }`, then
`foo(22)` must infer `T` to be `Int`. That's just the rule in Swift, and it
would be severely source-breaking to change that. Therefore, if version 2
of that library has `func foo<T=Int64>(t: T) { ... }`, then `foo(22)` must
still infer `T` to be `Int`.

Does your proposed solution have the same effect?

// Example 2: user supplied default isn't IntegerLiteralConvertible

func bar<T=Character>(t: T) { ... }

bar(22)
//  ^
//  |
//  What type gets inferred here?

By the same reasoning as above, this ought to be `Int`. What would the
answer be in your proposed solution?

On Wed, Jan 25, 2017 at 2:07 PM, Srđan Rašić <srdan.rasic@gmail.com> wrote:

That's a good example Alexis. I do agree that generic arguments are
inferred in a lot of cases; my point was that they should not be inferred
in "type declarations". Not sure what's the right terminology here, but I
mean the following places:

(I) Variable/Constant declaration

  let x: X

(II) Property declaration

  struct T {
    let x: X
  }

(III) Function declaration

  func a(x: X) -> X

(IV) Enumeration case declaration

  enum E {
    case x(X)
  }

(V) Where clauses

  extension E where A == X {}

In those cases `X` should always mean `X<Int>` if it was defined as
`struct X<T = Int>`. That's all my rule says. Sorry for not being clear in
the last email :)

As for the other cases, mostly those where an instance is created,
inference should be applied.

Let's go through your examples. Given

struct BigInt<Storage: BinaryInteger = Int>: Integer {
  var storage: Array<Storage> = []
}

func process<T: BinaryInteger>(_ input: BigInt<T>) -> BigInt<T> { ... }

what happens with `let val1 = process(BigInt())`? I think this is
actually the same problem as what happens in the case of `let x = BigInt()`.

In such cases my rule does not apply, as we don't have a full type
declaration. In `let x = BigInt()` the type is not specified at all, while
in `func process<T: BinaryInteger>(_ input: BigInt<T>) -> BigInt<T> { ... }`
the type is explicitly weakened, or "undefaulted" if you will.


I fundamentally disagree. In this case, you’re treating “22” as having type Int, which is conceptually false. Literals do not have an inherent type - they are (ideally) abstract, arbitrary-precision “things” which can be transformed to a type in some context.

Let’s consider what a default binding for a generic parameter actually is and when it would be used by the compiler. It is a hint which gives the compiler context when it can’t infer a type in any other way. Basically: if you pass in a String as parameter ’t’, the compiler has some context and knows that T must be String.self. It is only when the parameter value is representable as multiple types that the default would be consulted.

As Gankro mentioned WRT the Rust proposal, Int is not a “default” type for IntegerLiteralConvertible; it is a “fallback" to be used when the compiler cannot infer any other type because there is absolutely no context (e.g. “let myNum = 42”). In some other context, that literal may be transformed in to a different numerical type, a struct, enum, class or even an entire class hierarchy.

I believe that programmers would expect the literal in the first example to be an Int64. The function declaration gives the compiler context for the preferred type to use.

Ultimately, default bindings for generic parameters are syntax-level conveniences. There is no code which is impossible without them; they save you being explicit about certain types by allowing the library author to pick sensible/optimal values to use in case you have no specific requirement for a particular parameter. Therefore, it stands to reason that adding a default parameter, and feeding the compiler some context where there previously was none, would be a potentially source-breaking change. It does not, however, necessarily need to be a resilience-breaking change; we could store the defaults information in some kind of metadata and leave the original function symbols intact. The function/type is still as generic as it ever was, after all.

In your example, T is never returned from the function, so it makes no difference to anybody what type “T” actually gets inferred as. If it did, and they tried to store it as some particular type which is no longer the resolved type of T, it would be a compile error. Just like we have to do all the time in Swift, they would have to provide explicit context - for example, by writing “22 as Int”.

If the design documents disagree, we can change them. There is no such thing as “default types” in the language right now (only default values), so it is surprising that ABI documentation would attempt to impose constraints on whether or not they may be source-breaking. It is more important that we have the right design for Swift, rather than one which is consistent with some arbitrary restrictions defined before the feature even existed.

As for the second example, yes it should be Int. The context, which tells the compiler to prefer Character, is not meaningful to an integer literal. In that case, the zero-context “fallback” applies - i.e. Swift.Int.
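
Concretely, the behaviour I'm arguing for on the two examples (a sketch, not what the compiler does today):

func foo<T=Int64>(t: T) { ... }
foo(22)          // T == Int64: the declared default supplies the context
foo(22 as Int)   // T == Int: explicit context from the user

func bar<T=Character>(t: T) { ... }
bar(22)          // T == Int: Character is not expressible by an integer
                 // literal, so the zero-context fallback applies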

- Karl

···


That's a very good point Alexis and makes sense to me. I'll update the
proposal with that in mind and revise my examples.

···

On Thu, Jan 26, 2017 at 7:06 PM, Alexis <abeingessner@apple.com> wrote:

On Jan 25, 2017, at 8:15 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

Srdan, I'm afraid I don't understand your discussion. Can you simplify it
for me by explaining your proposed solution in terms of Alexis's examples
below?

// Example 1: user supplied default is IntegerLiteralConvertible

func foo<T=Int64>(t: T) { ... }

foo(22)
//  ^
//  |
//  What type gets inferred here?

I believe that it is essential that the answer here be `Int` and not
`Int64`.

My reasoning is: a user's code *must not* change because a library *adds*
a default in a newer version. (As mentioned in several design docs, most
recently the new ABI manifesto, defaults in Swift are safe to add without
breaking source compatibility.)

I don’t agree: adding a default to an *existing* type parameter should be
a strict source-breaking change (unless the chosen type can avoid all other
defaulting rules, see the end of this email).

Type Parameter Defaults, as I know them, are a tool for avoiding breakage
when a *new* type parameter is introduced. That is, they allow you to
perform the following transformation safe in the knowledge that it won’t
break clients:

func foo(input: X)
func foo<T=X>(input: T)

For this to work, you need to make the <T=X> default have dominance over
the other default rules.

Specifically you want this code to keep working identically:

// before
func foo(input: Int64)
foo(0) // Int64

// after
func foo<T=Int64>(input: T)
foo(0) // Int64

This is in direct conflict with making the following keep working
identically:

// before
func foo<T>(input: T)
foo(0) // Int

// after
func foo<T=Int64>(input: T)
foo(0) // Int

You have to choose which of these API evolution patterns is most
important, because you can’t make both work. To me, the first one is
obviously the most important, because that’s the whole point of the
feature. The reason to do the second one is to try to make a common/correct
case more ergonomic and/or the default. But unlike function argument
defaults, type parameters can already have inferred values.

Note that source breaking with adding defaults can be avoided as long as
the chosen default isn’t:

* XLiteralConvertible (pseudo-exception: if the default is also the
XLiteralType it’s fine, but that type is user configurable)
* A supertype of another type (T?, T!, SuperClass, Protocol, (…,
someLabel: T, ...), [SuperType], [SuperType1:SuperType2], (SuperType) ->
SubType, and probably more in the future)

Concretely this means it’s fine to retroactively make an existing generic
parameter default to MyFinalClass, MyStruct, MyEnum, and
collections/functions/unlabeled-tuples thereof. Arguably,
Int/String/Bool/Array/etc are fine, but there’s a niche situation where
using them can cause user breakage due to changing XLiteralType.
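
A sketch of that niche (hypothetical code; recall that a module can rebind the literal fallback type):

// Client module rebinds the integer literal fallback:
typealias IntegerLiteralType = Int32

// Library v2 retroactively adds a default to an existing parameter:
func foo<T=Int>(input: T) { ... }

foo(0) // before the default was added, `0` fell back to Int32; now the
       // default says Int, so this client's inference can shift or break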

In practice I expect this will be robust enough to avoid breakage — I
expect most defaults will be MyStruct/MyEnum, or an XLiteralType. Even if
it’s not, you need to end up in a situation where inference can actually
kick in and find an ambiguity *and* where the difference matters. (e.g.
SubClass vs SuperClass isn’t a big deal in most cases)


Very interesting point, Alexis. So can you reiterate again which of the
four options you outlined earlier support this use case? And if there are
multiple, which would be the most consistent with the rest of the language?

And Srdan, could you incorporate that information into your discussion?

···


Both “prefer user” and “DWIM” are consistent with my desired solution for this specific problem (they pick Int64). DWIM seems more consistent with the rest of Swift to me in that it tries harder to find a reasonable interpretation of your code before giving up. I think it also ends up having the simplest implementation in the current compiler. You can potentially just add a new tie-breaker if-statement in this code: https://github.com/apple/swift/blob/master/lib/Sema/CSRanking.cpp#L1010

Something to the effect of “if one of these was recommended by a generic default, that one’s better”. This of course requires threading that information through the compiler.

···

Cool, thanks--that makes sense.

Personally, although DWIM is appealing, I think if we are to go all-out on
your stance that "adding a default to an existing type parameter should be
a strict source-breaking change," then "prefer user" is the one rule that
maximally clarifies the scenario. With that rule, in the evolution
scenarios that I brought up, either the user-specified default and the
inferred literal type line up perfectly or it is guaranteed to be
source-breaking. IMO, that consistency would bring more clarity than DWIM,
which might prompt a user to be confused why sometimes the compiler "gets
it" and other times it doesn't.

···

I don’t have much skin in the nuance of PU vs DWIM since, as far as I can tell, it’s backwards compatible to update from PU to DWIM. So we could conservatively adopt PU and then migrate to DWIM if that's found to be intolerable. I expect it will be intolerable, though.

Also, language subtlety thing here: there are lots of things which are *strictly* source breaking changes, but tend to work out 99% of the time anyway because of things like inference. I’m not at all opposed to making things work out 99.9% of the time instead. For instance, if I changed the Iterator type that some collection yielded, almost no one would notice, because they just pass it into a for loop or call a standard Sequence method on it. Still, it is strictly a source breaking change; someone’s code could stop compiling.
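
For instance (hypothetical collection, sketching that Iterator change):

// v1: func makeIterator() -> IndexingIterator<[Int]>
// v2: func makeIterator() -> AnyIterator<Int>   // strictly source-breaking

for x in myCollection { ... }  // the overwhelmingly common use; fine either way
let it: IndexingIterator<[Int]> = myCollection.makeIterator() // breaks under v2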

I’m not sure; I think it will be easy enough for users to figure out where the problem is, because it will create a type mismatch.
When type mismatches occur, the only place to look is the variable definition, because that is where the type is defined.

This is such a narrow case that I’m sure we can provide good diagnostics for it. The pattern could be:

- A generic parameter mismatch (i.e. trying to use a value of type MyType<X> where type MyType<Y> is expected), and
- X and Y are both {Whatever}LiteralConvertible, and
- X is the default type bound to that parameter, and
- the value was initialised using a {Whatever} literal, where an instance of the parameter was expected

In that case, we could introduce a simple fix-it: replacing one of the literal values with “(literal as Y)”.

for example:

struct Something<T=Int64> { let value: T }
func action(_: Something<Int>) { ... } // Expects a specific kind of Something<T>

let myThing = Something(value: 42) // Fix-it: Did you mean 'Something(value: 42 as Int)'?
action(myThing) // Error: No overload for 'action' which takes a Something<Int64>.

···

On 27 Jan 2017, at 01:30, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

Cool, thanks--that makes sense.

Personally, although DWIM is appealing, I think if we are to go all-out on your stance that "adding a default to an existing type parameter should be a strict source-breaking change," then "prefer user" is the one rule that maximally clarifies the scenario. With that rule, in the evolution scenarios that I brought up, either the user-specified default and the inferred literal type line up perfectly or it is guaranteed to be source-breaking. IMO, that consistency would bring more clarity than DWIM, which might prompt a user to be confused why sometimes the compiler "gets it" and other times it doesn’t.