Implicit truncation

Swift already has the FloatingPointRoundingRule enum — used in mutating func round(_ rule: FloatingPointRoundingRule) in FloatingPoint — which has all of that behavior as separate cases. So all that would be added to the language would be:

Int.init(rounding number: FloatingPoint, _ rule: FloatingPointRoundingRule)
Int.init(truncating number: FloatingPoint)

The latter would be source breaking and require migration, so we'd need it before Swift 4. This seems like a pretty easy change to make, though.

Also, this probably isn't the thread for this, but I noticed that the FloatingPoint protocol has a mutating round method, but not a non-mutating rounded() method. Would that be worth adding as well? It feels odd to not have the mutating/non-mutating pair of methods there.
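The two proposed initializers could be sketched roughly as follows. This is hypothetical, not shipping API; the labels are the proposal's, and the generic constraint is an assumption, since the bare `FloatingPoint` protocol can't be used directly as a parameter type:

```swift
// Hypothetical sketch of the proposed initializers, written generically
// over any BinaryFloatingPoint source.
extension FixedWidthInteger {
    /// Converts `number` using an explicit rounding rule.
    init<F: BinaryFloatingPoint>(rounding number: F, _ rule: FloatingPointRoundingRule) {
        self.init(number.rounded(rule))
    }

    /// Converts `number` by discarding the fractional part
    /// (rounding toward zero), matching today's `Int.init(_:)`.
    init<F: BinaryFloatingPoint>(truncating number: F) {
        self.init(number.rounded(.towardZero))
    }
}

let a = Int(rounding: 1.9, .toNearestOrAwayFromZero) // 2
let b = Int(truncating: 1.9)                         // 1
```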

Both `rounded()` and `round()` are already provided by FloatingPoint. As
with all proposed API additions, you'd need to demonstrate why
Int.init(rounding: value) is necessary given that Int.init(value.rounded())
is already possible. With so many initializers on Int already, I highly
doubt that adding more is a good move.

There is `Int.init?(exactly:)` for the originally requested feature. Now,
if one were to design from scratch, then perhaps Int.init(truncating:)
would be the most consistent name, but the unlabeled spelling is not
harmful, as `Int.init(_: Float)` is after all a non-failable initializer
that converts from a floating point value to an integer value. Put another
way, if the current unlabeled spelling weren't used for what it is today,
then there could be no unlabeled initializer on an integer type that takes
a floating point argument.

In any case, this issue is settled. The integer protocols have been
formally reviewed not once but twice, and not all of the approved design
has even been implemented yet. One cannot go back and bikeshed endlessly
what has already been approved.
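For reference, the approved design already covers each of these cases; a quick sketch using only shipping Swift 4 API:

```swift
let value = 2.6

let truncated = Int(value)               // 2   (rounds toward zero)
let nearest   = Int(value.rounded())     // 3   (to nearest, ties away from zero)
let ceiling   = Int(value.rounded(.up))  // 3
let exact     = Int(exactly: value)      // nil (2.6 has a fractional part)
let exactFour = Int(exactly: 4.0)        // Optional(4)
```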

···

On Sun, May 21, 2017 at 21:22 Robert Bennett via swift-evolution <swift-evolution@swift.org> wrote:


...And I do have to take one statement back: if I recall, using the label
"truncating" was considered and rejected for converting from floating point
values to integers. The reasoning was that, in Swift, the term "truncating"
is intended to be used exclusively for the truncating of bit patterns (such
as Int(extendingOrTruncating: 42 as Int8)). Although C has "trunc", the
same function is deliberately known in Swift as "rounded toward zero." Even
the documentation for Int.init(_: Float) deliberately uses that terminology
and never calls it "truncating."
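"Rounded toward zero" in practice, shown with shipping API: the unlabeled conversion discards the fractional part for both signs, matching C's trunc rather than floor:

```swift
let positive = Int(1.9)                   // 1
let negative = Int(-1.9)                  // -1 (floor would give -2)
let floored  = Int((-1.9).rounded(.down)) // -2
```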

···

On Sun, May 21, 2017 at 23:44 Xiaodi Wu <xiaodi.wu@gmail.com> wrote:


Reply below.

[…], but the unlabeled spelling is not harmful, as `Int.init(_: Float)` is after all a non-failable initializer that converts from a floating point value to an integer value.

I respectfully disagree. From my experience (tutoring), this is harmful. It leads to subtle bugs all over the place. That’s what I observe.

Another citation from Swift API Guidelines:

In initializers that perform value preserving type conversions, omit the first argument label, e.g. Int64(someUInt32)

This is not a value preserving conversion, is it?

R+

···

On 22 May 2017, at 06:44, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

There are no non-failable value-preserving conversions from floating point
values of any type to integers of any type. The API naming guidelines do
not require that _only_ value-preserving type conversions omit the first
argument; in this case, a strong convention among C family languages exists
for the behavior of the conversion, and the guidelines tell us to omit
needless words. This spelling has already been reviewed and approved for
Swift 4. What are the "subtle bugs all over the place" that you observe,
and what did those users expect to happen?

···

On Mon, May 22, 2017 at 06:17 Rudolf Adamkovič via swift-evolution <swift-evolution@swift.org> wrote:


They pretty much universally expect the value to be rounded. When this happens, I tell them about this language called C (… insert long story here ...) and we replace "Int(foo)" with "Int(foo.rounded())”. We write a unit test and move on. So far, so good. However, two weeks later, they do it again. And again. If the initializer said “truncating” or something along those lines, I’m sure they wouldn’t pick that one. They are all smart people; it’s just that there’s absolutely no “visible clue” and they don’t expect this to be the default behavior.

P.S. I haven’t followed the “integer protocols” debate and had no idea that “this issue is settled” and that it "cannot go back”. Also, I have no time nor interest to "bikeshed endlessly”. I just wanted to share real-world experience/observation with smart people in here (like you) to improve the language going forward.

Thank you!

R+

···

On 22 May 2017, at 14:18, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

What are the "subtle bugs all over the place" that you observe, and what did those users expect to happen?

Let's be clear: it _is_ rounded, just toward zero. It is consistent with
the behavior of integer division. I would guess that your students also
repeatedly struggle with the result that `2 / 3 == 0`? If so, they have not
been taught some important fundamentals of integer arithmetic. If not, then
it should be natural that `Int(2 / 3 as Double) == Int(2 / 3 as UInt)`.
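The consistency claim is easy to verify with shipping API: integer division and float-to-int conversion both discard the fractional part, i.e. round toward zero:

```swift
let q1 = 2 / 3              // 0  (integer division)
let q2 = Int(2.0 / 3.0)     // 0  (0.666... truncated)
let q3 = -7 / 2             // -3 (toward zero, not -4)
let q4 = Int(-7.0 / 2.0)    // -3
```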

···

On Mon, May 22, 2017 at 08:06 Rudolf Adamkovič <salutis@me.com> wrote:


I should add: thanks for sharing real-world info on learning and teaching
Swift. I agree that it's important to consider how the language is picked
up by new users. Here, what I'm saying is that I think your learners have
missed a much larger pedagogical point. When I teach new learners, whether
it's C or Python or Swift, I introduce the notion that programming
languages treat integers specially, and that any time an operation produces
a fractional value in basic math, the integer math equivalent in a
programming language often drops the fractional part. I find that learners
who have had this introduction do not subsequently struggle with
float-to-int casting in C, for example.

···

On Mon, May 22, 2017 at 09:09 Xiaodi Wu <xiaodi.wu@gmail.com> wrote:


Just to add my thoughts: while I agree that it's important developers learn the intricacies of Int vs. Float, I don't think this is quite the same issue as division.

If you're asking for an Int then you should have some idea that you're asking for a whole number only, and so that follows on to division as Ints simply cannot handle fractions.

However, for conversion from Float there's an expectation that some rounding must occur, the problem is that it's not as expected; while you or I would expect it to be this way with familiarity of other languages, for someone new to the language this isn't always going to be the case. While it's reasonable to expect new developers to know what an Int is, I think it's unreasonable for them also to remember what the default rounding strategy of an implicitly rounding constructor is.

For this reason I tend to agree with the principle that the Int(_:Float) constructor should probably be labelled more intuitively; personally I'd like to see:

  func init(truncating:Float) { … }
  func init(rounding:Float, _ strategy: FloatingPointRoundingRule) { … }

Here the init(truncating:) constructor is just a convenience form of init(rounding:) with a strategy of .towardZero, which I believe is consistent with the current behaviour. It's also easily swapped in anywhere that init(_:Float) etc. are currently used.
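The premise that a `.towardZero` rule reproduces the current unlabeled conversion can be checked with shipping API alone:

```swift
// Rounding .towardZero and the unlabeled Int.init(_:) agree for every
// tested value, positive and negative.
for x in [1.5, 1.99, -1.5, -1.99, 2.0, -2.0, 0.49] {
    assert(Int(x) == Int(x.rounded(.towardZero)))
}
```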

···

On 22 May 2017, at 15:09, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:



It's _not_ an arbitrarily chosen default rounding strategy to be
remembered. I mean, if you want to memorize it and move on, then of course
that's fine. But at heart it goes to what a decimal point is. Let's go back
to grade school. What does it mean to write "1.9"? Well, it means "one
whole thing and nine parts out of ten of a thing," or in other words, "1 +
9/10" or "19 / 10". Now, what happens in Swift?

let x = Int(19 / 10) // x == 1
let y = Int(1 + 9 / 10) // y == 1
let z = Int(1.9) // z == 1

If we're to speak of intuition for new developers who've never used a
programming language, who are falling back to what they know about
mathematics, then quite literally a decimal point _is_ about division by
ten.

For this reason I tend to agree with the principle that the Int(_:Float) constructor should probably be labelled more intuitively, personally I'd like to see:

func init(truncating:Float) { … }

Again, this particular naming suggestion has been discussed as part of the
review of integer protocols and not adopted. The rationale was that the
term "truncating" is intended to be left for bit patterns only. The term in
Swift is exclusively "rounded toward zero."

func init(rounding:Float, _ strategy: FloatingPointRoundingRule) { … }

Again, here, as an addition to the API, this fails the six criteria of Ben
Cohen, as it is strictly duplicative of `T(value.rounded(strategy))`.

···

On Mon, May 22, 2017 at 9:30 AM, Haravikk <swift-evolution@haravikk.me> wrote:


If we're to speak of intuition for new developers who've never used a programming language, who are falling back to what they know about mathematics, then quite literally a decimal point _is_ about division by ten.

I don't think this necessarily follows; the issue here is that the constructor isn't explicit enough that it is simply lopping off the fractional part. From my own experience of maths as taught in school, going from a decimal to an integer I would expect to round, so I think it's reasonable that Swift should be clear. While it is reflected in the documentation, a good choice of label would allow it to be explicit at the point of use, without requiring a look up each time there is uncertainty.

  func init(truncating:Float) { … }

Again, this particular naming suggestion has been discussed as part of the review of integer protocols and not adopted. The rationale was that the term "truncating" is intended to be left for bit patterns only. The term in Swift is exclusively "rounded toward zero."

As I understand it truncation is a term of art from C at least (rounding toward zero is the trunc function I believe?), it also makes sense given that what's happening is that the fractional part is being discarded, regardless of how high it may be. init(roundTowardZero:Float) seems like it would be very unwieldy by comparison just because truncating is arbitrarily reserved for bit operations.

Also, when it comes down to it, discarding the fractional part of a float is a bit-pattern operation of a sort, as the conversion is simplistically taking the significand, dropping it into an Int then shifting by the exponent.

  func init(rounding:Float, _ strategy: FloatingPointRoundingRule) { … }

Again, here, as an addition to the API, this fails the six criteria of Ben Cohen, as it is strictly duplicative of `T(value.rounded(strategy))`.

Maybe, but init(rounding:) is explicit that something is being done to the value, at which point there's no obvious harm in clarifying what (or allowing full freedom). While avoiding redundancy is good as a general rule, it doesn't mean there can't be any at all if there's some benefit to it; in this case clarity of exactly what kind of rounding is taking place to the Float/Double value.

···

On 22 May 2017, at 15:51, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

Xiaodi,

thank you for your time and very thoughtful replies!

You’re right, there’s no way around teaching integer arithmetic as it applies to many other cases, e.g. 2 / 3 == 0.

Happy coding and thanks again!

R+

If we're to speak of intuition for new developers who've never used a
programming language, who are falling back to what they know about
mathematics, then quite literally a decimal point _is_ about division by
ten.

I don't think this necessarily follows; the issue here is that the
constructor isn't explicit enough that it is simply lopping off the
fractional part. My own experience of maths as taught in school, to go from
a decimal to an integer I would expect to round,

You would also expect that 3 / 4 in integer math gives you 1. With integer
division, however, 3 / 4 == 0. By definition the decimal point separates an
integer from a fractional part, so the behaviors are inextricably linked.
To test this out in practice, I asked the first person with no programming
experience I just encountered today.

I said: "Let me teach you one fact about integers in a programming
language. When two integers are divided, the integer result has the
fractional part discarded; for example, 3/4 computes to 0. What would you
expect to be the result of converting 0.75 to an integer?"

He answered immediately: "I would have expected that 3/4 gives you 1, but
since 3/4 gives you 0, I'd expect 0.75 to convert to 0."

so I think it's reasonable that Swift should be clear. While it is reflected in the documentation, a good choice of label would allow it to be explicit at the point of use, without requiring a look up each time there is uncertainty.

func init(truncating:Float) { … }

Again, this particular naming suggestion has been discussed as part of the
review of integer protocols and not adopted. The rationale was that the
term "truncating" is intended to be left for bit patterns only. The term in
Swift is exclusively "rounded toward zero."

As I understand it truncation is a term of art from C at least (rounding
toward zero is the trunc function I believe?), it also makes sense given
that what's happening is that the fractional part is being discarded,
regardless of how high it may be. init(roundTowardZero:Float) seems
like it would be very unwieldy by comparison just because truncating is
arbitrarily reserved for bit operations.

Also, when it comes down to it, discarding the fractional part of a float
*is* a bit-pattern operation of a sort,

Discarding the fractional part of a floating point value is a bit pattern
operation only in the sense that any operation on any data is a bit pattern
operation. It is clearly not, however, an operation truncating a bit
pattern.

as the conversion is simplistically taking the significand, dropping it into an Int then shifting by the exponent.

That's not correct. If you shift the significand or the significand bit
pattern of pi by its exponent, you don't get 3.
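That claim can be checked directly (this sketch assumes an IEEE 754 `Double`):

```swift
// pi = 1.1001001000011111... (binary) x 2^1
let pi = Double.pi
let fraction = pi.significandBitPattern   // the 52 stored fraction bits
let shifted = fraction >> (Double.significandBitCount - pi.exponent)
// Shifting the significand bit pattern by the exponent leaves 1, not 3...
assert(shifted == 1)
// ...whereas the actual conversion yields 3.
assert(Int(pi) == 3)
```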

func init(rounding:Float, _ strategy: FloatingPointRoundingRule) { … }

Again, here, as an addition to the API, this fails the six criteria of Ben
Cohen, as it is strictly duplicative of `T(value.rounded(strategy))`.

Maybe, but init(rounding:) is explicit that something is being done to the
value, at which point there's no obvious harm in clarifying what (or
allowing full freedom). While avoiding redundancy is good as a general
rule, it doesn't mean there can't be any at all if there's some benefit to
it; in this case clarity of exactly what kind of rounding is taking place
to the Float/Double value.

The bar for adding new API to the standard library is *far* higher than
"some benefit"; `Int(value.rounded(.up))` is the approved spelling for
which you are proposing a second spelling that does the same thing.

···

On Mon, May 22, 2017 at 10:39 Haravikk <swift-evolution@haravikk.me> wrote:


These are two different cases; Int(3) / Int(4) is a division of integer values with an integer result; there's no intermediate floating point value that needs to be coerced back into an Int. The issue here is conversion of a Float/Double to an Integer; it's a different operation.

3.0 / 4.0 = 0.75 is a property of Float
3 / 4 = 0 is a property of Int

What's under discussion here is conversion between the two; they're not really comparable cases.

Discarding the fractional part of a floating point value is a bit pattern operation only in the sense that any operation on any data is a bit pattern operation. It is clearly not, however, an operation truncating a bit pattern.

as the conversion is simplistically taking the significand, dropping it into an Int then shifting by the exponent.

That's not correct. If you shift the significand or the significand bit pattern of pi by its exponent, you don't get 3.

I think you're misunderstanding me. If you think of it in base 10 terms 1.2345 is equivalent to 12345 × 10⁻⁴; when you convert that to an Int it effectively becomes 12345 shifted four places to the right, leaving you with 1. In that sense it's a truncation of the bit-pattern as you're chopping part of it off, or at the very least are manipulating it.

Regardless it's also very literally a truncation since you're specifically truncating any fraction part, it's simply the most correct term to use; frankly I find restricting that to bit-pattern truncation to be entirely arbitrary and unhelpful. The types involved should make it clear whether the value is being made narrower or not. Int64 -> Int32 is a form of truncation, but so too is Float -> Int; in both cases the target can't represent all values of the source, so something will be lost.

  func init(rounding:Float, _ strategy: FloatingPointRoundingRule) { … }

Again, here, as an addition to the API, this fails the six criteria of Ben Cohen, as it is strictly duplicative of `T(value.rounded(strategy))`.

Maybe, but init(rounding:) is explicit that something is being done to the value, at which point there's no obvious harm in clarifying what (or allowing full freedom). While avoiding redundancy is good as a general rule, it doesn't mean there can't be any at all if there's some benefit to it; in this case clarity of exactly what kind of rounding is taking place to the Float/Double value.

The bar for adding new API to the standard library is *far* higher than "some benefit"; `Int(value.rounded(.up))` is the approved spelling for which you are proposing a second spelling that does the same thing.

The main benefit is that the constructor I proposed would actually require the developer to do this; what you're showing is entirely optional, i.e. any value can be passed without consideration of the rounding that is occurring, or that it may not be as desired. With a label, the constructor would at least remind the developer that rounding is occurring (i.e. the value may not be as passed). Going further and requiring them to provide a rounding strategy would also force them to consider what method of rounding should actually be used, eliminating any confusion entirely. What you're demonstrating there does not provide any of these protections against mistakes, as you can omit the rounding operation without any warning and end up with a value you didn't expect.

A secondary benefit is that any rounding that does take place can do so within the integer type itself, potentially eliminating a Float-to-Float rounding followed by truncation; i.e. since rounding toward zero is the same as truncation, it can be optimised away entirely.
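The failure mode described here can be sketched with a hypothetical helper (the function names and the percentage scenario are invented for illustration, not from the thread):

```swift
// Nothing at the call site of Int(_:) signals that rounding toward zero
// happens, so the intended .rounded() step is easy to omit.
func percentage(_ score: Double) -> Int {
    return Int(score * 100)              // silently truncates: 0.666 -> 66
}

func percentageRounded(_ score: Double) -> Int {
    return Int((score * 100).rounded())  // explicit: 0.666 -> 67
}
```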

···

On 22 May 2017, at 21:16, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

We're not disagreeing here. Or at least, I'm not disagreeing with you. Of
course, integer division is not the same operation as conversion of Float
to Int. However, division and decimals are related concepts, obviously. I
don't think you dispute that.

How integer types behave under division and how they behave when converting
from a decimal value are therefore inevitably related. And as my
non-programming colleague demonstrated today, new users will use their
knowledge regarding the behavior of one operation to shape expectations
regarding the behavior of the other.


I think you're misunderstanding me. If you think of it in base 10 terms 1.2345 is equivalent to 12345 × 10⁻⁴; when you convert that to an Int it effectively becomes 12345 shifted four places to the right, leaving you with 1. In that sense it's a truncation of the bit-pattern as you're chopping part of it off, or at the very least are manipulating it.

No. It is of course true that the most significant bit of the binary
representation of 12345 is 1. However:

12345 >> 1 == 6172
12345 >> 2 == 3086
12345 >> 3 == 1543
12345 >> 4 == 771
12345 >> 5 == 385
etc.

That is to say, you can't get 1234, 123, or 12 by truncating the bit
pattern of 12345. There is no sense in which converting 1234.5, 123.45, or
12.345 to an integer involves truncating the bit pattern of 12345. You are
not in that case performing repeated integer division by 2, but rather
repeated integer division by 10, which goes back to how decimals and
division are inextricably related.
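The arithmetic above can be checked directly; as a quick sketch, right-shifting 12345 never yields 1234, whereas dividing by 10 does:

```swift
let n = 12345

// Right-shifting truncates the *bit pattern*: repeated division by 2.
assert(n >> 1 == 6172)
assert(n >> 4 == 771)

// Dropping decimal digits is repeated division by 10, which is not
// a bit-pattern operation.
assert(n / 10 == 1234)
assert(n / 100 == 123)
```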

Regardless, it's also quite literally a truncation, since you're
specifically truncating away any fractional part; it's simply the most
correct term to use. Frankly, I find restricting the term to bit-pattern
truncation to be entirely arbitrary and unhelpful.

It is important and not at all arbitrary. Binary integers model two things
simultaneously: an integral value and a sequence of bits. Much care was
placed during the design and review of the revised integer protocols in
making sure that the names of operations that view integers as integral
values are distinguished from those that view integers as sequences of
bits. It was accepted that "truncating" and "extending" would be applied to
operations on bit patterns only, which is why it was OK to shorten the
label from `truncatingBitPattern` to `truncating` (later renamed
`extendingOrTruncating`, for other fairly obvious reasons). By analogy, the
expected value for a hypothetical `Int32(truncating: 42.0 as Double)` would
be 252867936, which is of questionable usefulness. It would, however, be
confusing and unhelpful to use the same word to describe an operation on
the represented real or integral value which is now used only for a very
different operation on a sequence of bits.
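The distinction between the two kinds of "truncation" can be sketched in a couple of lines (using `truncatingIfNeeded`, the label that the `extendingOrTruncating` spelling eventually became in shipping Swift):

```swift
// Bit-pattern truncation: keep only the low 8 bits of 300 (0b1_0010_1100).
let wrapped = UInt8(truncatingIfNeeded: 300)   // 0b0010_1100 == 44

// Value conversion from floating point: discard the fractional part.
let converted = Int(3.75)                      // 3

// These are different operations on different views of the data,
// which is why reusing the word "truncating" for both would mislead.
```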

The types involved should make it clear whether the value is being made
narrower or not. Int64 -> Int32 is a form of truncation, but so too is
Float -> Int; in both cases the target can't represent all values of the
source, so something will be lost.

init(rounding value: Float, _ strategy: FloatingPointRoundingRule) { … }

Again, here, as an addition to the API, this fails the six criteria of
Ben Cohen, as it is strictly duplicative of `T(value.rounded(strategy))`.

Maybe, but init(rounding:) makes it explicit that something is being done
to the value, at which point there's no obvious harm in clarifying what (or
allowing full freedom). While avoiding redundancy is good as a general
rule, it doesn't mean there can't be any at all if there's some benefit to
it; in this case, clarity about exactly what kind of rounding is being
applied to the Float/Double value.

The bar for adding new API to the standard library is *far* higher than
"some benefit"; `Int(value.rounded(.up))` is the approved spelling for
which you are proposing a second spelling that does the same thing.

The main benefit is that the constructor I proposed would actually require
the developer to do this; what you're showing is entirely optional, i.e.
any value can be passed without consideration of the rounding that is
occurring, or that it may not be as desired. With a label, the constructor
would at least remind the developer that rounding is occurring (i.e. the
value may not be as passed).
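The proposed initializer could be sketched today as an extension; this is a hypothetical API from this thread, not part of the standard library, built on top of the existing `rounded(_:)`:

```swift
extension FixedWidthInteger {
    /// Hypothetical `init(rounding:_:)` from this thread: forces callers
    /// to state a rounding rule before a float becomes an integer.
    init<T: BinaryFloatingPoint>(rounding value: T, _ rule: FloatingPointRoundingRule) {
        self.init(value.rounded(rule))
    }
}

let up = Int(rounding: 0.75, .up)                            // 1
let down = Int(rounding: 0.75, .down)                        // 0
let nearest = Int(rounding: 0.75, .toNearestOrAwayFromZero)  // 1
```

Note that this is strictly sugar for `Int(value.rounded(rule))`, which is the crux of the objection being raised against it.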

I do not think any users expect 0.75 to be represented exactly as an
integer; that's not at issue here. The question is, which users see
`Int(0.75)` and think, "this must mean that 0.75 is rounded up to 1"? My
answer, from prior experiences teaching beginners, is that the subset of
users who make this mistake (or a similar one in other languages) largely
overlaps the subset of users who see `3 / 4` and think "this must evaluate
to 1."

Going further and requiring them to provide a rounding strategy would also
force them to consider what method of rounding should actually be used,
eliminating any confusion entirely. What you're demonstrating there does
not provide any of these protections against mistakes, as you can omit the
rounding operation without any warning, and end up with a value you didn't
expect.

It is, of course, true that a user who does not read the documentation and
expects a function that does A instead to do B will use that function
incorrectly. The question is whether it is reasonably common for users to
make the incorrect assumption, and whether such incorrectness ought to be
accommodated by a breaking change to the language. Here I am arguing that
users who are aware of integer division are unlikely to make the incorrect
assumption; they will at minimum look up what the actual behavior is, and
based on my teaching experience and today's mini-experiment, they are
likely actually to expect the existing behavior. (And, of course, I am
arguing that users who are unaware of integer division have a much more
serious gap in knowledge that is the primary issue, not fixable by tweaking
the name of integer initializers.)

A secondary benefit is that any rounding that does take place can do so
within the integer type itself, potentially eliminating a Float-to-Float
rounding followed by truncation; i.e. since rounding toward zero is the
same as truncation, it can be optimised away entirely.

Renaming the `Int.init(_: Float)` initializer, by definition, cannot
recover any performance benefits. I'm not aware of any optimizations of
other rounding modes that can make such an initializer faster than what is
currently possible.

···

On Mon, May 22, 2017 at 5:21 PM, Haravikk <swift-evolution@haravikk.me> wrote:

On 22 May 2017, at 21:16, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
On Mon, May 22, 2017 at 10:39 Haravikk <swift-evolution@haravikk.me> > wrote:

On 22 May 2017, at 15:51, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

This is getting too muddled, so I'm going to summarise thusly; I believe the principle the OP raised is sound, that any initialiser performing a lossy conversion from another type should be clear that this is happening, and that a label can do that best, as it makes the conversion self-documenting at the call-site.

Consider for example; with type-inference it's not always obvious what the type of a variable is, and an unlabelled initialiser does nothing to help, like-so:

  var someValue = someMethod()
  …
  var someInt = Int(someValue)

At a glance it's not at all obvious what is happening to someValue here; in fact I'd argue that this looks like a lossless conversion, requiring you to find out that someMethod() returns a Float before you can know for sure what's really going on. Whereas the following is more clear:

  var someValue = someMethod()
  …
  var someInt = Int(truncating: someValue)

It may not communicate everything that's happening, but at least now it's clear at a glance that something is happening, and the term truncating suggests that something is being lost/removed from someValue.

Now I don't really care if truncating is the best term for it or not, though I do still think it is and won't change my mind on that; the key thing here is that it's providing that extra safety to developers by making it clear that an important conversion is taking place, and this to me is more consistent with other initialisers on Int etc. where distinctions are made between types that can and cannot be represented exactly.

I'm not saying it should be limited to floats either; I think any initialiser that cannot, or may not, represent the passed value accurately should be labelled for clarity. So passing an Int16 into Int32(_:) would be fine for example, but the reverse should not be.

It's not just an issue of new developers; experienced ones make mistakes too, or forget to consider whether the conversion will impact their code. A label helps, though I still think forcing a decision on rounding is even better, as both prevent a developer from simply throwing in a value in a way that may be a mistake.

I completely agree with Haravikk. This is not C, we have type inference, and this behaviour is different from other non-failable lossy conversions in Swift.

Regarding uses of ‘truncating’, Xiaodi is wrong. The documentation of BinaryInteger.init<T: FloatingPoint>(_ source: T) in the accepted proposal specifically uses the term in the way the original poster used it:
  /// Creates an integer from the given floating-point value, truncating any
  /// fractional part.
(swift-evolution/proposals/0104-improved-integers.md at master · apple/swift-evolution · GitHub)

Cheers,
Guillaume Lessard

This is getting too muddled, so I'm going to summarise thusly; I believe
the principle the OP raised is sound, that any initialiser performing a
lossy conversion from another type should be clear that this is happening,
and that a label can do that best, as it makes the conversion
self-documenting at the call-site.

This is a much wider claim than any advanced in the conversation
previously. Swift documentation refers to `init(_:)` as the "default
initializer." If I understand you, you are arguing that _any default
initializer must be value-preserving (monomorphic)_. This is plainly not
the current standard. For example, the following is a default conversion
but not a monomorphic conversion, because multiple different arrays yield
the same resulting set:

let x = Set([1, 2, 3, 4, 3, 2, 1])

// x: Set<Int> = {
//  [0] = 2
//  [1] = 3
//  [2] = 1
//  [3] = 4
// }

Much has been quoted from the Swift API guidelines. The language in that
document is fairly terse: it gives guidelines in imperative form (e.g.,
"Omit useless words") with some follow-on suggestions worded more softly.
The strong guideline is that value-preserving (monomorphic) initializers
should omit the label. Notably, it is _not_ the other way around: that is,
it does _not_ require that all initializers omitting the label are
value-preserving. Now, there _is_ a "recommend[ation]" to distinguish
non-value-preserving initializers by a label. During the discussion on this
list about that recommendation, IIRC, the gist was that this is geared
toward situations where some conversions from type A to type B are lossy
and others are not; in that case, a label should be provided for the lossy
conversion(s) so that users don't use it when they mean to use the lossless
conversion.

In the example of array-to-set conversion, there is _no possible
non-failable monomorphic conversion_ from array to set; however, there is
an accepted default lossy way to make the conversion, and therefore the
default initializer adopts that behavior and does not need to announce that
it is lossy. Likewise, there is _no possible non-failable monomorphic
conversion_ from floating point to integer; however, there is an accepted
default lossy way to make the conversion (it is an LLVM intrinsic), and
therefore the default initializer adopts that behavior and does not need to
announce that it is lossy.
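As a side-by-side sketch of the parallel being drawn here, both default initializers below are unlabeled yet lossy:

```swift
// Array-to-set: duplicates are discarded; no label announces it.
let set = Set([1, 2, 3, 4, 3, 2, 1])   // {1, 2, 3, 4}, in some order

// Float-to-int: the fractional part is discarded; no label announces it.
let int = Int(2.9)                      // 2
```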

Now, if you want to argue that you in particular do not accept that the
LLVM intrinsic is the default lossy way to convert from floating point to
integer, or that it is highly confusing, here's the place for that
discussion. But you're now arguing (IIUC) that regardless of how accepted
the default behavior is, if it is not lossless then it must be spelled with
a label, and that is plainly not the current convention in Swift.

Consider for example; with type-inference it's not always obvious what the
type of a variable is, and an unlabelled initialiser does nothing to help,
like-so:

var someValue = someMethod()

var someInt = Int(someValue)

At a glance it's not at all obvious what is happening to someValue here; in
fact I'd argue that this looks like a lossless conversion, requiring you to
find out that someMethod() returns a Float before you can know for sure
what's really going on.

Again, this is based on the claim that _only_ lossless conversions use
unlabeled initializers. You express the opinion that it _should_ be the
case above, but as I have already replied, it is factually _not_ the case
currently. That is, for an arbitrary pair of types A and B:

let x = A()
let y = B(x) // There is _no guarantee_ that this is lossless.

Whereas the following is more clear:

var someValue = someMethod()

var someInt = Int(truncating: someValue)

It may not communicate everything that's happening, but at least now it's
clear at a glance that *something* is happening, and the term truncating
suggests that something is being lost/removed from someValue.

Now I don't really care if truncating is the best term for it or not,
though I do still think it is and won't change my mind on that;

I think we're done here, then. What is the point of having a discussion
with reasoned arguments if you're pre-committed to not changing your mind?

the key thing here is that it's providing that extra safety to developers
by making it clear that an important conversion is taking place, and this
to me is more consistent with other initialisers on Int etc. where
distinctions are made between types that can and cannot be represented
exactly.

I'm not saying it should be limited to floats either; I think any
initialiser that cannot, or may not, represent the passed value accurately
should be labelled for clarity.

If you want to make that argument, it is a different discussion from the
one here about integer math and floating point. You'd be re-opening a
discussion on the Swift API naming guidelines and asking for a much wider
re-examination of the entire Swift API surface.

···

On Tue, May 23, 2017 at 5:36 AM, Haravikk <swift-evolution@haravikk.me> wrote:

So passing an Int16 into Int32(_:) would be fine for example, but the
reverse should not be.

It's not just an issue of new developers; experienced ones make mistakes
too, or forget to consider whether the conversion will impact their code. A
label helps, though I still think forcing a decision on rounding is even
better, as both prevent a developer from simply throwing in a value in a
way that may be a mistake.

I completely agree with Haravikk. This is not C, we have type inference,
and this behaviour is different from other non-failable lossy conversions
in Swift.

Regarding uses of ‘truncating’, Xiaodi is wrong. The documentation of
BinaryInteger.init<T: FloatingPoint>(_ source: T) in the accepted proposal
specifically uses the term in the way the original poster used it:
  /// Creates an integer from the given floating-point value, truncating any
  /// fractional part.
(https://github.com/apple/swift-evolution/blame/master/proposals/0104-improved-integers.md#L444)

The doc comments in the proposal text were meant as explanation for readers
of the proposal; they undergo editing for accuracy during implementation.
That wording is, notably, not found in the implemented protocol. Instead:

/// When you create a binary integer from a floating-point value using the
/// default initializer, the value is rounded toward zero before the range is
/// checked. In the following example, the value `127.75` is rounded to `127`,
/// which is representable by the `Int8` type. `128.25` is rounded to `128`,
/// which is not representable as an `Int8` instance, triggering a runtime
/// error.

0ae64da565/stdlib/public/core/Integers.swift.gyb#L1216

This wording is also reflected in the pre-SE-0104-take-2 documentation for
the concrete implementation on the type itself, showing that it is a
longstanding convention:

`init(_:)`
Creates a new instance by rounding the given floating-point value toward
zero.
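The documented round-toward-zero-then-range-check behavior can be demonstrated directly (the trapping case is left as a comment, since it crashes at runtime):

```swift
// 127.75 is rounded toward zero to 127, which fits in Int8.
let ok = Int8(127.75)              // 127

// The failable initializer rejects any value that is not exactly
// representable, fractional part included.
let exact = Int8(exactly: 127.75)  // nil

// Int8(128.25) would round toward zero to 128, which is out of range
// for Int8, triggering a runtime trap.
```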

···

On Tue, May 23, 2017 at 1:16 PM, Guillaume Lessard via swift-evolution < swift-evolution@swift.org> wrote:

What? It shows up in my browser! This is how it looks right now: (in https://github.com/apple/swift/blob/368847b5c7581b9024347f0a73fc83eb6d9866a8/stdlib/public/core/Integers.swift.gyb#L1366)

/// Creates an integer from the given floating-point value, truncating any
/// fractional part.
///
/// Truncating the fractional part of `source` is equivalent to rounding
/// toward zero.

Let’s not cherry-pick.

Is there another lossy initializer for Int or BinaryInteger that doesn’t have a parameter label?

GL

···

On May 23, 2017, at 13:25, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

The doc comments in the proposal text were meant as explanation for readers of the proposal; they undergo editing for accuracy during implementation. That wording is, notably, not found in the implemented protocol.