SE-0170: NSNumber bridging and Numeric types

On Wed, Apr 19, 2017 at 6:00 PM, Philippe Hausler <phausler@apple.com> wrote:

So, as I understand it, `Float.init(exactly: Double.pi) == nil`. I would
expect NSNumber to behave similarly (a notion with which Martin disagrees,
I guess). I don't see a test that shows whether NSNumber behaves or does
not behave in that way.

At present they behave differently:

    print(Float(exactly: Double.pi) as Any)
    // nil
    print(Float(exactly: NSNumber(value: Double.pi)) as Any)
    // Optional(3.14159274)

I realize that identical behavior would be logical and least surprising.
My only concern was about cases like

    let num = ... // some NSNumber from a JSON deserialization
    let fval = Float(exactly: num)

where one cannot know how the number is represented internally and what
precision it needs. But then one could use the truncating conversion or
`.floatValue` instead.
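
For illustration, here is how those three options differ on a value that a Float cannot represent exactly (the exact form is shown with the proposed behavior, i.e. matching Float(exactly: Double); as noted above, toolchains at the time returned a rounded value instead):

    import Foundation

    let num = NSNumber(value: Double.pi) // stand-in for a deserialized number

    // Exact bridging: nil whenever Float cannot represent the value exactly.
    let exact = Float(exactly: num)

    // Lossy alternatives for callers who only need "close enough":
    let truncated = Float(truncating: num) // lossy initializer from this proposal
    let accessor  = num.floatValue         // classic NSNumber accessor

    print(exact as Any, truncated, accessor)
    // nil 3.1415927 3.1415927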

JSON numbers are double-precision floating point, unless I'm
misunderstanding something. If someone writes `Float(exactly:
valueParsedFromJSON)`, surely, that can only mean that they *really,
really* prefer nil over an imprecise value. I can see no other reason to
insist on using both Float and .init(exactly:).

JSON does not claim 32 or 64 bit floating point, or for that matter 128
or infinite bit floating point :(

Oops, you're right. I see they've wanted to future-proof this. That said,
RFC 7159 *does* say:

This specification allows implementations to set limits on the range
and precision of numbers accepted. Since software that implements
IEEE 754-2008 binary64 (double precision) numbers [IEEE754] is
generally available and widely used, good interoperability can be
achieved by implementations that expect no more precision or range
than these provide, in the sense that implementations will
approximate JSON numbers within the expected precision.

So JSON doesn't set limits on how numbers are represented, but JSON
implementations are permitted to (and I'd imagine that all in fact do). A
user of a JSON deserialization library can rightly expect to know the
numeric limits of that implementation; for the purposes of bridging
NSNumber, if the answer is that the implementation parses JSON numbers as
double-precision values, Double(exactly:) would be the right choice;
otherwise, if it's 80-bit values, then Float80(exactly:) would be the right
choice, etc.

Float80 is not compatible with NSNumber, and is well out of scope for this proposal.

OK, so Double is the largest floating point type compatible with NSNumber?
It stands to reason that any Swift JSON implementation that uses NSNumber
for parsed floating point values would at most have that much range and
precision, right?

If so, then every floating point value parsed by any such Swift JSON implementation would be exactly representable as a Double: regardless of whether that specific implementation uses Float or Double under the hood, every Float can be represented exactly as a Double. If a user is trying to bridge such an NSNumber instance specifically to *Float* instead of Double, and they are asking for an exact value, there's no a priori reason to think that this user would be more likely to care only about the range and not the precision, or vice versa. Which is to say, I don't think you'll get too many bug reports :)
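
As a quick check of that widening claim: Float-to-Double conversion is always exact, so only the narrowing direction can fail.

    let f = Float.pi
    let d = Double(f)                 // widening is exact: binary32 embeds into binary64
    print(Double(exactly: f) as Any)  // Optional(3.1415927410125732) - never nil
    print(Float(exactly: d) == f)     // true: the round trip is lossless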

···

On Thu, Apr 20, 2017 at 11:11 Philippe Hausler <phausler@apple.com> wrote:

After thinking about it more, it seems reasonable to restrict it to the behavior of Float(exactly: Double(…)). I am certain this will, in the end, cause more bugs for me to address and mark as “behaves correctly”, and confuse a few new developers - but in the end they chose Swift, and the consistent story is the current behavior of Float(exactly: Double).

On Tue, Apr 18, 2017 at 11:43 AM, Philippe Hausler <phausler@apple.com> wrote:

On Apr 18, 2017, at 9:22 AM, Stephen Canon <scanon@apple.com> wrote:

On Apr 18, 2017, at 12:17 PM, Joe Groff <jgroff@apple.com> wrote:

On Apr 17, 2017, at 5:56 PM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

It seems Float.init(exactly: NSNumber) has not been updated to behave
similarly?

I would have to say, I would naively expect "exactly" to behave exactly
as it says, exactly. I don't think it should be a synonym for
Float(Double(exactly:)).
On Mon, Apr 17, 2017 at 19:24 Philippe Hausler via swift-evolution <swift-evolution@swift.org> wrote:
I posted my branch and fixed up the Double case to account for your
concerns (with a few inspired unit tests to validate)

https://github.com/phausler/swift/tree/safe_nsnumber

There is a built-in assumption here though: it presumes that Swift’s representation of Double and Float is IEEE 754 compliant. However, that is a fairly reasonable assumption in the tests.

Even with the updated code at https://github.com/phausler/swift/tree/safe_nsnumber

    print(Double(exactly: NSNumber(value: Int64(9000000000000000001))) as Any)
    // Optional(9e+18)

still succeeds; however, the reason seems to be an error in the `init(exactly value: someIntegerType)` initializers of Float/Double. I have submitted a bug report: https://bugs.swift.org/browse/SR-4634.

(+Steve Canon) What is the behavior of Float.init(exactly: Double)? NSNumber's behavior would ideally be consistent with that.

The implementation is essentially just:

    self.init(other)
    guard Double(self) == other else {
        return nil
    }

i.e. if the result is not equal to the source when round-tripped back
to double (which is always exact), the result is nil.

– Steve

Pretty much the same trick inside of CFNumber/NSNumber
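
Putting Steve's description together with that remark, the exact Float-from-NSNumber bridge presumably reduces to the same round-trip test, applied to the widest value the NSNumber reports. A minimal sketch of that logic (a hypothetical free function, not the actual Foundation implementation; integer-backed NSNumbers would additionally need an exactness check against the stored integer, cf. SR-4634 above):

    import Foundation

    func floatExactly(_ number: NSNumber) -> Float? {
        // Double is the widest floating point view NSNumber offers.
        let d = number.doubleValue
        // Same trick as Float.init(exactly: Double): convert, then require
        // that the round trip back to Double restores the source.
        let f = Float(d)
        guard Double(f) == d else { return nil }
        return f
    }

    print(floatExactly(NSNumber(value: Double.pi)) as Any) // nil
    print(floatExactly(NSNumber(value: 0.5)) as Any)       // Optional(0.5)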

On Apr 19, 2017, at 6:09 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
OK, so Double is the largest floating point type compatible with NSNumber? It stands to reason that any Swift JSON implementation that uses NSNumber for parsed floating point values would at most have that much range and precision, right?

For JSONSerialization (which I am most familiar with, and which ships with Foundation): it can emit both NSNumber and NSDecimalNumber. A rough approximation of the behavior: if it can store the value in an integer type, it stores it as such in an NSNumber (IIRC up to UINT64_MAX); if the value has a decimal point, it will attempt to parse it as a double; and if that is not enough storage, it will store the best possible value in an NSDecimalNumber.

So NSNumber itself (excluding subclasses) can only store up to a 64-bit value.
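
One can watch that dispatch happen by probing what JSONSerialization hands back; a small sketch (the concrete classes printed are private Foundation subclasses, and results may vary across Foundation versions and platforms):

    import Foundation

    let json = "[17, 3.125, 3.14159265358979323846264338327950288]"
    let data = json.data(using: .utf8)!

    if let values = (try? JSONSerialization.jsonObject(with: data)) as? [Any] {
        for value in values {
            // Expect an integer-backed NSNumber, a double-backed NSNumber,
            // and - where double precision is not enough - an NSDecimalNumber.
            print(type(of: value), "-", value)
        }
    }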


In my mind there are two considerations to balance here: the surprise to new developers learning their first programming language, versus consistency. In the end, even if I believe the behavior is sub-par, I would rather it be consistent. Primarily, consistency is easier to teach, even if it is derived from a standard developed with the legacy behavior of C at its heart.

Perhaps in the future we might want to allow conversions to and from NSNumber via the Integer and FloatingPoint protocols; however, I would guess that there needs to be a lot more thought and perhaps some modifications there to pull that off. Not to sound like a broken record, but again, that is out of scope for right now.
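
To make that idea concrete, such a protocol-based conversion might look roughly like the following (a purely hypothetical API, sketched with the standard library's generic initializers; not part of this proposal):

    import Foundation

    extension NSNumber {
        // Hypothetical generic accessor: one entry point for any
        // BinaryFloatingPoint type instead of per-type initializers.
        func exactValue<T: BinaryFloatingPoint>(as type: T.Type) -> T? {
            // Double is the widest floating point view NSNumber offers.
            return T(exactly: doubleValue)
        }
    }

    print(NSNumber(value: 0.5).exactValue(as: Float.self) as Any)       // Optional(0.5)
    print(NSNumber(value: Double.pi).exactValue(as: Float.self) as Any) // nil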

···


Right. I think we're in vigorous agreement.

···

On Apr 20, 2017, at 11:29 AM, Martin R <martinr448@gmail.com> wrote:

So is it correct to say that for all types T which NSNumber can hold (Double, Float, Int, UInt, ... )

    T(exactly: someNSNumber)
    
will succeed if and only if

    NSNumber(value: T(truncating: someNSNumber)) == someNSNumber

holds?
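
A quick spot check of that invariant for T = Float, assuming the proposal's exact bridging behavior (a sketch using the truncating: initializers this proposal adds, not the proposal's own test suite):

    import Foundation

    // Martin's invariant for one type/value pair:
    // T(exactly: n) succeeds iff NSNumber(value: T(truncating: n)) == n.
    func invariantHolds(_ n: NSNumber) -> Bool {
        let exactSucceeds = Float(exactly: n) != nil
        let roundTrips = NSNumber(value: Float(truncating: n)) == n
        return exactSucceeds == roundTrips
    }

    print(invariantHolds(NSNumber(value: 0.5)))        // true - both sides succeed
    print(invariantHolds(NSNumber(value: Double.pi)))  // true - both sides fail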

···

On Apr 20, 2017, Philippe Hausler <phausler@apple.com> wrote:

Provided that T is one of the supported types, yes, that does hold true (and it is covered by the unit tests I have on the pending commit).


Sent from my iPhone
