Rationalizing FloatingPoint conformance to Equatable

I was proposing something different where Float and Int are both Equatable and Substitutable, but neither Equatable nor Substitutable inherit from the other. The former is mathematical and the latter is not (which allows it to deal with NaN payloads, ±0, etc). Generic algorithms care mostly if not completely about mathematics, while generic containers care mostly if not completely about substitutability. They can live alongside each other and get along peacefully/sanely. And if people need to care about both, then at least they have an out.

The issue with this is similar to that in my reply earlier about bitwise comparison of floating-point values. Yes, you can propose some total ordering over floating-point values totally divorced from `==`, but I'm quite certain that people who invoke `sort()` on an array of floating-point values don't simply want *some* deterministic order, but rather an actual increasing order of numeric values.

Hi Xiaodi,

Of course, and the implementors of floating-point data types would be expected to do just that. It isn’t hard. For example, in C:

// Make floats sort reliably and sufficiently reasonably
#include <limits.h>
#include <stdbool.h>

bool sortableLessThan(float x, float y) {
  // Reinterpret the IEEE bit patterns as signed integers.
  union {
    int i;
    float f;
  } ux = { .f = x }, uy = { .f = y };
  int high_bit = ~INT_MAX;
  // Non-negative bit patterns already sort correctly as ints; negative
  // ones are flipped so that more-negative floats map to smaller ints
  // (negative NaNs end up first, positive NaNs last).
  int x2 = ux.i >= 0 ? ux.i : (~ux.i | high_bit);
  int y2 = uy.i >= 0 ? uy.i : (~uy.i | high_bit);
  return x2 < y2;
}

Which the compiler vectorizes down to the following branchless code (which is debatably “reasonable” for sorting):

c.o`sortableLessThan:
c.o[0x0] <+0>: vinsertps $0x10, %xmm1, %xmm0, %xmm0 ; xmm0 = xmm0[0],xmm1[0],xmm0[2,3]
c.o[0x6] <+6>: vmovlps %xmm0, -0x8(%rsp)
c.o[0xc] <+12>: vpmovsxdq -0x8(%rsp), %xmm0
c.o[0x13] <+19>: vpcmpeqd %xmm1, %xmm1, %xmm1
c.o[0x17] <+23>: vpcmpgtq %xmm1, %xmm0, %xmm1
c.o[0x1c] <+28>: vpor 0x2c(%rip), %xmm0, %xmm2
c.o[0x24] <+36>: vpxor 0x34(%rip), %xmm2, %xmm2 ; (uint128_t) 0x00007fffffff000000007fffffff0000
c.o[0x2c] <+44>: vblendvpd %xmm1, %xmm0, %xmm2, %xmm0
c.o[0x32] <+50>: vmovd %xmm0, %eax
c.o[0x36] <+54>: vpextrd $0x2, %xmm0, %ecx
c.o[0x3c] <+60>: cmpl %ecx, %eax
c.o[0x3e] <+62>: setl %al
c.o[0x41] <+65>: retq
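For reference, the same bit-pattern trick translates fairly directly to Swift using the standard bitPattern API. This translation is an illustration added here, not part of the original message:

// A sketch mirroring the C version above: map each float’s IEEE bit
// pattern to a signed integer whose ordering is a total order.
func sortableLessThan(_ x: Float, _ y: Float) -> Bool {
  func key(_ v: Float) -> Int32 {
    let bits = Int32(bitPattern: v.bitPattern)
    return bits >= 0 ? bits : (~bits | Int32.min)
  }
  return key(x) < key(y)
}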

Likewise, when someone asks if an array contains a floating-point value (say, `10.0 as Decimal`), they generally want to know if *any* representation of that value exists.

I’ve been thinking about the “contains” API argument a lot today, and I’m of the opinion now that we’re arguing about a problem that doesn’t exist.

The “contains” question doesn’t make sense because of rounding errors that are inherent in floating-point arithmetic. People would need to write “.contains(value: x, plusOrMinus: y)”, and there is no way that the generic collection types are going to vend such an API. If anything, the “contains” API needs to be limited to types that conform to (hand-waving) some kind of “PreciseValue” protocol (which floating point would not). For the exact same rounding-error reasons, I don’t think floating-point types should be hashable either.
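Purely as an illustration of the kind of API being hand-waved at here (the label plusOrMinus just follows the wording above; nothing like this exists in the standard library):

extension Sequence where Element: FloatingPoint {
  func contains(_ value: Element, plusOrMinus tolerance: Element) -> Bool {
    return contains { abs($0 - value) <= tolerance }
  }
}

// [0.1, 0.2, 0.3].contains(0.1 + 0.2)                     // false
// [0.1, 0.2, 0.3].contains(0.1 + 0.2, plusOrMinus: 1e-9)  // true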

The point is that _what kind of substitutability_ matters, and the kind that people will expect for floating-point values is the very mathematical substitutability that is supposed to be guaranteed by Equatable, which simply does not accommodate NaN.

Given that rounding errors make “contains” and “hashing” impractical to use with floating-point types, I don’t see any holes in my (hand-waving) “Substitutability” proposal. I could be missing something though. Can you think of anything?

Dave

···

On Oct 25, 2017, at 19:25, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

I was proposing something different where Float and Int are both Equatable

and Substitutable, but neither Equatable nor Substitutable inherit from the
other. The former is mathematical and the latter is not (which allows it to
deal with NaN payloads, ±0, etc). Generic algorithms care mostly if not
completely about mathematics, while generic containers care mostly if not
completely about substitutability. They can live alongside each other and
get along peacefully/sanely. And if people need to care about both, then at
least they have an out.

The issue with this is similar to that in my reply earlier about bitwise
comparison of floating-point values. Yes, you can propose some total
ordering over floating-point values totally divorced from `==`, but I'm
quite certain that people who invoke `sort()` on an array of floating-point
values don't simply want *some* deterministic order, but rather an actual
increasing order of numeric values.

Hi Xiaodi,

Of course, and the implementors of floating-point data types would be expected to do just that. It isn’t hard.

We already have a similar function in Swift (`isTotallyOrdered`) which
complies with IEEE requirements for total order, and we do not need to
invent another such algorithm.
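For what it’s worth, a sort over that IEEE total order can be written directly against the existing standard library API. The following usage sketch is added for illustration and follows the documented pattern for isTotallyOrdered(belowOrEqualTo:):

var values: [Double] = [2.0, .nan, -0.0, .infinity, 0.0, -1.5]
// Strict “totally below” predicate: x precedes y iff y is not ordered
// below or equal to x.
values.sort { !$1.isTotallyOrdered(belowOrEqualTo: $0) }
// values is now [-1.5, -0.0, 0.0, 2.0, inf, nan]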

As I have said, the same approach is not useful for what you call
"substitutability." What useful generic algorithms can be written that make
use of the bitwise _equality_ of two floating-point values?

Likewise, when someone asks if an array contains a floating-point value
(say, `10.0 as Decimal`), they generally want to know if *any*
representation of that value exists.

I’ve been thinking about the “contains” API argument a lot today, and I’m
of the opinion now that we’re arguing about a problem that doesn’t exist.

The “contains” question doesn’t make sense because of rounding errors that
are inherent in floating point arithmetic. People would need to write
“.contains(value: x, plusOrMinus: y)” and there is no way that the generic
collection types are going to vend such an API. If anything, the “contains”
API needs to be limited to types that conform to (hand waving) some kind of
“PreciseValue” protocol (which floating point would not). For the exact
same rounding-error reasons, I don’t think floating point types should be
hashable either.

This is looking at it backwards. Collection vends a "contains" method and
it must do something for floating-point values. It would be exceedingly
user hostile to have it compare values bitwise. If people want to account
for rounding errors, there's `contains(where:)`, but it is entirely
legitimate to ask whether a collection contains exactly zero, exactly
infinity, or any of a variety of exactly representable values.
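A concrete illustration of that distinction (an added example, not from the original message):

let samples: [Double] = [0.0, .infinity, 0.1 + 0.2]
samples.contains(0)                          // true: exact zero is found
samples.contains(.infinity)                  // true: exact infinity is found
samples.contains(0.3)                        // false: 0.1 + 0.2 != 0.3 exactly
samples.contains { abs($0 - 0.3) <= 1e-9 }   // true: tolerance, via contains(where:)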

The point is that _what kind of substitutability_ matters, and the kind

that people will expect for floating-point values is the very mathematical
substitutability that is supposed to be guaranteed by Equatable, which
simply does not accommodate NaN.

Given that rounding errors make “contains” and “hashing” impractical to
use with floating point types, I don’t see any holes with my (hand waving)
“Substitutability” proposal. I could be missing something though. Can you
think of anything?

By your argument, every generic use of `==` is impractical for
floating-point types. If you believe, then, that you don't need to consider
how these "impractical" APIs on Collection work with floating-point types
because they shouldn't be used in the first place, then why bother to make
any changes at all to the semantics of `Equatable`? The only changes we're
talking about here are to do with how these "impractical" APIs behave.

···

On Wed, Oct 25, 2017 at 9:05 PM, David Zarzycki <dave@znu.io> wrote:

On Oct 25, 2017, at 19:25, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

As someone mentioned earlier, we are trying to square a circle here. We can’t have everything at once… we will have to prioritize. I feel like the precedent in Swift is to prioritize safety/correctness with an option to ignore safety and regain speed.

I think the 3 point solution I proposed is a good compromise that follows that precedent. It does mean that there is, by default, a small performance hit for floats in generic contexts, but in exchange for that, we get increased correctness and safety. This is the exact same tradeoff that Swift makes for optionals! Any speed lost can be regained by providing a specific override for FloatingPoint that uses ‘&==’.

My point is not about performance. My point is that `Numeric.==` must
continue to have IEEE floating-point semantics for floating-point types and
integer semantics for integer types, or else existing uses of `Numeric.==`
will break without any way to fix them. The whole point of *having*
`Numeric` is to permit such generic algorithms to be written. But since
`Numeric.==` *is* `Equatable.==`, we have a large constraint on how the
semantics of `==` can be changed.

For example, if someone wants to write a generic function that works both
on Integer and FloatingPoint, then they would have to use the new protocol
which would force them to correctly handle cases involving NaN.

What "new protocol" are you referring to, and what do you mean about
"correctly handling cases involving NaN"? The existing API of `Numeric`
makes it possible to write generic algorithms that accommodate both integer
and floating-point types--yes, even if the value is NaN. If you change the
definition of `==` or `<`, currently correct generic algorithms that use
`Numeric` will start to _incorrectly_ handle NaN.

If speed is super important in that particular case, then they can write overrides for the FloatingPoint case which uses &==, and for Equatable which uses ==.

Because Float’s Equatable conformance is just being deprecated (with a warning/fixit), authors have at least a version to decide whether speed or correctness (or hopefully both) is most important to them.

My point here is not about generic algorithms written to take any Equatable
value; it's about protocol-based numeric algorithms, which we in SE-0104
devoted much time and energy to support in the first place.

Thanks,
Jon

P.S. We really should not be comparing against the speed of algorithms
which don’t correctly handle NaN. Let’s compare Apples to Apples.

Again, what do you mean about "correctly handle NaN"? Most algorithms today
*do* correctly handle NaN, in that they are concordant with the
IEEE-specified behavior. Set *not* deduplicating NaN *is* correct handling
of NaN, for instance, for as long as NaN != NaN.
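Concretely, today’s behavior looks like this (an added snippet for illustration):

let values = [Double.nan, 1.0]
values.contains(.nan)   // false: NaN != NaN, per IEEE 754

var set: Set<Double> = []
set.insert(.nan)
set.insert(.nan)
set.count               // 2: Set does not deduplicate NaN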

···

On Wed, Oct 25, 2017 at 11:46 PM, Jonathan Hull <jhull@gbis.com> wrote:

On Oct 25, 2017, at 6:36 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Wed, Oct 25, 2017 at 8:26 PM, Jonathan Hull <jhull@gbis.com> wrote:

> On Oct 25, 2017, at 9:01 AM, David Sweeris via swift-dev <swift-dev@swift.org> wrote:
>
> That said, I fully acknowledge that this is all above my pay grade (also I hadn't realized that the issue was as settled as it apparently is). If splitting the protocols is a no-go from the get go, I'll go back to trying to figure out a better way to handle it without doing that.

I don’t think it is settled. The issue that Xiaodi mentioned was a
PartiallyEq protocol which still had a signature of (T,T)->Bool. People
just used that protocol instead of Equatable without taking into account
the difference in behavior. The signature of (T,T)->Bool? changes things
because people are forced to deal with the optional.

Currently, I think we should do 3 things:

1) Create a new protocol with a partial equivalence relation with a signature of (T, T) -> Bool? and automatically conform Equatable things to it
2) Deprecate Float, etc.’s Equatable conformance with a warning that it will eventually be removed (and conform Float, etc. to the partial equivalence protocol)
3) Provide an ‘&==’ relation on Float, etc. (without a protocol) with the native Float IEEE comparison

I think this provides several benefits. #3 allows pure speed when needed, but not in a generic context (and is appropriately scary to cause some thought). #1 forces correct handling in generic contexts. #2 gives people time to make the adjustment, but also eventually requires them to switch to using #1 or #3.

I think it will cause A LOT of currently incorrect code to be fixed.
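A rough sketch of what those three points might look like in code. All of the names here are invented for illustration, and this is not a worked-out design:

infix operator ==? : ComparisonPrecedence
infix operator &== : ComparisonPrecedence

// Point 1: a partial equivalence relation that can decline to answer.
protocol PartiallyEquatable {
  static func ==? (lhs: Self, rhs: Self) -> Bool?
}

// Equatable types get an implementation for free (Swift has no blanket
// conformances, so each type still has to declare the conformance).
extension PartiallyEquatable where Self: Equatable {
  static func ==? (lhs: Self, rhs: Self) -> Bool? { return lhs == rhs }
}

// Point 3: a raw IEEE-level comparison, not tied to any protocol
// requirement. Today this is simply what == already does for floats.
extension FloatingPoint {
  static func &== (lhs: Self, rhs: Self) -> Bool { return lhs == rhs }
}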

One issue which occurred to me only recently, which I hadn't considered,
renders my `&==` idea and all similar schemes untenable:

Useful algorithms can and are written which operate on both floating-point
and integer numeric types. In fact, the whole point of laboriously
designing `Numeric` as part of SE-0104 was to make it possible to do so. If
IEEE comparison is relegated to `FloatingPoint` and the only operator
remaining on `Numeric` is `==`, then not only will there be a mandatory
performance hit, but currently correct algorithms can be broken with
absolutely no way to express a fix.

As someone mentioned earlier, we are trying to square a circle here. We can’t have everything at once… we will have to prioritize. I feel like the precedent in Swift is to prioritize safety/correctness with an option to ignore safety and regain speed.

I think the 3 point solution I proposed is a good compromise that follows that precedent. It does mean that there is, by default, a small performance hit for floats in generic contexts, but in exchange for that, we get increased correctness and safety. This is the exact same tradeoff that Swift makes for optionals! Any speed lost can be regained by providing a specific override for FloatingPoint that uses ‘&==‘.

My point is not about performance. My point is that `Numeric.==` must continue to have IEEE floating-point semantics for floating-point types and integer semantics for integer types, or else existing uses of `Numeric.==` will break without any way to fix them. The whole point of *having* `Numeric` is to permit such generic algorithms to be written. But since `Numeric.==` *is* `Equatable.==`, we have a large constraint on how the semantics of `==` can be changed.

It would also conform to the new protocol and have its Equatable conformance deprecated. Once we have conditional conformances, we can add Equatable back conditionally. Also, while we are waiting for that, Numeric can provide overrides of important methods when the conforming type is Equatable or FloatingPoint.

For example, if someone wants to write a generic function that works both on Integer and FloatingPoint, then they would have to use the new protocol which would force them to correctly handle cases involving NaN.

What "new protocol" are you referring to, and what do you mean about "correctly handling cases involving NaN"? The existing API of `Numeric` makes it possible to write generic algorithms that accommodate both integer and floating-point types--yes, even if the value is NaN. If you change the definition of `==` or `<`, currently correct generic algorithms that use `Numeric` will start to _incorrectly_ handle NaN.

#1 from my previous email (shown again here):

Currently, I think we should do 3 things:

1) Create a new protocol with a partial equivalence relation with signature of (T, T)->Bool? and automatically conform Equatable things to it
2) Deprecate Float, etc.’s Equatable conformance with a warning that it will eventually be removed (and conform Float, etc. to the partial equivalence protocol)
3) Provide an '&==‘ relation on Float, etc… (without a protocol) with the native Float IEEE comparison

In this case, #2 would also apply to Numeric. You can think of the new protocol as a failable version of Equatable, so in any case where it can’t meet Equatable’s rules, it returns nil.
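Continuing the earlier sketch (again, names invented for illustration), the “failable Equatable” reading might look like this for Float:

// Hypothetical: the partial-equivalence conformance answers nil whenever
// it cannot uphold Equatable’s rules, i.e. when NaN is involved.
extension Float: PartiallyEquatable {
  static func ==? (lhs: Float, rhs: Float) -> Bool? {
    if lhs.isNaN || rhs.isNaN { return nil }
    return lhs == rhs
  }
}

// (Float(1) ==? Float(1)) == true   // the relation holds
// (Float.nan ==? Float(1))          // nil: the relation cannot hold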

If speed is super important in that particular case, then they can write overrides for the FloatingPoint case which uses &==, and for Equatable which uses ==.

Because Float’s Equatable conformance is just being deprecated (with a warning/fixit), authors have at least a version to decide whether speed or correctness (or hopefully both) is most important to them.

My point here is not about generic algorithms written to take any Equatable value; it's about protocol-based numeric algorithms, which we in SE-0104 devoted much time and energy to support in the first place.

See above.

Thanks,
Jon

···

On Thu, Oct 26, 2017 at 07:52 Jonathan Hull <jhull@gbis.com> wrote:

Again, Numeric makes possible the generic use of == with floating-point semantics for floating-point values and integer semantics for integer values; this design would not.

Correct. I view this as a good thing, because another way of saying that is: “it makes possible cases where == sometimes conforms to the rules of Equatable and sometimes doesn’t." Under the solution I am advocating, Numeric would instead allow generic use of '==?’.

I suppose an argument could be made that we should extend ‘&==‘ to Numeric from FloatingPoint, but then we would end up with the Rust situation you were talking about earlier…

Thanks,
Jon

···

On Oct 26, 2017, at 8:19 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

Again, Numeric makes possible the generic use of == with floating-point semantics for floating-point values and integer semantics for integer values; this design would not.

Nope. They would continue to work as they always have, but would have a deprecation warning on them. The authors of those algorithms would have a full deprecation cycle to update the algorithms. Fixits would be provided to make conversion easier.

···

On Oct 26, 2017, at 9:34 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

This would break any `Numeric` algorithms that currently use `==` correctly. There are useful guarantees that are common to integer `==` and IEEE floating-point `==`; namely, they each model equivalence of their respective types at roughly what IEEE calls "level 1" (as numbers, rather than as their representation or encoding). Breaking that utterly eviscerates `Numeric`.

It would. Using ‘==?’, you would just be forced to deal with the possibility of the equality relation not holding. '(a ==? b) == true' would mimic the current behavior.

···

On Oct 26, 2017, at 9:40 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

After the deprecation cycle, Numeric would no longer guarantee a common "level 1" comparison for conforming types.

Sure there would... Semantically, NaN != NaN regardless of the underlying type, it's just that native integer types don't have a way to represent NaN so it can't come up with them.

(Unless I'm misunderstanding something again)

- Dave Sweeris

···

On Oct 26, 2017, at 9:40 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

After the deprecation cycle, Numeric would no longer guarantee a common "level 1" comparison for conforming types.

Now you are just being rude. We all want Swift to be awesome… let’s try to keep things civil.

You said it was impossible, so I gave you a very quick example showing that the current behavior was still possible. I wasn’t recommending that everyone should only ever use that example for all things.

For FloatingPoint, ‘(a &== b) == true’ would mimic the current behavior (bugs and all). It may not hold for all types. The whole point is that you have to put thought into how you want to deal with the optional case where the relation’s guarantees have failed.

If you need full performance, then you would have separate overrides on Numeric for members which conform to FloatingPoint (where you could use &==) and Equatable (where you could use ==). As you get more generic, you lose opportunities for optimization. That is just the nature of generic code. The nice thing about Swift is that you have an opportunity to specialize if you want to optimize more. Once things like conditional conformances come online, all of this will be nicer, of course.
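A sketch of the “separate overrides” idea using today’s protocols (an added illustration; in the proposed world the FloatingPoint overload is where a raw &== would go). Because FloatingPoint refines Equatable, the more constrained overload is chosen for Double:

func firstMatch<T: Equatable>(of target: T, in values: [T]) -> T? {
  return values.first { $0 == target }
}

func firstMatch<T: FloatingPoint>(of target: T, in values: [T]) -> T? {
  // Specialized path; today it still uses IEEE ==.
  return values.first { $0 == target }
}

firstMatch(of: 2, in: [1, 2, 3])          // uses the Equatable overload
firstMatch(of: 2.0, in: [1.0, 2.0, 3.0])  // uses the FloatingPoint overload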

···

On Oct 26, 2017, at 11:01 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

What are the semantic guarantees required of `==?` such that this would be guaranteed to be the current behavior? How would this be implementable without being so costly that, in practice, no generic numeric algorithms would ever use such a facility?

Moreover, if `(a ==? b) == true` guarantees the current behavior for all types, and all currently Equatable types will conform to this protocol, haven't you just reproduced the problem seen in Rust's `PartialEq`, only now with clumsier syntax and poorer performance?

Is it the _purpose_ of this design to make it clumsier and less performant so people don't use it? If so, to the extent that it is an effective deterrent, haven't you created a deterrent to the use of Numeric to an exactly equal extent?

This would break any `Numeric` algorithms that currently use `==`
correctly. There are useful guarantees that are common to integer `==` and
IEEE floating-point `==`; namely, they each model equivalence of their
respective types at roughly what IEEE calls "level 1" (as numbers, rather
than as their representation or encoding). Breaking that utterly
eviscerates `Numeric`.

···

On Thu, Oct 26, 2017 at 10:57 AM, Jonathan Hull <jhull@gbis.com> wrote:

On Oct 26, 2017, at 8:19 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Thu, Oct 26, 2017 at 07:52 Jonathan Hull <jhull@gbis.com> wrote:

On Oct 25, 2017, at 11:22 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Wed, Oct 25, 2017 at 11:46 PM, Jonathan Hull <jhull@gbis.com> wrote:

As someone mentioned earlier, we are trying to square a circle here. We
can’t have everything at once… we will have to prioritize. I feel like the
precedent in Swift is to prioritize safety/correctness with an option
ignore safety and regain speed.

I think the 3 point solution I proposed is a good compromise that
follows that precedent. It does mean that there is, by default, a small
performance hit for floats in generic contexts, but in exchange for that,
we get increased correctness and safety. This is the exact same tradeoff
that Swift makes for optionals! Any speed lost can be regained by
providing a specific override for FloatingPoint that uses ‘&==‘.

My point is not about performance. My point is that `Numeric.==` must
continue to have IEEE floating-point semantics for floating-point types and
integer semantics for integer types, or else existing uses of `Numeric.==`
will break without any way to fix them. The whole point of *having*
`Numeric` is to permit such generic algorithms to be written. But since
`Numeric.==` *is* `Equatable.==`, we have a large constraint on how the
semantics of `==` can be changed.

It would also conform to the new protocol and have it’s Equatable
conformance depreciated. Once we have conditional conformances, we can add
Equatable back conditionally. Also, while we are waiting for that, Numeric
can provide overrides of important methods when the conforming type is
Equatable or FloatingPoint.

For example, if someone wants to write a generic function that works both

on Integer and FloatingPoint, then they would have to use the new protocol
which would force them to correctly handle cases involving NaN.

What "new protocol" are you referring to, and what do you mean about
"correctly handling cases involving NaN"? The existing API of `Numeric`
makes it possible to write generic algorithms that accommodate both integer
and floating-point types--yes, even if the value is NaN. If you change the
definition of `==` or `<`, currently correct generic algorithms that use
`Numeric` will start to _incorrectly_ handle NaN.

#1 from my previous email (shown again here):

Currently, I think we should do 3 things:

1) Create a new protocol with a partial equivalence relation with
signature of (T, T)->Bool? and automatically conform Equatable things to it
2) Depreciate Float, etc’s… Equatable conformance with a warning that
it will eventually be removed (and conform Float, etc… to the partial
equivalence protocol)
3) Provide an '&==‘ relation on Float, etc… (without a protocol) with
the native Float IEEE comparison

In this case, #2 would also apply to Numeric. You can think of the new
protocol as a failable version of Equatable, so in any case where it can’t
meet equatable’s rules, it returns nil.

Again, Numeric makes possible the generic use of == with floating-point
semantics for floating-point values and integer semantics for integer
values; this design would not.

Correct. I view this as a good thing, because another way of saying that
is: “it makes possible cases where == sometimes conforms to the rules of
Equatable and sometimes doesn’t." Under the solution I am advocating,
Numeric would instead allow generic use of '==?’.

I suppose an argument could be made that we should extend ‘&==‘ to Numeric
from FloatingPoint, but then we would end up with the Rust situation you
were talking about earlier…

After the depreciation cycle, Numeric would no longer guarantee a common
"level 1" comparison for conforming types.

···

On Thu, Oct 26, 2017 at 11:38 AM, Jonathan Hull <jhull@gbis.com> wrote:

Correct. I view this as a good thing, because another way of saying that
is: “it makes possible cases where == sometimes conforms to the rules of
Equatable and sometimes doesn’t." Under the solution I am advocating,
Numeric would instead allow generic use of '==?’.

I suppose an argument could be made that we should extend ‘&==‘ to
Numeric from FloatingPoint, but then we would end up with the Rust
situation you were talking about earlier…

This would break any `Numeric` algorithms that currently use `==`
correctly. There are useful guarantees that are common to integer `==` and
IEEE floating-point `==`; namely, they each model equivalence of their
respective types at roughly what IEEE calls "level 1" (as numbers, rather
than as their representation or encoding). Breaking that utterly
eviscerates `Numeric`.

Nope. They would continue to work as they always have, but would have a
deprecation warning on them. The authors of those algorithms would have a
full deprecation cycle to update the algorithms. Fixits would be provided
to make conversion easier.

What are the semantic guarantees required of `==?` such that this would be
guaranteed to be the current behavior? How would this be implementable
without being so costly that, in practice, no generic numeric algorithms
would ever use such a facility?

Moreover, if `(a ==? b) == true` guarantees the current behavior for all
types, and all currently Equatable types will conform to this protocol,
haven't you just reproduced the problem seen in Rust's `PartialEq`, only
now with clumsier syntax and poorer performance?

Is it the _purpose_ of this design to make it clumsier and less performant
so people don't use it? If so, to the extent that it is an effective
deterrent, haven't you created a deterrent to the use of Numeric to an
exactly equal extent?

···

On Thu, Oct 26, 2017 at 11:50 AM, Jonathan Hull <jhull@gbis.com> wrote:

After the deprecation cycle, Numeric would no longer guarantee a common
"level 1" comparison for conforming types.

It would, using '==?'; you would just be forced to deal with the possibility
of the equality relation not holding. '(a ==? b) == true' would mimic the
current behavior.

Now you are just being rude. We all want Swift to be awesome… let’s try to
keep things civil.

Sorry if my reply came across that way! That wasn't at all the intention. I
really mean to ask you those questions and am interested in the answers:

Unless I misunderstand, you're arguing that your proposal is superior to
Rust's design because of a new operator that returns `Bool?` instead of
`Bool`; if so, how is it that you haven't reproduced Rust's design problem,
only with the additional syntax involved in unwrapping the result?

And if, as I understand, your argument is that your design is superior to
Rust's *because* it requires unwrapping, then isn't the extent to which
people will avoid using the protocol unintentionally also equally and
unavoidably the same extent to which it makes Numeric more cumbersome?

You said it was impossible, so I gave you a very quick example showing that
the current behavior was still possible. I wasn’t recommending that
everyone should only ever use that example for all things.

For FloatingPoint, ‘(a &== b) == true’ would mimic the current behavior
(bugs and all). It may not hold for all types.

No, the question was how it would be possible to have these guarantees hold
for `Numeric`, not merely for `FloatingPoint`, as the purpose is to use
`Numeric` for generic algorithms. This requires additional semantic
guarantees on what you propose to call `&==`.

The whole point is that you have to put thought into how you want to deal
with the optional case where the relation’s guarantees have failed.

If you need full performance, then you would have separate overrides on
Numeric for members which conform to FloatingPoint (where you could use
&==) and Equatable (where you could use ==). As you get more generic, you
lose opportunities for optimization. That is just the nature of generic
code. The nice thing about Swift is that you have an opportunity to
specialize if you want to optimize more. Once things like conditional
conformances come online, all of this will be nicer, of course.
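A sketch of that layering (the names are made up, and `maybeEquals` stands in for the proposed '==?'; here it simply wraps today's `==`, so it never actually produces nil):

func maybeEquals<T: Numeric>(_ a: T, _ b: T) -> Bool? {
    return a == b   // stand-in with the proposed shape; a real '==?' would return nil when the relation fails
}

// Fully generic version: goes through the optional.
func occurrences<T: Numeric>(of value: T, in values: [T]) -> Int {
    return values.filter { maybeEquals($0, value) == true }.count
}

// Specialized override: uses the concrete IEEE comparison directly (what the
// thread spells '&==').
func occurrences<T: BinaryFloatingPoint>(of value: T, in values: [T]) -> Int {
    return values.filter { $0 == value }.count
}

let ints: [Int] = [1, 2, 2]
print(occurrences(of: 2, in: ints))                 // 2: only the Numeric version applies
print(occurrences(of: 2.0, in: [1.0, 2.0, .nan]))   // 1: the floating-point override is chosen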

This is a non-starter then. Protocols must enable useful generic code. What
you're basically saying is that you do not intend for it to be possible to
use methods on `Numeric` to ask about level 1 equivalence in a way that
would not be prohibitively expensive. This, again, eviscerates the purpose
of `Numeric`.

The point I'm making here, again, is that there are legitimate uses for
`==` guaranteeing partial equivalence in the generic context. The
approximation being put forward over and over is that generic code always
requires full equivalence and concrete floating-point code always requires
IEEE partial equivalence. That is _not true_. Some generic code (for
instance, that which uses `Numeric`) relies on partial equivalence
semantics and some floating-point code can nonetheless benefit from a
notion of full equivalence.

Both concepts must be exposed in a protocol-based manner to accommodate all
use cases. It will not do to say that exposing both concepts will confuse
the user, because the fact remains that both concepts are already and
unavoidably exposed, but sometimes without a way to express the distinction
in code or any documentation about it. Disappearing the notion of partial
equivalence from protocols removes legitimate use cases.

···

On Thu, Oct 26, 2017 at 1:30 PM, Jonathan Hull <jhull@gbis.com> wrote:

Now you are just being rude. We all want Swift to be awesome… let’s try to keep things civil.

Sorry if my reply came across that way! That wasn't at all the intention. I really mean to ask you those questions and am interested in the answers:

Thank you for saying that. I haven’t been sleeping well, so I am probably a bit grumpy.

Unless I misunderstand, you're arguing that your proposal is superior to Rust's design because of a new operator that returns `Bool?` instead of `Bool`; if so, how is it that you haven't reproduced Rust's design problem, only with the additional syntax involved in unwrapping the result?

Two things:

1) PartialEq was available in generic contexts and it provided the IEEE comparison. Our IEEE comparison (which I am calling ‘&==‘ for now) is not available in generic contexts beyond FloatingPoint. If we were to have this in a generic context beyond FloatingPoint, then we would end up with the same issue that Rust had.

2) It is actually semantically different. This MostlyEquatable protocol returns nil when the guarantees of the relation would be violated… and the author has to decide what to do with that. Depending on the use case, the best course of action may be to: treat it as false, trap, throw, or branch. Swift coders are used to this type of decision when encountering optionals.

And if, as I understand, your argument is that your design is superior to Rust's *because* it requires unwrapping, then isn't the extent to which people will avoid using the protocol unintentionally also equally and unavoidably the same extent to which it makes Numeric more cumbersome?

It isn’t that unwrapping is meant to be a deterrent, it is that there are cases where the Equivalence relation may fail to hold, and the programmer needs to deal with those (when working in a generic context). Failure to do so leads to subtle bugs.

Numeric has to use ‘==?’ because there are cases where the relation will fail. I’d love for it to conform to Equatable, but it really doesn’t if you look at it honestly, because it can run into cases where reflexivity doesn’t hold, and we have to deal with those cases.

As I said above, the typical ways to handle that nil would be: treat it as false, trap, throw, or branch. The current behavior is equivalent to “treat it as false”, and yes, that is the right thing for some algorithms (and you can still do that). But there are also lots of algorithms that need to trap or throw on NaN, or branch to handle it differently. The current behavior also silently fails, which is why the bugs are so hard to track down.
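Spelled out as a sketch (with a plain function standing in for the proposed '==?'), those four options look something like this:

func maybeEqual(_ a: Double, _ b: Double) -> Bool? {
    return (a.isNaN || b.isNaN) ? nil : a == b   // stand-in for 'a ==? b'
}

let result = maybeEqual(.nan, 1.0)               // nil: the relation did not hold

let treatedAsFalse = result ?? false             // 1. treat it as false (today's silent behavior)
// let mustHold = result!                        // 2. trap (would trap here, since one side is NaN)

struct UndefinedComparison: Error {}             // 3. throw
func requireDefined(_ value: Bool?) throws -> Bool {
    guard let value = value else { throw UndefinedComparison() }
    return value
}

switch result {                                  // 4. branch
case .some(let isEqual): print(isEqual ? "equal" : "not equal")
case .none:              print("comparison undefined; a NaN was involved")
}
print(treatedAsFalse)                            // false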

Premature optimization is the root of all evil.

You said it was impossible, so I gave you a very quick example showing that the current behavior was still possible. I wasn’t recommending that everyone should only ever use that example for all things.

For FloatingPoint, ‘(a &== b) == true’ would mimic the current behavior (bugs and all). It may not hold for all types.

Oops, that should be ‘==?’ (which returns an optional). I am getting tired, it is time for bed.

No, the question was how it would be possible to have these guarantees hold for `Numeric`, not merely for `FloatingPoint`, as the purpose is to use `Numeric` for generic algorithms. This requires additional semantic guarantees on what you propose to call `&==`.

Well, they hold for FloatingPoint and anything which is actually Equatable. Those are the only things I can think of that conform to Numeric right now, but I can’t guarantee that someone won’t later add a type to Numeric which also fails to actually conform to equatable in some different way.

To be fair, anything that breaks this would also break current algorithms on Numeric anyway.

The whole point is that you have to put thought into how you want to deal with the optional case where the relation’s guarantees have failed.

If you need full performance, then you would have separate overrides on Numeric for members which conform to FloatingPoint (where you could use &==) and Equatable (where you could use ==). As you get more generic, you lose opportunities for optimization. That is just the nature of generic code. The nice thing about Swift is that you have an opportunity to specialize if you want to optimize more. Once things like conditional conformances come online, all of this will be nicer, of course.

This is a non-starter then. Protocols must enable useful generic code. What you're basically saying is that you do not intend for it to be possible to use methods on `Numeric` to ask about level 1 equivalence in a way that would not be prohibitively expensive. This, again, eviscerates the purpose of `Numeric`.

I don’t consider it “prohibitively expensive”. I mean, dictionaries return an optional. Lots of things return optionals. I have to deal with them all over the place in Swift code.

I think the tradeoff of quicker-to-write code vs more performant code is completely reasonable. Ideally everything would happen instantly, but we really can’t get away from making *some* tradeoffs here.

If I just need something that works, I can use ==? and handle the nil cases. If unwrapping an optional is untenable from a speed perspective in a particular case for some reason, then I think it is completely reasonable to have the author additionally write optimized versions specializing based on additional information which is known (e.g. FloatingPoint or Equatable).

Note that I am mostly talking about library code here. Once you build up a library of functions on Numeric that handle this correctly, you can use those functions as building blocks, and you aren’t even worrying about == for the most part. For example, if we build a version of index(of:) on collection which works for our MostlyEquatable protocol, then we can pass Numeric to it generically. Whether they decided it was important enough to put in an optimization for FloatingPoint or not, it doesn’t affect the way we call it. It could even have only a generic version for years, and then gain an optimization later if it became important.
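A sketch of such a building block (restating the hypothetical protocol so the snippet stands alone; the method name is made up, since the real index(of:) requires Equatable):

infix operator ==? : ComparisonPrecedence

protocol MostlyEquatable {
    static func ==? (lhs: Self, rhs: Self) -> Bool?
}

extension Collection where Element: MostlyEquatable {
    /// Like index(of:), but only requires the partial relation.
    func firstIndexMatching(_ target: Element) -> Index? {
        return indices.first { (self[$0] ==? target) == true }
    }
}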

The point I'm making here, again, is that there are legitimate uses for `==` guaranteeing partial equivalence in the generic context. The approximation being put forward over and over is that generic code always requires full equivalence and concrete floating-point code always requires IEEE partial equivalence. That is _not true_. Some generic code (for instance, that which uses `Numeric`) relies on partial equivalence semantics and some floating-point code can nonetheless benefit from a notion of full equivalence.

I mean, it would be nice if Float could truly conform to Equatable, but it would also be nice if I didn’t have to check for null pointers. It would certainly be faster if instead of unwrapping optionals, I could just use pointers directly. It would even work most of the time… because I would be careful to remember to add checks where they were really important… until I forget, and then there is a bug! This kind of premature optimization has cost our economy literally Trillions of dollars.

We have optionals for exactly this reason in Swift. It forces us to take those things which will "work fine most of the time”, and consider the case where it won’t. I know it is slightly faster not to consider that case, but that is exactly why this is a notorious source of bugs.

Both concepts must be exposed in a protocol-based manner to accommodate all use cases. It will not do to say that exposing both concepts will confuse the user, because the fact remains that both concepts are already and unavoidably exposed, but sometimes without a way to express the distinction in code or any documentation about it. Disappearing the notion of partial equivalence from protocols removes legitimate use cases.

On the contrary, I am saying we should make the difference explicit.

···

On Oct 26, 2017, at 11:47 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
No, the question was how it would be possible to have these guarantees hold for `Numeric`, not merely for `FloatingPoint`, as the purpose is to use `Numeric` for generic algorithms. This requires additional semantic guarantees on what you propose to call `&==`.

Would something like this work?

Numeric.== -> Bool
traps on NaN etc.

Numeric.==? -> Bool?
returns nil on NaN etc. You likely don't want this unless you know something about floating-point.

Numeric.&== -> Bool
is IEEE equality. You should not use this unless you are a floating-point expert.

The experts can get high performance or sophisticated numeric behavior. The rest of us who naïvely use == get a relatively foolproof floating-point model. (There is no difference among these three operators for fixed-size integers, of course.)

This is analogous to what Swift does with integer overflow. I would further argue the other Numeric operators like + should be extended to the same triple of trap or optional or just-do-it. We already have two of those three operators for integer addition after all.

Numeric.+ -> T
traps on FP NaN and integer overflow

Numeric.+? -> T?
returns nil on FP NaN and integer overflow

Numeric.&+ -> T
performs FP IEEE addition and integer wraparound
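For illustration, a sketch of that triple as it might look for addition. The operator names come from the list above; the Double implementation of '+?' is a guess at the intended semantics (and assumes that spelling can be declared), and the integer lines show the two spellings that already exist today:

infix operator +? : AdditionPrecedence

extension Double {
    static func +? (lhs: Double, rhs: Double) -> Double? {
        let sum = lhs + rhs
        return sum.isNaN ? nil : sum             // nil instead of a quiet NaN
    }
}

let noAnswer = Double.infinity +? -Double.infinity   // nil, rather than NaN

// Integers already have two of the three:
let x: Int8 = 127
// let trapped = x + 1                           // traps on overflow
let wrapped = x &+ 1                             // -128: wraps around
let (partial, didOverflow) = x.addingReportingOverflow(1)  // closest existing analogue of '+?'
print(noAnswer as Any, wrapped, partial, didOverflow)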

The whole point is that you have to put thought into how you want to deal with the optional case where the relation’s guarantees have failed.

If you need full performance, then you would have separate overrides on Numeric for members which conform to FloatingPoint (where you could use &==) and Equatable (where you could use ==). As you get more generic, you lose opportunities for optimization. That is just the nature of generic code. The nice thing about Swift is that you have an opportunity to specialize if you want to optimize more. Once things like conditional conformances come online, all of this will be nicer, of course.

This is a non-starter then. Protocols must enable useful generic code. What you're basically saying is that you do not intend for it to be possible to use methods on `Numeric` to ask about level 1 equivalence in a way that would not be prohibitively expensive. This, again, eviscerates the purpose of `Numeric`.

I'm not sure that there is a performance problem. If your compiled code is actually making calls to generic comparison functions then you have already lost the high performance war. Any place where the compiler knows enough to use a specialized comparison function should also be a place where the compiler can optimize away unnecessary floating-point checks.

Let me make an analogous objection to the current Numerics design. How do you get the highest performance addition operator in a generic context? Currently you can't, because Numeric.+ checks for integer overflow.
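In code, the objection looks something like this (my illustration): the unchecked operator simply is not visible under a bare Numeric constraint, so reaching it means constraining further, i.e. specializing.

func total<T: Numeric>(_ values: [T]) -> T {
    return values.reduce(0, +)        // the checked '+': traps on integer overflow
}

func wrappingTotal<T: FixedWidthInteger>(_ values: [T]) -> T {
    return values.reduce(0, &+)       // '&+' only exists once we constrain to FixedWidthInteger
}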

The point I'm making here, again, is that there are legitimate uses for `==` guaranteeing partial equivalence in the generic context. The approximation being put forward over and over is that generic code always requires full equivalence and concrete floating-point code always requires IEEE partial equivalence. That is _not true_. Some generic code (for instance, that which uses `Numeric`) relies on partial equivalence semantics and some floating-point code can nonetheless benefit from a notion of full equivalence.

I agree that providing a way to get IEEE equality in a generic context is useful. I am not convinced that Numeric.== -> Bool is the right place to provide it.

···

On Oct 26, 2017, at 11:47 AM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:
On Thu, Oct 26, 2017 at 1:30 PM, Jonathan Hull <jhull@gbis.com> wrote:

--
Greg Parker    gparker@apple.com    Runtime Wrangler

Works for me (although I'd prefer it if we could stick to one side for the "modifier" symbols -- either "&+" and "?+", or "+&" and "+?", and likewise for "==" and its variants)

Should `Numeric` have extensions that define the variants in terms of `==`, so that authors of custom types don't have to think about it if they don't want to?

- Dave Sweeris

···

Works for me (although I'd prefer it if we could stick to one side for the "modifier" symbols -- either "&+" and "?+", or "+&" and "+?", and likewise for "==" and its variants)

At a glance this looks like a reasonable solution to me as well.

Should `Numeric` have extensions that define the variants in terms of `==`, so that authors of custom types don't have to think about it if they don't want to?

Probably not. In this design `==` is allowed to have a precondition while the variants are not.
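A sketch of why such a blanket default would be a problem under this design (reusing a hypothetical '==?'): if '==' is allowed to have a precondition, a default written in terms of it inherits that precondition.

infix operator ==? : ComparisonPrecedence

extension Numeric {
    static func ==? (lhs: Self, rhs: Self) -> Bool? {
        // With today's '==' this merely returns false for NaN operands; under the
        // proposed trapping '==' it would trap first, so the nil this variant
        // exists to provide could never be produced.
        return lhs == rhs
    }
}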

···

Two things:

1) PartialEq was available in generic contexts and it provided the IEEE
comparison. Our IEEE comparison (which I am calling ‘&==‘ for now) is not
available in generic contexts beyond FloatingPoint. If we were to have this
in a generic context beyond FloatingPoint, then we would end up with the
same issue that Rust had.

What I'm saying is that we *must* have this available in generic contexts
beyond FloatingPoint, such as on Numeric, for reasons I've described and
which I'll elaborate on shortly.

2) It is actually semantically different. This MostlyEquatable protocol
returns nil when the guarantees of the relation would be violated… and the
author has to decide what to do with that. Depending on the use case, the
best course of action may be to: treat it as false, trap, throw, or
branch. Swift coders are used to this type of decision when encountering
optionals.

And if, as I understand, your argument is that your design is superior to
Rust's *because* it requires unwrapping, then isn't the extent to which
people will avoid using the protocol unintentionally also equally and
unavoidably the same extent to which it makes Numeric more cumbersome?

It isn’t that unwrapping is meant to be a deterrent, it is that there are
cases where the Equivalence relation may fail to hold, and the programmer
needs to deal with those (when working in a generic context). Failure to
do so leads to subtle bugs.

Numeric has to use ‘==?’ because there are cases where the relation will
fail. I’d love for it to conform to Equatable, but it really doesn’t if you
look at it honestly, because it can run into cases where reflexivity
doesn’t hold, and we have to deal with those cases.

Well, it's another thing entirely if you want Numeric not to be Equatable
(or, by that token, Comparable). Yes, it'd be correct, but that'd be a
surprising and user-hostile design.

As I said above, the typical ways to handle that nil would be: treat it as
false, trap, throw, or branch. The current behavior is equivalent to
"treat it as false”, and yes, that is the right thing for some algorithms
(and you can still do that). But there are also lots of algorithms that
need to trap or throw on NaN, or branch to handle it differently. The
current behavior also silently fails, which is why the bugs are so hard to
track down.

That is inherent to the IEEE definition of "quiet NaN": the operations
specified in that standard are required to silently accept NaN.
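That is, quiet NaN flows through arithmetic and comparisons without any signal to the program:

let nan = Double.nan
print(nan + 1)                      // nan: arithmetic propagates it silently
print(nan == nan)                   // false
print(nan < 1, nan > 1, nan == 1)   // false false false: comparisons accept it silently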

No, the question was how it would be possible to have these guarantees
hold for `Numeric`, not merely for `FloatingPoint`, as the purpose is to
use `Numeric` for generic algorithms. This requires additional semantic
guarantees on what you propose to call `&==`.

Well, they hold for FloatingPoint and anything which is actually
Equatable. Those are the only things I can think of that conform to Numeric
right now, but I can’t guarantee that someone won’t later add a type to
Numeric which also fails to actually conform to equatable in some different
way.

To be fair, anything that breaks this would also break current algorithms
on Numeric anyway.

This doesn't answer my question. If `(a ==? b) == true` is the only way to
spell what's currently spelled `==` in a generic context, then `Numeric`
must make such semantic guarantees as are necessary to guarantee that this
spelling behaves in that way for all conforming types, or else it would not
be possible to write generic numeric algorithms that operate on any
`Numeric`-conforming type. What would those guarantees have to be?

I don’t consider it “prohibitively expensive”. I mean, dictionaries
return an optional. Lots of things return optionals. I have to deal with
them all over the place in Swift code.

I think the tradeoff of quicker-to-write code vs more performant code is
completely reasonable. Ideally everything would happen instantly, but we
really can’t get away from making *some* tradeoffs here.

If I just need something that works, I can use ==? and handle the nil
cases. If unwrapping an optional is untenable from a speed perspective in
a particular case for some reason, then I think it is completely reasonable
to have the author additionally write optimized versions specializing based
on additional information which is known (e.g. FloatingPoint or Equatable).

No, it's not the cost of unwrapping the result, it's the cost of computing
the result, which is much higher than the single machine instruction that
is IEEE floating-point equivalence. The point of `Numeric` is to make it
possible to write generic algorithms that do meaningful math with either
integer or floating-point types. If the only way to write such an algorithm
with reasonable performance is to specialize one version for integers and
another for floating-point values, then `Numeric` serves no purpose as a
protocol.

Note that I am mostly talking about library code here. Once you build up
a library of functions on Numeric that handle this correctly, you can use
those functions as building blocks, and you aren’t even worrying about ==
for the most part. For example, if we build a version of index(of:) on
collection which works for our MostlyEquatable protocol, then we can pass
Numeric to it generically. Whether they decided it was important enough to
put in an optimization for FloatingPoint or not, it doesn’t affect the way
we call it. It could even have only a generic version for years, and then
gain an optimization later if it became important.

You cannot do this for most collection algorithms, because they are mostly
protocol extension methods that can be shadowed but not overridden. But
again, that's not what I'm talking about. I'm talking about writing
_generic numeric algorithms_, not using numeric types with generic
collection algorithms.
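A small illustration of "shadowed but not overridden" (types and method names are made up): an extension method that is not a protocol requirement is dispatched statically, so a generic caller never reaches the more specific version.

protocol Tagged { }
extension Tagged { func tag() -> String { return "generic" } }
extension Tagged where Self: BinaryFloatingPoint {
    func tag() -> String { return "floating-point" }
}
extension Double: Tagged { }

func tagThroughProtocol<T: Tagged>(_ value: T) -> String {
    return value.tag()               // always resolves to the unconstrained version
}

print((1.0).tag())                   // "floating-point": concrete type known at the call site
print(tagThroughProtocol(1.0))       // "generic": the specialization is shadowed, not overridden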

I mean, it would be nice if Float could truly conform to Equatable, but it
would also be nice if I didn’t have to check for null pointers. It would
certainly be faster if instead of unwrapping optionals, I could just use
pointers directly. It would even work most of the time… because I would be
careful to remember to add checks where they were really important… until I
forget, and then there is a bug! This kind of premature optimization has
cost our economy literally Trillions of dollars.

We have optionals for exactly this reason in Swift. It forces us to take
those things which will "work fine most of the time”, and consider the case
where it won’t. I know it is slightly faster not to consider that case,
but that is exactly why this is a notorious source of bugs.

You write as though it's a foregone conclusion that Float cannot conform
to Equatable. I disagree. My starting point is that Float *can*--and in
fact *must*--conform to Equatable; the question I'm asking is, how must
Equatable be designed such that this can be possible?

···

On Thu, Oct 26, 2017 at 4:34 PM, Jonathan Hull <jhull@gbis.com> wrote:

On Oct 26, 2017, at 11:47 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
On Thu, Oct 26, 2017 at 1:30 PM, Jonathan Hull <jhull@gbis.com> wrote:

Both concepts must be exposed in a protocol-based manner to accommodate
all use cases. It will not do to say that exposing both concepts will
confuse the user, because the fact remains that both concepts are already
and unavoidably exposed, but sometimes without a way to express the
distinction in code or any documentation about it. Disappearing the notion
of partial equivalence from protocols removes legitimate use cases.

On the contrary, I am saying we should make the difference explicit.

On Oct 26, 2017, at 11:01 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Thu, Oct 26, 2017 at 11:50 AM, Jonathan Hull <jhull@gbis.com> wrote:

On Oct 26, 2017, at 9:40 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Thu, Oct 26, 2017 at 11:38 AM, Jonathan Hull <jhull@gbis.com> wrote:

On Oct 26, 2017, at 9:34 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Thu, Oct 26, 2017 at 10:57 AM, Jonathan Hull <jhull@gbis.com> wrote:

On Oct 26, 2017, at 8:19 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Thu, Oct 26, 2017 at 07:52 Jonathan Hull <jhull@gbis.com> wrote:

On Oct 25, 2017, at 11:22 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Wed, Oct 25, 2017 at 11:46 PM, Jonathan Hull <jhull@gbis.com> >>>>>> wrote:

As someone mentioned earlier, we are trying to square a circle here.
We can’t have everything at once… we will have to prioritize. I feel like
the precedent in Swift is to prioritize safety/correctness with an option
ignore safety and regain speed.

I think the 3 point solution I proposed is a good compromise that
follows that precedent. It does mean that there is, by default, a small
performance hit for floats in generic contexts, but in exchange for that,
we get increased correctness and safety. This is the exact same tradeoff
that Swift makes for optionals! Any speed lost can be regained by
providing a specific override for FloatingPoint that uses ‘&==‘.

My point is not about performance. My point is that `Numeric.==` must
continue to have IEEE floating-point semantics for floating-point types and
integer semantics for integer types, or else existing uses of `Numeric.==`
will break without any way to fix them. The whole point of *having*
`Numeric` is to permit such generic algorithms to be written. But since
`Numeric.==` *is* `Equatable.==`, we have a large constraint on how the
semantics of `==` can be changed.

It would also conform to the new protocol and have it’s Equatable
conformance depreciated. Once we have conditional conformances, we can add
Equatable back conditionally. Also, while we are waiting for that, Numeric
can provide overrides of important methods when the conforming type is
Equatable or FloatingPoint.

For example, if someone wants to write a generic function that works

both on Integer and FloatingPoint, then they would have to use the new
protocol which would force them to correctly handle cases involving NaN.

What "new protocol" are you referring to, and what do you mean about
"correctly handling cases involving NaN"? The existing API of `Numeric`
makes it possible to write generic algorithms that accommodate both integer
and floating-point types--yes, even if the value is NaN. If you change the
definition of `==` or `<`, currently correct generic algorithms that use
`Numeric` will start to _incorrectly_ handle NaN.
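
For concreteness, here is a hypothetical example of such an algorithm,
written against today’s `Numeric`; it handles NaN inputs correctly only
because `==` keeps its IEEE semantics in the generic context:

func countMatches<T: Numeric>(of value: T, in values: [T]) -> Int {
  return values.reduce(0) { count, element in
    element == value ? count + 1 : count
  }
}

countMatches(of: 2, in: [1, 2, 2, 3])             // 2
countMatches(of: 2.0, in: [1.0, 2.0, .nan, 2.0])  // 2; NaN never matches anything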

#1 from my previous email (shown again here):

Currently, I think we should do 3 things:

1) Create a new protocol with a partial equivalence relation with a
signature of (T, T) -> Bool? and automatically conform Equatable things to it
2) Deprecate Float, etc.’s Equatable conformance with a warning
that it will eventually be removed (and conform Float, etc. to the partial
equivalence protocol)
3) Provide an ‘&==’ relation on Float, etc. (without a protocol)
with the native Float IEEE comparison

In this case, #2 would also apply to Numeric. You can think of the
new protocol as a failable version of Equatable, so in any case where it
can’t meet Equatable’s rules, it returns nil.
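
A minimal sketch of that protocol (all names hypothetical; how Numeric
would adopt it is elided):

infix operator ==? : ComparisonPrecedence

// A failable equivalence test: nil whenever Equatable's guarantees
// (reflexivity in particular) cannot be met for the given operands.
protocol PartiallyEquatable {
  static func ==? (lhs: Self, rhs: Self) -> Bool?
}

// A default drawn from Equatable, so conforming an existing Equatable
// type is trivial (each type would still have to declare the conformance).
extension PartiallyEquatable where Self: Equatable {
  static func ==? (lhs: Self, rhs: Self) -> Bool? { return lhs == rhs }
}

// Floating-point types answer nil exactly where reflexivity fails.
extension Float: PartiallyEquatable {
  static func ==? (lhs: Float, rhs: Float) -> Bool? {
    if lhs.isNaN || rhs.isNaN { return nil }
    return lhs == rhs
  }
}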

Again, Numeric makes possible the generic use of == with
floating-point semantics for floating-point values and integer semantics
for integer values; this design would not.

Correct. I view this as a good thing, because another way of saying
that is: “it makes possible cases where == sometimes conforms to the rules
of Equatable and sometimes doesn’t.” Under the solution I am advocating,
Numeric would instead allow generic use of ‘==?’.

I suppose an argument could be made that we should extend ‘&==‘ to
Numeric from FloatingPoint, but then we would end up with the Rust
situation you were talking about earlier…

This would break any `Numeric` algorithms that currently use `==`
correctly. There are useful guarantees that are common to integer `==` and
IEEE floating-point `==`; namely, they each model equivalence of their
respective types at roughly what IEEE calls "level 1" (as numbers, rather
than as their representation or encoding). Breaking that utterly
eviscerates `Numeric`.

Nope. They would continue to work as they always have, but would have
a deprecation warning on them. The authors of those algorithms would have
a full deprecation cycle to update the algorithms. Fix-its would be
provided to make conversion easier.

After the deprecation cycle, Numeric would no longer guarantee a common
"level 1" comparison for conforming types.

It would, using ==?; you would just be forced to deal with the
possibility of the equality relation not holding. '(a ==? b) == true'
would mimic the current behavior.
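
Continuing the hypothetical sketch above, generic code would have to
decide explicitly what a nil result means for the algorithm at hand:

// Find the position of the first element that is definitely equal to `value`.
func index<T: PartiallyEquatable>(of value: T, in values: [T]) -> Int? {
  for (i, element) in values.enumerated() {
    guard let isEqual = element ==? value else {
      continue  // equivalence undefined (e.g. NaN); this particular caller skips it
    }
    if isEqual { return i }
  }
  return nil
}

Where the status quo is genuinely what is wanted, collapsing the three
cases with '(a ==? b) == true' recovers it.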

What are the semantic guarantees required of `==?` such that this would
be guaranteed to be the current behavior? How would this be implementable
without being so costly that, in practice, no generic numeric algorithms
would ever use such a facility?

Moreover, if `(a ==? b) == true` guarantees the current behavior for all
types, and all currently Equatable types will conform to this protocol,
haven't you just reproduced the problem seen in Rust's `PartialEq`, only
now with clumsier syntax and poorer performance?

Is it the _purpose_ of this design to make it clumsier and less
performant so people don't use it? If so, to the extent that it is an
effective deterrent, haven't you created a deterrent to the use of Numeric
to an exactly equal extent?

Now you are just being rude. We all want Swift to be awesome… let’s try
to keep things civil.

Sorry if my reply came across that way! That wasn't at all the intention.
I really mean to ask you those questions and am interested in the answers:

Unless I misunderstand, you're arguing that your proposal is superior to
Rust's design because of a new operator that returns `Bool?` instead of
`Bool`; if so, how is it that you haven't reproduced Rust's design problem,
only with the additional syntax involved in unwrapping the result?

And if, as I understand, your argument is that your design is superior to
Rust's *because* it requires unwrapping, then isn't the extent to which
people will avoid using the protocol unintentionally also equally and
unavoidably the same extent to which it makes Numeric more cumbersome?

You said it was impossible, so I gave you a very quick example showing
that the current behavior was still possible. I wasn’t recommending that
everyone should only ever use that example for all things.

For FloatingPoint, ‘(a &== b) == true’ would mimic the current behavior
(bugs and all). It may not hold for all types.

No, the question was how it would be possible to have these guarantees
hold for `Numeric`, not merely for `FloatingPoint`, as the purpose is to
use `Numeric` for generic algorithms. This requires additional semantic
guarantees on what you propose to call `&==`.

Would something like this work?

Numeric.== -> Bool
traps on NaN etc.

This is unsatisfactory for several reasons:

- If it is not tolerable for NaN to trap when doing math with
floating-point values (and the very notion of "quiet NaN" is predicated on
that insight), then it cannot be tolerable for NaN to trap in generic
numeric code.

- As the whole raison d'être of `Numeric` is to permit useful generic
numeric algorithms, `Numeric.==` must offer the best practicable
approximation of mathematical equality for any conforming type. On a
concrete numeric type, it would be exceedingly user-hostile if `==` did not
represent the best practicable approximation of mathematical equality for
that type. Therefore, `Numeric.==` must be the same operator as
`FloatingPoint.==` and `Integer.==`. Despite necessary differences between
floating-point and integer values, these two concrete operators are spelled
the same way because they are both the best practicable approximations of
mathematical equality for the numeric values that their respective types
attempt to model (see below). If `Numeric.==` does not offer the closest
approximations of mathematical equality available for conforming types,
there is little point to offering `Numeric` as a generic protocol.

Numeric.==? -> Bool?

returns nil on NaN etc. You likely don't want this unless you know
something about floating-point.

Numeric.&== -> Bool
is IEEE equality. You should not use this unless you are a floating-point
expert.

I think we are proceeding from different starting points here.

It would be contrary to all sense to have a method named `Int.==` be
anything other than the best practicable approximation of mathematical
equality for `Int`. The same holds for floating-point types.

Either the IEEE definition of floating-point equality is the best such
approximation, or it is not. If it is not, then IEEE equality should not be
spelled `==` on any type or in any context. But, having weighed all the
alternatives, a committee of floating-point experts has blessed this
definition over others. As I understand it, this definition treats the
sequence of bits as the real number it attempts to represent to the
greatest extent possible, abstracting away encoding and representation
issues, and it excludes from the relation all NaNs because they are not in
the domain of real numbers.
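
In Swift, that definition is directly observable today:

let x = Double.nan
x == x         // false: NaN is excluded from the equivalence relation
x != x         // true:  the standard, generic-friendly test for NaN
-0.0 == 0.0    // true:  distinct encodings, same number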

So my starting point, then, is that (based on IEEE expertise) there is one
and only one proper definition of `==` for floating-point types, and that
it is the IEEE definition. You *should* use this definition in all places
to test for whether two floating-point values are equal. And Swift *should*
present IEEE equality as *the* go-to operator for equivalence of
floating-point values (which the core team has already declared on this
list to represent the uncountable set of real numbers and not the finite
set of representable numbers).

A proper design for `Equatable` and `Comparable` would accommodate
floating-point types while also making it possible to write generic
algorithms that behave correctly. It should be a non-goal to make
floating-point `==` anything other than what it is (i.e., IEEE-compliant).
Nor is it necessary (or, perhaps, even desirable) to eliminate
consideration of NaN from generic code. The only goal here (or at least, my
only goal here) is to ensure that writing generic code that uses `==` which
behaves properly with NaN is no more difficult than writing
floating-point-specific code that uses `==` which behaves properly with NaN.

The experts can get high performance or sophisticated numeric behavior. The
rest of us who naïvely use == get a relatively foolproof floating-point
model. (There is no difference among these three operators for fixed-size
integers, of course.)

This is analogous to what Swift does with integer overflow. I would
further argue the other Numeric operators like + should be extended to the
same triple of trap or optional or just-do-it. We already have two of those
three operators for integer addition after all.
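
For reference, the integer side of that triple as it exists in the
standard library today (the “optional” slot is filled by a reporting
method rather than a ‘+?’ operator):

let x = Int.max
let wrapped = x &+ 1                          // wraps around to Int.min
let (sum, didOverflow) = x.addingReportingOverflow(1)
// sum == Int.min, didOverflow == true; `x + 1` itself would trap at runtime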

Numeric.+ -> T
traps on FP NaN and integer overflow

Again, `Numeric.+` is and must be the same as `FloatingPoint.+` and
`Integer.+`. They are spelled the same way because they are both the best
practicable approximations of mathematical addition for the numeric values
that their respective types attempt to model. Wraparound is an inferior
approximation of mathematical addition, for example, because its semantics
take into consideration the underlying representation of the integral value
as a fixed-length series of bits.

Numeric.+? -> T?
returns nil on FP NaN and integer overflow

Numeric.&+ -> T
performs FP IEEE addition and integer wraparound

These two operations have entirely distinct semantics. No useful generic
algorithm could be written that uses this operator correctly.

···

On Thu, Oct 26, 2017 at 4:57 PM, Greg Parker <gparker@apple.com> wrote:

On Oct 26, 2017, at 11:47 AM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:
On Thu, Oct 26, 2017 at 1:30 PM, Jonathan Hull <jhull@gbis.com> wrote:

The whole point is that you have to put thought into how you want to deal
with the optional case where the relation’s guarantees have failed.

If you need full performance, then you would have separate overrides on
Numeric for members which conform to FloatingPoint (where you could use
&==) and Equatable (where you could use ==). As you get more generic, you
lose opportunities for optimization. That is just the nature of generic
code. The nice thing about Swift is that you have an opportunity to
specialize if you want to optimize more. Once things like conditional
conformances come online, all of this will be nicer, of course.
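
A sketch of that overload structure, reusing the hypothetical
`==?`/`PartiallyEquatable` names from earlier and assuming the proposed
`&==`:

infix operator &== : ComparisonPrecedence

extension FloatingPoint {
  // The proposed raw IEEE comparison, untouched by any change to Equatable.
  static func &== (lhs: Self, rhs: Self) -> Bool { return lhs.isEqual(to: rhs) }
}

// Fully generic version: pays for handling the Optional on every comparison.
func containsValue<T: PartiallyEquatable>(_ value: T, in values: [T]) -> Bool {
  return values.contains { ($0 ==? value) == true }
}

// More constrained overload, preferred when both apply: plain IEEE
// comparison, no Optional overhead.
func containsValue<T: PartiallyEquatable & FloatingPoint>(_ value: T, in values: [T]) -> Bool {
  return values.contains { $0 &== value }
}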

This is a non-starter then. Protocols must enable useful generic code.
What you're basically saying is that you do not intend for it to be
possible to use methods on `Numeric` to ask about level 1 equivalence in a
way that would not be prohibitively expensive. This, again, eviscerates the
purpose of `Numeric`.

I'm not sure that there is a performance problem. If your compiled code is
actually making calls to generic comparison functions then you have already
lost the high performance war. Any place where the compiler knows enough to
use a specialized comparison function should also be a place where the
compiler can optimize away unnecessary floating-point checks.

Let me make an analogous objection to the current Numeric design. How do
you get the highest performance addition operator in a generic context?
Currently you can't, because Numeric.+ checks for integer overflow.

The point I'm making here, again, is that there are legitimate uses for
`==` guaranteeing partial equivalence in the generic context. The
approximation being put forward over and over is that generic code always
requires full equivalence and concrete floating-point code always requires
IEEE partial equivalence. That is _not true_. Some generic code (for
instance, that which uses `Numeric`) relies on partial equivalence
semantics and some floating-point code can nonetheless benefit from a
notion of full equivalence.
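
One hypothetical example of the latter: concrete floating-point code that
wants representation-level substitutability rather than numeric equality,
say when verifying that a round trip preserves every value:

func isIdentical(_ a: Double, _ b: Double) -> Bool {
  // Distinguishes -0.0 from 0.0, and NaN payloads from each other.
  return a.bitPattern == b.bitPattern
}

let values: [Double] = [1.5, 0.0, -0.0, .infinity, .nan]
let roundTripped = values               // stand-in for an encode/decode round trip
let preserved = zip(values, roundTripped).allSatisfy { isIdentical($0.0, $0.1) }
// preserved == true; under IEEE ==, the NaN element alone would make the check fail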

I agree that providing a way to get IEEE equality in a generic context is
useful. I am not convinced that Numeric.== -> Bool is the right place to
provide it.

--
Greg Parker gparker@apple.com Runtime Wrangler