Rationalizing FloatingPoint conformance to Equatable

Ben Cohen asked to continue this conversation on swift-dev--

Included in PR #12503 is a small tweak to accommodate so-called
"exceptional values" (such as NaN) in comparisons of array equality.

Currently, `Array.==` first looks to referential equality of underlying
buffers, then (if false) compares elementwise equivalence as defined by
`Element.==`. As a consequence, an array of NaN compares equal to another
array of NaN if they share the same buffer, but otherwise compares unequal.
This is an undesirable outcome.
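To make that concrete, here is a small demonstration (assuming the referential fast path described above is still in place):

```swift
// Demonstrates the behavior described above, assuming `Array.==` takes
// the referential fast path before falling back to elementwise `==`.
let a: [Double] = [.nan]
let b = a                  // `b` shares `a`'s underlying buffer
let c: [Double] = [.nan]   // same contents, but a distinct buffer

print(a == b)  // true: buffer identity short-circuits the NaN comparison
print(a == c)  // false: elementwise `Element.==`, and NaN != NaN
```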

In my analysis, this problem in fact comprises two issues.

# The proximate issue

The proximate issue is that `Array.==` exposes an implementation detail
that the user should not need to reason about. Regardless of whether it is
wise for the partial equivalence relation `FloatingPoint.==` to be deemed
semantically satisfactory for the purposes of `Equatable`, the fact remains
that Swift deliberately ships with such a conformance, and `Array.==`
should not blow up in the face of that explicitly contemplated design
decision. This PR resolves that proximate issue.

I believe that this fix is strictly an improvement upon the status quo and
orthogonal to the second, more distal issue.

# The distal issue

The distal issue has to do with `FloatingPoint.==` and the conformance of
that protocol to `Equatable`.

The last time this topic was brought up, it proved impossible to reconcile
the competing demands voiced by different people:

1) Equatable `==` must be a full equivalence relation.

2) Floating-point types must conform to `Equatable` and not to some
alternative such as `PartiallyEquatable`.

3) Floating-point `==` must be IEEE-compliant (at least in the concrete
context), and therefore *not* a full equivalence relation.

4) IEEE-compliant floating-point equivalence must be spelled `==` and not
some alternative such as `&==`.

5) Behavior of `==` must not change between generic and concrete contexts.

Any solution must abolish one or more of the above must-haves. The
questions to be settled before any API-changing issues are broached are
therefore:

A) Must `Equatable.==` be a full equivalence relation?

By explicitly allowing exceptions in the documentation written for
`Comparable`, and by vending `FloatingPoint : Equatable`, the Swift
standard library currently tolerates `==` being a partial equivalence
relation, but some standard library algorithms assume that it is a full
equivalence relation.

In my view, this is an open topic for debate, as a non-trivial number of
generic algorithms can tolerate a partial equivalence relation (if NaN !=
NaN, then it's perfectly correct for Set not to deduplicate NaNs). Other
algorithms can be explicitly documented to require that all elements lie
within the domain on which `==` is a full equivalence relation, and/or to
take a custom predicate.
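For example, `Set`'s treatment of NaN follows directly from IEEE `==` (a quick check, assuming current standard library semantics):

```swift
// With IEEE `==`, each inserted NaN is unequal to every other,
// so Set does not deduplicate them; arguably that is correct.
var s: Set<Double> = []
s.insert(.nan)
s.insert(.nan)

print(s.count)           // 2: the second NaN did not match the first
print(s.contains(.nan))  // false: `contains` relies on `==`
```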

B) Must floating-point types conform to `Equatable`?

In my view, this is clear, and I don't believe it's a very controversial
opinion. Chris Lattner (I think?) was the one who pointed out that the Rust
experience with PartialEq and Eq, PartialOrd and Ord has been unsuccessful
because the same problems are being punted to PartialEq and PartialOrd. The
idea is that people avoid writing generic algorithms that exclude the use
of floating-point types; therefore, whatever protocol guarantees some
relation named `==` and includes both floating-point types and other types
will be the one that's used for generic algorithms, even if users need full
equivalence semantics not guaranteed by that protocol.

C) Must (concrete) floating-point `==` be IEEE-compliant?

In my view, yes; the alternative would make Swift utterly unsuitable for
numerics. I don't believe this is very controversial.

D) Must floating-point IEEE-compliant equivalence be spelled `==`?

In my view, this is something open for debate. I see no reason why it
cannot be migrated to `&==` if it were felt that `==` *must* be a full
equivalence relation. I believe this is controversial, however.

E) Must the behavior of `==` be the same in generic and concrete contexts?

In my view, it absolutely must be. Differing behavior in generic and
concrete contexts is simply too subtle to be understandable to the reader.
The explanation that a method on `Float` is a "floating-point context" but
a method on `[Float]` is *not a "floating point context"* is, IMO,
indefensible.

I actually got partway through writing up a pitch on this yesterday, but my opinion is that NaNs are so exceptional, and so prone to misuse, that we ought to treat them like integer arithmetic overflows: trap when they're detected, unless you use an `&` variant operator which indicates you know what you're doing.

I strongly suspect that, in practice, most float-manipulating code is not prepared to handle NaN and will not do anything sensible in its presence. For example, Apple platforms use floating-point types for geometry, color components, GPS locations, etc. Very little of this code will do anything sensible in the presence of a NaN. Arguably, it'd be better to exclude them through the type system, but I don't think that's a realistic possibility—we would need to have done that in a more source-break-friendly era. But that doesn't have to mean we're completely stuck.
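A rough sketch of that pitch's shape (entirely hypothetical: no trapping `==` or `&==` exists today, and `checkedEquals` is a stand-in name):

```swift
// Hypothetical sketch only: mirrors the `+` vs `&+` split for integer
// overflow. None of this is current Swift API.
infix operator &== : ComparisonPrecedence

extension FloatingPoint {
    /// Unchecked IEEE 754 equality: NaN compares unequal to everything,
    /// including itself.
    static func &== (lhs: Self, rhs: Self) -> Bool {
        lhs == rhs
    }

    /// What a checked comparison might do: trap when a NaN appears,
    /// the way integer `+` traps on overflow.
    func checkedEquals(_ other: Self) -> Bool {
        precondition(!isNaN && !other.isNaN,
                     "NaN passed to checked floating-point comparison")
        return self == other
    }
}
```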

···

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

D) Must floating-point IEEE-compliant equivalence be spelled `==`?

In my view, this is something open for debate. I see no reason why it cannot be migrated to `&==` if it were felt that `==` *must* be a full equivalence relation. I believe this is controversial, however.

--
Brent Royal-Gordon
Architechies

Differing behavior in generic and concrete contexts is simply too subtle to be understandable to the reader.

Hardly more subtle than the current “Equatable works like this, with these strong guarantees. Oh, except for some cases it doesn’t, in which case ¯\_(ツ)_/¯”

I realize that Comparable admits an exception for FP. This is, IMO, a serious mistake and needs to be reversed. Equatable has no such exception and rightly so.

The clearest demonstrations of how flawed this approach is can be found in the Standard Library. You can throw a brick at it and hit an example of something that’s broken by the presence of .nan: random sort orders, == implementations that differ based on the identity of the buffer, arbitrary duplication in Set/Dictionary etc.

The important point to take from this is not “how do we fix the Standard Library?” but rather “these errors are easy to make” by anyone writing generic code using standard protocols. If the Standard Library can’t get these right, how can we expect others to? There are potentially far worse bugs that could result. A differently-written sorting algorithm could corrupt elements (because it relied on substitutability). Other sorting or searching algorithms could easily go into an infinite loop. These problems exist because the code relies on the documented behavior of the protocol – and it must be able to, because if you can’t rely on documented behavior, then what is the point in documenting it?

I don’t support solutions such as adding a property indicating “containsExceptionalValues” (whatever that means), and expecting every author of a generic algorithm that uses Equatable to remember to call it, and craft custom paranoid behavior (if there is any reasonable behavior) based on it. With recursive conformance landed on master, we finally have a generics system where writing algorithms against Collection can be considered approachable by ordinary users. You no longer have to know things like how Collection.SubSequence needs to be constrained to also be a Collection – it just is. We would be negating this good work to now introduce a whole new set of gotchas that we expect people to know (without the type system even helping them in this case) about how some types, including standard library types, flout the documented rules for Equatable and Comparable, and that you need to use one of a handful of properties to hack in special cases to handle it.

And as a user of algorithms, what should you do? If a generic algorithm doesn’t document how it handles these special cases, should you assume it doesn’t? Check the code? Experiment to find out?

This problem also spreads, virus-like, once we have conditional conformance that makes containers equatable when their elements are. [Double] would need to propagate its elements’ “exceptionality", to avoid problems with [Double]. Double? will have to do the same.

The explanation that a method on `Float` is a "floating-point context" but a method on `[Float]` is *not a "floating point context"* is, IMO, indefensible.

Nevertheless, I will attempt to defend it :)

I find it odd that violating the documented requirements of a protocol is considered defensible, but expecting types to comply with those requirements is indefensible. A principled stance would be to say that Float shouldn’t conform to Equatable (because… it doesn’t!) and requiring all calls to supply a predicate (and maybe amending types like Dictionary to allow you to supply one). That won’t fly though – users would complain – so instead we are in this murky ground.

Later in the thread, you mention a possible fix for sort:

`sort()` is problematic, but not if a custom predicate is supplied.

So, we are potentially trading off one subtlety (that < behaves differently in generic and non-generic contexts) for another (that you need to know that you need to pass in a special predicate for sorting, or you get nonsense results). Knowing when an algorithm requires you to supply a predicate (like sort) vs when handling for the special case is built in (like equatable) seems a far worse complication to me than knowing one rule: that generically when constrained to Comparable, Float adheres to the requirements of Comparable. Always. That is a consistent rule that you need to learn once and that doesn’t vary depending on which algorithm you’re using.

Another alternative proposed in previous threads is to give Comparable an additional operator (`<=>` or `.compare(to:)`) that will always enforce a total ordering, and sort can use that. This is, afaict, C#’s solution – double.NaN < 1.0, 1.0 < double.NaN and double.NaN == double.NaN all return false, but Comparer<double>.Default.compare returns -1, 1 and 0 respectively.

This is essentially a backdoor to having one comparison concretely, and another generically, except whether you get that different behavior is left to the whim of the algorithm’s implementor, depending on whether they used < or <=>. And what should sets use? Should a hashed set use == and allow multiple nan, but an ordered set use <=> and exclude them?
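For reference, Swift's standard library does already expose IEEE 754's total order as `isTotallyOrdered(belowOrEqualTo:)`, which is the kind of relation such a `<=>` would standardize; used as a sort predicate, it gives NaN a well-defined position:

```swift
let values: [Double] = [2.0, .nan, 1.0, -0.0, 0.0]

// `!$1.isTotallyOrdered(belowOrEqualTo: $0)` is a strict total order,
// so it is a valid `areInIncreasingOrder` predicate even with NaN present.
let ordered = values.sorted { !$1.isTotallyOrdered(belowOrEqualTo: $0) }
print(ordered)  // [-0.0, 0.0, 1.0, 2.0, nan]
```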

- `index(of:)` works perfectly sensibly without such a relation; if no NaN is equal to any other NaN, `index(of: .nan)` is appropriately `nil`.

It works sensibly when you are calling it from a concrete context, and you know that what you have is a collection of floating point elements.

But it does not work sensibly if you are using index(of:) as a building block in a wider generic algorithm on Collection, which relies on the rules set out by Equatable. I find myself using index(of:) within generic algorithms frequently, and I really don’t know how I should reason about its use if it’s accepted that the rules of Equatable just don’t hold in all cases. When operating generically more than one level deep like this, you really need to be able to rely on those rules.
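For reference, the concrete behavior at issue (shown with `firstIndex(of:)`, the later spelling of the thread's `index(of:)`):

```swift
let xs: [Double] = [1.0, .nan, 2.0]

print(xs.firstIndex(of: 2.0) as Any)   // Optional(2)
print(xs.firstIndex(of: .nan) as Any)  // nil: no NaN equals any NaN
print(xs.contains(.nan))               // false, for the same reason
print(xs.contains { $0.isNaN })        // true: a predicate sidesteps `==`
```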

This leads to the other argument against having algorithms generic over Equatable behave differently to ones on Float/Double (or FloatingPoint): that when you take your FP algorithm, and then loosen the constraints, it changes behavior in a way that may no longer be correct.

But we already have the opposite case, where algorithms are written generically and then don’t work correctly for floats. I would argue that is far more harmful and subtle an issue. I honestly don’t think the situation where someone writes an algorithm first against floats, then generalizes it, is a very common path to creating a generic algorithm. The inverse – where someone writes an algorithm using Equatable, and then people use it with floats – is certainly common, hence this discussion.

There really is no way to square this circle. Every option is going to have downsides. We have to balance correctness, least surprise/most expected behavior for most people, and consistency. For me, making generic use of Equatable and Comparable stick to the documented conformance generically, while keeping FP-specific uses the way they are, is the least bad option.

···

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

+1 for trapping unless using &==. In the case of ‘Float?’ we could also map to nil.

This is probably a more appropriate discussion for evolution though...

···

On Oct 19, 2017, at 9:48 PM, Brent Royal-Gordon via swift-dev <swift-dev@swift.org> wrote:

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

D) Must floating-point IEEE-compliant equivalence be spelled `==`?

In my view, this is something open for debate. I see no reason why it cannot be migrated to `&==` if it were felt that `==` *must* be a full equivalence relation. I believe this is controversial, however.

I actually got partway through writing up a pitch on this yesterday, but my opinion is that NaNs are so exceptional, and so prone to misuse, that we ought to treat them like integer arithmetic overflows: trap when they're detected, unless you use an `&` variant operator which indicates you know what you're doing.

I strongly suspect that, in practice, most float-manipulating code is not prepared to handle NaN and will not do anything sensible in its presence. For example, Apple platforms use floating-point types for geometry, color components, GPS locations, etc. Very little of this code will do anything sensible in the presence of a NaN. Arguably, it'd be better to exclude them through the type system, but I don't think that's a realistic possibility—we would need to have done that in a more source-break-friendly era. But that doesn't have to mean we're completely stuck.

--
Brent Royal-Gordon
Architechies


Hi Ben,

Out of curiosity, what do you think about breaking Equatable into two protocols: Equatable and Substitutable? The former would be defined in terms of mathematics, while the latter is simple to define and usable by collections, etc. For example, while +0 equals -0, they are not substitutable (because 1.0 / +0.0 == +inf BUT 1.0 / -0.0 == -inf). In the case of NaNs, substitutability would depend on the NaN payload being the same or not (because some later subsystem might interpret the NaN payload). Finally, and unless I’m missing something, floating-point “substitutability” would translate to bitwise equality at the hardware level, which is nice from a performance perspective.
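The claims about zeros and payloads check out concretely:

```swift
let pz = 0.0
let nz = -0.0

print(pz == nz)                        // true: IEEE `==` says equal
print(1.0 / pz)                        // inf
print(1.0 / nz)                        // -inf: equal, yet not substitutable
print(pz.bitPattern == nz.bitPattern)  // false: bitwise equality catches it

// NaN payloads are likewise distinguishable only bitwise:
let n1 = Double(nan: 1, signaling: false)
let n2 = Double(nan: 2, signaling: false)
print(n1 == n2)                        // false: NaN != NaN regardless
print(n1.bitPattern == n2.bitPattern)  // false: different payloads
print(n1.bitPattern == n1.bitPattern)  // true: bitwise equality is reflexive
```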

On the topic of sorting, we could do the same thing, where we break Comparable into two protocols: Comparable and Sortable. The former would be defined in terms of mathematics, while the latter has no mathematical obligation and therefore values like +0 and -0 can be consistently sorted. The same goes for NaNs with payloads (they can be sorted). And finally, at the hardware level, this should be quite efficient because of how the floating point bits are laid out (in that an integer comparison of FP bits should work for this Sortable proposal).
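The bit-level trick alluded to can be sketched as follows (`sortKey` is a hypothetical helper, not a standard library API):

```swift
// Map each Double to an Int64 key whose natural integer ordering is the
// IEEE 754 total order: non-negative bit patterns already sort correctly,
// and flipping the magnitude bits of negative ones reverses their order.
func sortKey(_ x: Double) -> Int64 {
    let b = Int64(bitPattern: x.bitPattern)
    return b < 0 ? b ^ 0x7FFF_FFFF_FFFF_FFFF : b
}

let values: [Double] = [1.0, -0.0, .nan, 0.0, -2.0, .infinity]
let ordered = values.sorted { sortKey($0) < sortKey($1) }
print(ordered)  // [-2.0, -0.0, 0.0, 1.0, inf, nan]
```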

Dave

···

On Oct 24, 2017, at 14:55, Ben Cohen via swift-dev <swift-dev@swift.org> wrote:

There really is no way to square this circle. Every option is going to have downsides. We have to balance correctness, least surprise/most expected behavior for most people, and consistency. For me, making generic use of Equatable and Comparable stick to the documented conformance generically, while keeping FP-specific uses the way they are, is the least bad option.

Differing behavior in generic and concrete contexts is simply too subtle
to be understandable to the reader.

Hardly more subtle than the current “Equatable works like this, with these
strong guarantees. Oh, except for some cases it doesn’t, in which case
¯\_(ツ)_/¯”

I'm not saying that the status quo is a superior alternative.

However, one option is to _weaken_ the guarantees of Equatable such that it
guarantees only partial equivalence for `==`. From the perspective of
documented semantics, it's not subtle at all but a giant hammer of a
change. However, from an actual what-does-the-implementation-do standpoint,
it would be acknowledging what is already true. Only code that is already
broken when used with floating-point values would become formally
"incorrect" in the sense of relying on semantics that are then no longer
guaranteed.

Such a solution would avoid, as you might say, perpetuating the ¯\_(ツ)_/¯
approach to floating point.

I realize that Comparable admits an exception for FP. This is, IMO, a

serious mistake and needs to be reversed. Equatable has no such exception
and rightly so.

The clearest demonstrations of how flawed this approach is can be found in
the Standard Library. You can throw a brick at it and hit an example of
something that’s broken by the presence of .nan: random sort orders, ==
implementations that differ based on the identity of the buffer,

In my view, if a sort algorithm cannot accommodate NaN, it's entirely
acceptable to trap on NaN--and that is a trivial change. After all, NaN is
unordered with respect to everything and one cannot sort the unsortable.
And, as shown, the `Array.==` implementation is trivially fixable. The
entire standard library can be made NaN-safe in like manner.

arbitrary duplication in Set/Dictionary etc.

(I disagree that it's arbitrary. If NaN != NaN, then every NaN is properly
unique.)

The important point to take from this is not “how do we fix the Standard
Library?” but rather “these errors are easy to make” by anyone writing
generic code using standard protocols. If the Standard Library can’t get
these right, how can we expect others to? There are potentially far worse
bugs that could result. A differently-written sorting algorithm could
corrupt elements (because it relied on substitutability). Other sorting or
searching algorithms could easily go into an infinite loop. These problems
exist because the code relies on the documented behavior of the protocol –
and it must be able to, because if you can’t rely on documented behavior,
then what is the point in documenting it?

It's not that the standard library *can't* get these right, but that it
currently *doesn't*, because it documents one set of semantics but
implements another, then relies on documented semantics that it knows it
does not implement. We both agree that this needs to be fixed.

The question here is whether it is to be fixed by sticking to the
documented semantic guarantees of `==` and bringing all implementations
into proper conformance, or alternatively sticking to the implemented
behavior of `==` and aligning the documented semantic guarantees to that.

I don’t support solutions such as adding a property indicating
“containsExceptionalValues” (whatever that means), and expecting every
author of a generic algorithm that uses Equatable to remember to call it,
and craft custom paranoid behavior (if there is any reasonable behavior)
based on it. With recursive conformance landed on master, we finally have a
generics system where writing algorithms against Collection can be
considered approachable by ordinary users. You no longer have to know
things like how Collection.SubSequence needs to be constrained to also be a
Collection – it just is. We would be negating this good work to now
introduce a whole new set of gotchas that we expect people to know (without
the type system even helping them in this case) about how some types,
including standard library types, flout the documented rules for Equatable
and Comparable, and that you need to use one of a handful of properties to
hack in special cases to handle it.

The gotchas aren't new; they arise when using floating point values,
originate with the IEEE definition of floating point equivalence, and exist
in some form in every language that has implemented collections of floating
point values. Crucially, they exist today in Swift; we simply haven't
documented them.

And as a user of algorithms, what should you do? If a generic algorithm

doesn’t document how it handles these special cases, should you assume it
doesn’t? Check the code? Experiment to find out?

This problem also spreads, virus-like, once we have conditional
conformance that makes containers equatable when their elements are.
[Double] would need to propagate its elements’ “exceptionality", to
avoid problems with [Double]. Double? will have to do the same.

This isn't a _problem_. In fact, I consider this to be a key _feature_.
Naturally, every protocol conformance (conditional or not) must implement
all protocol requirements, so if we add additional requirements they must
be implemented. What I'm saying here is that *it may be desirable* to have
some protocol-based API to distinguish partial from full equivalence
relations. If you accept that premise, then it is the logical consequence
that if you conditionally conform `Array` to `Equatable`, you will have to
implement any new APIs, and in so doing, document how equivalence of arrays
of floating point values relates to floating point equivalence. For me,
this is a _good thing_: it documents _in code_ something that today is
muddled through.
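A sketch of how that could look in code (the protocol and property names here are hypothetical placeholders, per the bikeshedding caveat):

```swift
// Hypothetical: a refinement of Equatable that records, in code, whether
// `==` is only a partial equivalence relation for the type.
protocol ExceptionAwareEquatable: Equatable {
    static var containsExceptionalValues: Bool { get }
}

extension Double: ExceptionAwareEquatable {
    static var containsExceptionalValues: Bool { true }   // NaN
}

extension Int: ExceptionAwareEquatable {
    static var containsExceptionalValues: Bool { false }
}

// Conditional conformances propagate the flag outward, documenting how
// equivalence of the container relates to equivalence of its elements.
extension Array: ExceptionAwareEquatable
    where Element: ExceptionAwareEquatable {
    static var containsExceptionalValues: Bool {
        Element.containsExceptionalValues
    }
}

extension Optional: ExceptionAwareEquatable
    where Wrapped: ExceptionAwareEquatable {
    static var containsExceptionalValues: Bool {
        Wrapped.containsExceptionalValues
    }
}
```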

The explanation that a method on `Float` is a "floating-point context" but

a method on `[Float]` is *not a "floating point context"* is, IMO,
indefensible.

Nevertheless, I will attempt to defend it :)

I find it odd that violating the documented requirements of a protocol is
considered defensible, but expecting types to comply with those requirements
is indefensible. A principled stance would be to say that Float shouldn’t
conform to Equatable (because… it doesn’t!) and requiring all calls to
supply a predicate (and maybe amending types like Dictionary to allow you
to supply one). That won’t fly though – users would complain – so instead
we are in this murky ground.

I don't think we should defend violating the documented requirements of a
protocol. Either (a) Float should not conform to Equatable (agree, this is
a non-starter); (b) how Float conforms to Equatable should be brought into
conformance with documented semantics (your stance); or (c) what semantics
are documented should be brought into alignment with how conformance is
actually implemented (my stance). Naturally, in the last case, additional
APIs should be added as needed to make such reduced semantic guarantees
useful for generic algorithms.

Later in the thread, you mention a possible fix for sort:

`sort()` is problematic, but not if a custom predicate is supplied.

So, we are potentially trading off one subtlety (that < behaves
differently in generic and non-generic contexts) for another (that you need
to know that you need to pass in a special predicate for sorting, or you
get nonsense results). Knowing when an algorithm requires you to supply a
predicate (like sort) vs when handling for the special case is built in
(like equatable) seems a far worse complication to me than knowing one rule:
that generically when constrained to Comparable, Float adheres to the
requirements of Comparable. Always. That is a consistent rule that you need
to learn once and that doesn’t vary depending on which algorithm you’re
using.

I would argue that Float should _always_ adhere to the requirements of
Comparable, in all contexts. The question is, rather: what can be the
requirements of Comparable such that Float can always adhere to them?

Another alternative proposed in previous threads is to give Comparable an

additional operator (`<=>` or `.compare(to:)`) that will always enforce a total
ordering, and sort can use that. This is, afaict, C#’s solution –
double.NaN < 1.0, 1.0 < double.NaN and double.NaN == double.NaN all return
false, but Comparer<double>.Default.compare returns -1, 1 and 0
respectively.

This is, essentially, the endpoint of what I'm proposing.

Equatable would vend (modulo bikeshedding):
`==`, a partial equivalence relation
`~`, a full equivalence relation
`containsExceptionalValues` (yes, this is a deliberately terrible name,
because it's meant to go through bikeshedding), a Boolean value to indicate
whether `==` is the same as `~`

Comparable would vend (modulo bikeshedding):
`<`, `>`, `<=`, `>=`, defined as now
`<=>`, as in C# `compare` (or maybe, to emphasize the point, `<~>`)
`containsExceptionalValues`, inherited from `Equatable`, to document the
relationship between `<` (etc.) and the spaceship operator
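Pulled together, the sketch might look like this (hypothetical; every name is a bikeshedding placeholder, and `Comparable` would analogously add `<=>` next to `<`):

```swift
// Hypothetical shape only.
infix operator ~ : ComparisonPrecedence

protocol ProposedEquatable {
    static func == (lhs: Self, rhs: Self) -> Bool  // partial equivalence
    static func ~ (lhs: Self, rhs: Self) -> Bool   // full equivalence
    /// Whether `==` and `~` can ever disagree for this type.
    static var containsExceptionalValues: Bool { get }
}

extension Double: ProposedEquatable {
    // `==` is the existing IEEE 754 comparison; `~` additionally treats
    // all NaNs as one equivalence class, restoring reflexivity.
    static func ~ (lhs: Double, rhs: Double) -> Bool {
        (lhs.isNaN && rhs.isNaN) || lhs == rhs
    }
    static var containsExceptionalValues: Bool { true }
}
```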

This is essentially a backdoor to having one comparison concretely, and
another generically, except whether you get that different behavior is left
to the whim of the algorithm’s implementor, depending on whether they used
< or <=>. And what should sets use? Should a hashed set use == and allow
multiple nan, but an ordered set use <=> and exclude them?

This should absolutely be in the control of the implementor and not the
type system. It's superior to have the question of whether
`MyCollection.==` uses floating-point equivalence or not be a decision
that's made in the implementation of `==` and then explicitly recorded in
code (`containsExceptionalValues`); it *shouldn't* be a subtlety of whether
it's `MyCollection<T> where T : Equatable` that conditionally conforms to
`Equatable` or instead `MyCollection<T> where T : FloatingPoint`.

- `index(of:)` works perfectly sensibly without such a relation; if no NaN

is equal to any other NaN, `index(of: .nan)` is appropriately `nil`.

It works sensibly when you are calling it from a concrete context, and you
know that what you have is a collection of floating point elements.

But it does not work sensibly if you are using index(of:) as a building
block in a wider generic algorithm on Collection, which relies on the rules
set out by Equatable. I find myself using index(of:) within generic
algorithms frequently, and I really don’t know how I should reason about
its use if it’s accepted that the rules of Equatable just don’t hold in
all cases. When operating generically more than one level deep like this,
you really need to be able to rely on those rules.

But, since `index(of:)` is a generic extension method itself, having
different behaviors for `==` in the generic vs. concrete context would mean
that the behavior of this method *would no longer make sense if you are
using `index(of:)` from a concrete context* where you know you have a
collection of floating-point elements. And while one may write generic
algorithms frequently, I'll bet that the average user of Swift calls
`index(of:)` in concrete contexts far more frequently than in generic ones.

To follow your line of argument, you'd also want `index(of:)` to have
different behaviors in the generic vs. concrete context, just like `==`
does. Carrying this out consistently would ultimately require having an
alternative implementation on every collection type of every generic
algorithm such as `index(of:)`, using the constraint `Element :
FloatingPoint`.

This leads to the other argument against having algorithms generic over
Equatable behave differently to ones on Float/Double (or FloatingPoint):
that when you take your FP algorithm, and then loosen the constraints, it
changes behavior in a way that may no longer be correct.

···

On Tue, Oct 24, 2017 at 1:55 PM, Ben Cohen <ben_cohen@apple.com> wrote:

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

But we already have the opposite case, where algorithms are written
generically and then don’t work correctly for floats. I would argue that is
far more harmful and subtle an issue. I honestly don’t think the situation
where someone writes an algorithm first against floats, then generalizes
it, is a very common path to creating a generic algorithm. The inverse –
where someone writes an algorithm using Equatable, and then people use it
with floats – is certainly common, hence this discussion.

There really is no way to square this circle. Every option is going to
have downsides. We have to balance correctness, least surprise/most
expected behavior for most people, and consistency. For me, making generic
use of Equatable and Comparable stick to the documented conformance
generically, while keeping FP-specific uses the way they are, is the least
bad option.

Built-in floating point operators, as well as libc/libm math functions, are
designed to propagate NaN correctly. This is not meant to be a thread about
NaN, and we need to be cautious to define the scope of the problem to be
solved from the outset. Ever-expanding discussions, in which issues such as
method names balloon into debates about the entire standard library, tend
to go nowhere.

The question here is about `==` specifically and how to accommodate partial
equivalence relations. For sanity, we start with the premise that NaN will
forever be as it is.

···

On Fri, Oct 20, 2017 at 1:22 AM, Jonathan Hull <jhull@gbis.com> wrote:

+1 for trapping unless using &==. In the case of ‘Float?’ we could also
map to nil.

This is probably a more appropriate discussion for evolution though...

On Oct 19, 2017, at 9:48 PM, Brent Royal-Gordon via swift-dev <swift-dev@swift.org> wrote:

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

D) Must floating-point IEEE-compliant equivalence be spelled `==`?

In my view, this is something open for debate. I see no reason why it
cannot be migrated to `&==` if it were felt that `==` *must* be a full
equivalence relation. I believe this is controversial, however.

I actually got partway through writing up a pitch on this yesterday, but
my opinion is that NaNs are so exceptional, and so prone to misuse, that we
ought to treat them like integer arithmetic overflows: trap when they're
detected, unless you use an `&` variant operator which indicates you know
what you're doing.

I strongly suspect that, in practice, most float-manipulating code is not
prepared to handle NaN and will not do anything sensible in its presence.
For example, Apple platforms use floating-point types for geometry, color
components, GPS locations, etc. Very little of this code will do anything
sensible in the presence of a NaN. Arguably, it'd be better to exclude them
through the type system, but I don't think that's a realistic
possibility—we would need to have done that in a more source-break-friendly
era. But that doesn't have to mean we're completely stuck.

Differing behavior in generic and concrete contexts is simply too subtle to be understandable to the reader.

Hardly more subtle than the current “Equatable works like this, with these strong guarantees. Oh, except for some cases it doesn’t, in which case ¯\_(ツ)_/¯”

I'm not saying that the status quo is a superior alternative.

However, one option is to _weaken_ the guarantees of Equatable such that it guarantees only partial equivalence for `==`. From the perspective of documented semantics, it's not subtle at all but a giant hammer of a change. However, from an actual what-does-the-implementation-do standpoint, it would be acknowledging what is already true. Only code that is already broken when used with floating-point values would become formally "incorrect" in the sense of relying on semantics that are then no longer guaranteed.

Such a solution would avoid, as you might say, perpetuating the ¯\_(ツ)_/¯ approach to floating point.

I realize that Comparable admits an exception for FP. This is, IMO, a serious mistake and needs to be reversed. Equatable has no such exception and rightly so.

The clearest demonstrations of how flawed this approach is can be found in the Standard Library. You can throw a brick at it and hit an example of something that’s broken by the presence of .nan: random sort orders, == implementations that differ based on the identify of the buffer,

In my view, if a sort algorithm cannot accommodate NaN, it's entirely acceptable to trap on NaN--and that is a trivial change.

I think this would be user-hostile. This isn’t like out-of-bounds subscript where it’s just not possible to reasonably proceed. NaNs crop up and people don’t expect them to trap when you sort – they expect them to sort to one end, like in Excel.

After all, NaN is unordered with respect to everything and one cannot sort the unsortable. And, as shown, the `Array.==` implementation is trivially fixable. The entire standard library can be made NaN-safe in like manner.

My point was, it’s not about what we can do in the standard library. The std lib only has a handful of methods and sure, we can fix them one by one. It’s about whether the standard library defines types and protocols such that it’s reasonable for programmers to use them to write and use generic algorithms correctly. I’m citing the existing std lib implementations as proof that it’s easy to make mistakes. And I think a more complicated approach, with more operators, more properties, more rules, won’t fix this problem.

arbitrary duplication in Set/Dictionary etc.

(I disagree that it's arbitrary. If NaN != NaN, then every NaN is properly unique.)

The important point to take from this is not “how do we fix the Standard Library?” but rather “these errors are easy to make” by anyone writing generic code using standard protocols. If the Standard Library can’t get these right, how can we expect others to? There are potentially far worse bugs that could result. A differently-written sorting algorithm could corrupt elements (because it relied on substitutability). Other sorting or searching algorithms could easily go into an infinite loop. These problems exist because the code relies on the documented behavior of the protocol, because if you can’t, then what is the point in documenting that behavior?

It's not that the standard library *can't* get these right, but that it currently *doesn't*, because it documents one set of semantics but implements another, then relies on documented semantics that it knows it does not implement. We both agree that this needs to be fixed.

The question here is whether it is to be fixed by sticking to the documented semantic guarantees of `==` and bringing all implementations into proper conformance, or alternatively sticking to the implemented behavior of `==` and aligning the documented semantic guarantees to that.

I don’t support solutions such as adding a property indicating “containsExceptionalValues” (whatever that means), and expecting every author of a generic algorithm that uses Equatable to remember to call it, and craft custom paranoid behavior (if there is any reasonable behavior) based on it. With recursive conformance landed on master, we finally have a generics system where writing algorithms against Collection can be considered approachable by ordinary users. You no longer have to know things like how Collection.SubSequence needs to be constrained to also be a Collection – it just is. We would be negating this good work to now introduce a whole new set of gotchas that we expect people to know (without the type system even helping them in this case) about how some types, including standard library types, flout the documented rules for Equatable and Comparable, and that you need to use one of a handful of properties to hack in special cases to handle it.

The gotchas aren't new; they arise when using floating point values, originate with the IEEE definition of floating point equivalence, and exist in some form in every language that has implemented collections of floating point values. Crucially, they exist today in Swift; only, we haven't documented it.

And as a user of algorithms, what should you do? If a generic algorithm doesn’t document how it handles these special cases, should you assume it doesn’t? Check the code? Experiment to find out?

This problem also spreads, virus-like, once we have conditional conformance that makes containers equatable when their elements are. [Double] would need to propagate it’s elements’ “exceptionality", to avoid problems with [Double]. Double? will have to do the same.

This isn't a _problem_. In fact, I consider this to be a key _feature_. Naturally, every protocol conformance (conditional or not) must implement all protocol requirements, so if we add additional requirements they must be implemented. What I'm saying here is that *it may be desirable* to have some protocol-based API to distinguish partial from full equivalence relations. If you accept that premise, then it is the logical consequence that if you conditionally conform `Array` to `Equatable`, you will have to implement any new APIs, and in so doing, document how equivalence of arrays of floating point values relates to floating point equivalence. For me, this is a _good thing_: it documents _in code_ something that today is muddled through.

The explanation that a method on `Float` is a "floating-point context" but a method on `[Float]` is *not a "floating point context"* is, IMO, indefensible.

Nevertheless, I will attempt to defend it :)

I find it odd that violating the documented requirements of a protocol is considered defensible, but expecting types comply with those requirements is indefensible. A principled stance would be to say that Float shouldn’t conform to Equatable (because… it doesn’t!) and requiring all calls to supply a predicate (and maybe amending types like Dictionary to allow you to supply one). That won’t fly though – users would complain – so instead we are in this murky ground.

I don't think we should defend violating the documented requirements of a protocol. Either (a) Float should not conform to Equatable (agree, this is a non-starter); (b) how Float conforms to Equatable should be brought into conformance with documented semantics (your stance); or (c) what semantics are documented should be brought into alignment with how conformance is actually implemented (my stance). Naturally, in the last case, additional APIs should be added as needed to make such reduced semantic guarantees useful for generic algorithms.

Later in the thread, you mention a possible fix for sort:

`sort()` is problematic, but not if a custom predicate is supplied.

So, we are potentially trading off one subtlety (that < behaves differently in generic and non-generic contexts) for another (that you need to know that you need to pass in a special predicate for sorting, or you get nonsense results). Knowing when an algorithm requires you to supply a predicate (like sort) vs when handling for the special case is built in (like equatable) seems far worse complication to me than knowing one rule: that generically when constrained to Comparable, Float adheres to the requirements of Comparable. Always. That is a consistent rule that you need to learn once and that doesn’t vary depending on which algorithm you’re using.

I would argue that Float should _always_ adhere to the requirements of Comparable, in all contexts. The question is, rather: what can be the requirements of Comparable such that Float can always adhere to them?

Another alternative proposed in previous threads is to give Comparable an additional operator (<=> or .compare(to:) that will always enforce a total ordering, and sort can use that. This is, afaict, C#’s solution – double.NaN < 1.0, 1.0 < double.NaN and double.NaN == double.NaN all return false, but Comparer<double>.Default.compare returns -1, 1 and 0 respectively.

This is, essentially, the endpoint of what I'm proposing.

Equatable would vend (modulo bikeshedding):
`==`, a partial equivalence relation
`~`, a full equivalence relation
`containsExceptionalValues` (yes, this is a deliberately terrible name, because it's meant to go through bikeshedding), a Boolean value to indicate whether `==` is the same as `~`

Comparable would vend (modulo bikeshedding):
`<`, `>`, <=`, `>=`, defined as now
`<=>`, as in C# `compare` (or maybe, to emphasize the point, `<~>`)
`containsExceptionalValues`, inherited from `Equatable`, to document the relationship between `<` (etc.) and the spaceship operator

This looks to me to be an absurd mess of operations, none of which will have much hope of being used in a coherent fashion by most people. Should I use == or ~ here? What are the rules again? Will people remember to not use < when they really need <=>? Probably not. Did the author of this framework I’m using remember? Dunno.

The C# model of compare works because < is not available generically. There is no choice between < and <=>, and so the model is simple and easily understood by both algorithm implementors and users. And if you need a different ordering, you can supply your own custom comparator. As far as I can tell, it’s a good model and users are happy with it. Swift is different, since the concrete < is exposed to the generic implementation, but having two possibilities and expecting users to pick is IMO a bad idea. Hence the proposed fix that Float’s Comparable.< is required to be a total order, per the requirements of Comparable, essentially giving us the C# model.

This is essentially a backdoor to having one comparison concretely, and another generically, except whether you get that different behavior is left to the whim of the algorithm’s implementor, depending on whether they used < or <=>. And what should sets use? Should a hashed set use == and allow multiple nan, but an ordered set use <=> and exclude them?

This should absolutely be in the control of the implementor and not the type system. It's superior to have the question of whether `MyCollection.==` uses floating-point equivalence or not be a decision that's made in the implementation of `==` and then explicitly recorded in code (`containsExceptionalValues`); it *shouldn't* be a subtlety of whether it's `MyCollection<T> where T : Equatable` that conditionally conforms to `Equatable` or instead `MyCollection<T> where T : FloatingPoint`.

- `index(of:)` works perfectly sensibly without such a relation; if no NaN is equal to any other NaN, `index(of: .nan)` is appropriately `nil`.

It works sensibly when you are calling it from a concrete context, and you know that you have is a collection of floating point elements.

But it does not work sensibly if you are using index(of:) as a building block in a wider generic algorithm on Collection, which relies on the rules set out by Equatable. I find myself using index(of:) within generic algorithms frequently, and I really don’t know how I should reason about its use if it’s accepted that the rules of Equatable just don’t hold in the all cases. When operating generically more than one level deep like this, you really need to be able to rely on those rules.

But, since `index(of:)` is a generic extension method itself, having different behaviors for `==` in the generic vs. concrete context would mean that the behavior of this method *would no longer make sense if you are using `index(of:)` from a concrete context* where you know you have a collection of floating-point elements. And while one may write generic algorithms frequently, I'll bet that the average user of Swift calls `index(of:)` in concrete contexts far more frequently than in generic ones.

To follow your line of argument, you'd also want `index(of:)` to have different behaviors in the generic vs. concrete context,

There is no concrete context for index(of:), really. There is only a generic context inside its implementation, since it uses Equatable, even if calling it directly from concrete array of floats. If you want “concrete” behavior, you would need to use index(where:), and pass in your own predicate (again, same model as with C#).

···

On Oct 24, 2017, at 6:48 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Tue, Oct 24, 2017 at 1:55 PM, Ben Cohen <ben_cohen@apple.com> wrote:

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

just like `==` does. Carrying this out consistently would ultimately require having an alternative implementation on every collection type of every generic algorithm such as `index(of:)`, using the constraint `Element : FloatingPoint`.

This leads to the other argument against having algorithms generic over Equatable behave differently to ones on Float/Double (or FloatingPoint): that when you take your FP algorithm, and then loosen the constraints, it changes behavior in a way that may no longer be correct.

But we aready have the opposite case, where algorithms are written generically and then don’t work correctly for floats. I would argue that is far more harmful and subtle an issue. I honestly don’t think the situation where someone writes an algorithm first against floats, then generalizes it, is a very common path to creating a generic algorithm. The inverse – where someone writes an algorithm using Equatable, and then people use it with floats – is certainly common, hence this discussion.

There really is no way to square this circle. Every option is going to have downsides. We have to balance correctness, least surprise/most expected behavior for most people, and consistency. For me, making generic use of Equatable and Comparable stick to the documented conformance generically, while keeping FP-specific uses the way they are, is the least bad option.

Hi Ben,

Out of curiosity, what do you think about breaking Equatable into two
protocols: Equatable and Substitutable? The former would be defined in
terms of mathematics, while the latter is simple to define and usable by
collections, etc. For example, while +0 equals -0, they are not
substitutable (because 1.0 / +0.0 == +inf BUT 1.0 / -0.0 == -inf). In the
case of NaNs, substitutability would depend on the NaN payload being the
same or not (because some later subsystem might interpret the NaN payload).

Finally, and unless I’m missing something, floating-point
“substitutability” would translate to bitwise equality at the hardware
level, which is nice from a performance perspective.
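A minimal sketch of what that split might look like (the `Substitutable` protocol and the method name are hypothetical):

```swift
// Hypothetical protocol: substitutability as bitwise equality.
protocol Substitutable {
    func isSubstitutable(for other: Self) -> Bool
}

extension Double: Substitutable {
    func isSubstitutable(for other: Double) -> Bool {
        // Bitwise equality distinguishes +0.0 from -0.0 and NaNs
        // with different payloads, unlike `==`.
        return bitPattern == other.bitPattern
    }
}

// (0.0).isSubstitutable(for: -0.0)  // false, even though 0.0 == -0.0
```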

As Steve pointed out, floating-point "substitutability" as you define it is
unlikely to be desired in most contexts; differentiation by NaN payload or
by decimal value representation is highly unintuitive, and it would be
intolerable for generic `.contains(.nan)` to sometimes return true and
sometimes false depending on NaN payload.

On the topic of sorting, we could do the same thing, where we break
Comparable into two protocols: Comparable and Sortable. The former would be
defined in terms of mathematics, while the latter has no mathematical
obligation and therefore values like +0 and -0 can be consistently sorted.
The same goes for NaNs with payloads (they can be sorted). And finally, at
the hardware level, this should be quite efficient because of how the
floating point bits are laid out (in that an integer comparison of FP bits
should work for this Sortable proposal).
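The bit-level trick alluded to here can be sketched as follows (the `Sortable` protocol itself remains hypothetical; the key transformation is the standard IEEE-754 total-order mapping):

```swift
extension Double {
    // Maps the bit pattern to an unsigned key whose integer order
    // gives a total order over all values, including -0.0 < +0.0 and
    // NaNs at the extremes: flip all bits of negative values, and set
    // the sign bit of non-negative ones.
    var totalOrderKey: UInt64 {
        let bits = bitPattern
        return bits & (1 << 63) != 0 ? ~bits : bits | (1 << 63)
    }
}

let values: [Double] = [1.5, .nan, -2.0, 0.0, -0.0]
let sorted = values.sorted { $0.totalOrderKey < $1.totalOrderKey }
// -2.0, -0.0, 0.0, 1.5, nan
```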

Floating point values can certainly be consistently sorted as a sequence
of bits, but again, it is unlikely that anyone would want such a sort of
their floating point values.

But as to the topic of splitting protocols, it is essentially the design
adopted by Rust. During discussion of Ben's original proposal, some core
team members argued very eloquently against such a design based on what's
happened in Rust. Namely:

- If Float is not Equatable but instead SomethingOtherThanEquatable (in
Rust, it's called `PartialEq`), then we will have officially done our job
in terms of stdlib semantic guarantees. However, in practice, people *want*
to write generic algorithms that work with both integers and floating point
values.
- Based on the Rust experience, what people will do when they realize that
Float doesn't conform to Equatable is instead to write generic algorithms
that operate on SomethingOtherThanEquatable while assuming that integer
`==` and floating point `==` will work the same way.
- The result is that you now have two protocols instead of one, but most
people still use only one--and still use it incorrectly. The existing
problem has simply been transferred from Equatable to
SomethingOtherThanEquatable. The new Equatable accomplishes nothing in
terms of helping people write better code.
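To make that failure mode concrete, here is roughly how the split would look if transplanted into Swift (all names are hypothetical, modeled on Rust's `PartialEq`/`Eq`):

```swift
// Hypothetical split: `==` on the weaker protocol need not be reflexive.
protocol PartiallyEquatable {
    static func == (lhs: Self, rhs: Self) -> Bool
}
protocol FullyEquatable: PartiallyEquatable {}  // promises reflexivity

// The pitfall seen in Rust: to accept Float, authors constrain to the
// weaker protocol, then silently assume the stronger semantics anyway.
func firstIndex<C: Collection>(of target: C.Element, in c: C) -> C.Index?
    where C.Element: PartiallyEquatable {
    // Never finds .nan, because .nan == .nan is false.
    return c.firstIndex { $0 == target }
}
```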

···

On Tue, Oct 24, 2017 at 4:28 PM, David Zarzycki <dave@znu.io> wrote:

On Oct 24, 2017, at 14:55, Ben Cohen via swift-dev <swift-dev@swift.org> wrote:

Differing behavior in generic and concrete contexts is simply too subtle
to be understandable to the reader.

Hardly more subtle than the current “Equatable works like this, with
these strong guarantees. Oh, except for some cases it doesn’t, in which
case ¯\_(ツ)_/¯”

I'm not saying that the status quo is a superior alternative.

However, one option is to _weaken_ the guarantees of Equatable such that
it guarantees only partial equivalence for `==`. From the perspective of
documented semantics, it's not subtle at all but a giant hammer of a
change. However, from an actual what-does-the-implementation-do
standpoint, it would be acknowledging what is already true. Only code that
is already broken when used with floating-point values would become
formally "incorrect" in the sense of relying on semantics that are then no
longer guaranteed.

Such a solution would avoid, as you might say, perpetuating the ¯\_(ツ)_/¯
approach to floating point.

I realize that Comparable admits an exception for FP. This is, IMO, a

serious mistake and needs to be reversed. Equatable has no such exception
and rightly so.

The clearest demonstrations of how flawed this approach is can be found
in the Standard Library. You can throw a brick at it and hit an example of
something that’s broken by the presence of .nan: random sort orders, ==
implementations that differ based on the identity of the buffer,

In my view, if a sort algorithm cannot accommodate NaN, it's entirely
acceptable to trap on NaN--and that is a trivial change.

I think this would be user-hostile. This isn’t like out-of-bounds
subscript where it’s just not possible to reasonably proceed. NaNs crop up
and people don’t expect them to trap when you sort – they expect them to
sort to one end, like in Excel.

Honestly, I don't know that most users have thought about this possibility
at all. Sure, a sort that matches IEEE total order _might_ be justifiable.
But users are as likely to expect that the last item in the sorted
collection will be the greatest and that the first item in the sorted
collection will be the smallest. Now, you can say that NaN compares larger
than everything, everywhere. But the moment that they try to plug that last
element into, say, an AppKit UI function, they're toast.

I certainly disagree with ideas of trapping on NaN inside `==` or similar
functions, but I really do think that an argument can be made that it is
not reasonable to proceed with sorting an array that contains NaN.

After all, NaN is unordered with respect to everything and one cannot sort
the unsortable. And, as shown, the `Array.==` implementation is trivially
fixable. The entire standard library can be made NaN-safe in like manner.
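For reference, the elementwise shape of such a fix looks roughly like this (a sketch of the intended semantics, not the literal patch in PR #12503):

```swift
// NaN-transparent array comparison: defer entirely to Element.==,
// with no short-circuit on buffer identity, so [Double.nan] ==
// [Double.nan] is consistently false regardless of storage.
func elementwiseEqual<T: Equatable>(_ lhs: [T], _ rhs: [T]) -> Bool {
    guard lhs.count == rhs.count else { return false }
    return zip(lhs, rhs).allSatisfy { $0 == $1 }
}
```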

My point was, it’s not about what we can do in the standard library. The
std lib only has a handful of methods and sure, we can fix them one by one.
It’s about whether the standard library defines types and protocols such
that it’s reasonable for programmers to use them to write and use generic
algorithms correctly. I’m citing the existing std lib implementations as
proof that it’s easy to make mistakes. And I think a more complicated
approach, with more operators, more properties, more rules, won’t fix this
problem.

Well, to my mind, this problem you state really works out to:

(a) People expect generic algorithms that operate on Comparable types to
work correctly with floating-point types
(b) Generic algorithms that operate on Comparable types don't work
correctly with floating-point types unless the author is very, very careful
(c) People shouldn't have to be very, very careful to write working generic
algorithms that work with floating-point types

Which, in turn, really boils down to:

(d) People expect floating-point types not to have numerous unintuitive
(but learnable) properties, including NaN being unordered
(e) Floating-point types have numerous unintuitive (but learnable)
properties, including NaN being unordered

The reason I'm writing to swift-dev (rather than evolution) is that my
interest is in fixing the standard library. I'm not even convinced that
this problem you state is fixable, at least on your terms. In the interest
of not increasing the API surface area, you would propose to blow away (e)
in the generic but not concrete context. Now, while it's true that an
alternative to increasing the API surface area is to have the same API
exhibit context-specific behaviors, that certainly isn't any less
complicated conceptually, as we would then be faced with the notion that
floating-point types both have and do not have numerous unintuitive
properties, depending on the context in which they are used.

arbitrary duplication in Set/Dictionary etc.

(I disagree that it's arbitrary. If NaN != NaN, then every NaN is properly
unique.)

The important point to take from this is not “how do we fix the Standard
Library?” but rather “these errors are easy to make” by anyone writing
generic code using standard protocols. If the Standard Library can’t get
these right, how can we expect others to? There are potentially far worse
bugs that could result. A differently-written sorting algorithm could
corrupt elements (because it relied on substitutability). Other sorting or
searching algorithms could easily go into an infinite loop. These problems
exist because the code relies on the documented behavior of the protocol;
if you cannot rely on that behavior, then what is the point in documenting it?

It's not that the standard library *can't* get these right, but that it
currently *doesn't*, because it documents one set of semantics but
implements another, then relies on documented semantics that it knows it
does not implement. We both agree that this needs to be fixed.

The question here is whether it is to be fixed by sticking to the
documented semantic guarantees of `==` and bringing all implementations
into proper conformance, or alternatively sticking to the implemented
behavior of `==` and aligning the documented semantic guarantees to that.

I don’t support solutions such as adding a property indicating
“containsExceptionalValues” (whatever that means), and expecting every
author of a generic algorithm that uses Equatable to remember to call it,
and craft custom paranoid behavior (if there is any reasonable behavior)
based on it. With recursive conformance landed on master, we finally have a
generics system where writing algorithms against Collection can be
considered approachable by ordinary users. You no longer have to know
things like how Collection.SubSequence needs to be constrained to also be a
Collection – it just is. We would be negating this good work to now
introduce a whole new set of gotchas that we expect people to know (without
the type system even helping them in this case) about how some types,
including standard library types, flout the documented rules for Equatable
and Comparable, and that you need to use one of a handful of properties to
hack in special cases to handle it.

The gotchas aren't new; they arise when using floating point values,
originate with the IEEE definition of floating point equivalence, and exist
in some form in every language that has implemented collections of floating
point values. Crucially, they exist today in Swift; only, we haven't
documented it.

And as a user of algorithms, what should you do? If a generic algorithm

doesn’t document how it handles these special cases, should you assume it
doesn’t? Check the code? Experiment to find out?

This problem also spreads, virus-like, once we have conditional
conformance that makes containers equatable when their elements are.
[Double] would need to propagate its elements’ “exceptionality”, to
avoid problems with [[Double]]. Double? will have to do the same.

This isn't a _problem_. In fact, I consider this to be a key _feature_.
Naturally, every protocol conformance (conditional or not) must implement
all protocol requirements, so if we add additional requirements they must
be implemented. What I'm saying here is that *it may be desirable* to have
some protocol-based API to distinguish partial from full equivalence
relations. If you accept that premise, then it is the logical consequence
that if you conditionally conform `Array` to `Equatable`, you will have to
implement any new APIs, and in so doing, document how equivalence of arrays
of floating point values relates to floating point equivalence. For me,
this is a _good thing_: it documents _in code_ something that today is
muddled through.

The explanation that a method on `Float` is a "floating-point context" but

a method on `[Float]` is *not a "floating point context"* is, IMO,
indefensible.

Nevertheless, I will attempt to defend it :)

I find it odd that violating the documented requirements of a protocol is
considered defensible, but expecting types to comply with those requirements
is indefensible. A principled stance would be to say that Float shouldn’t
conform to Equatable (because… it doesn’t!) and requiring all calls to
supply a predicate (and maybe amending types like Dictionary to allow you
to supply one). That won’t fly though – users would complain – so instead
we are in this murky ground.

I don't think we should defend violating the documented requirements of a
protocol. Either (a) Float should not conform to Equatable (agree, this is
a non-starter); (b) how Float conforms to Equatable should be brought into
conformance with documented semantics (your stance); or (c) what semantics
are documented should be brought into alignment with how conformance is
actually implemented (my stance). Naturally, in the last case, additional
APIs should be added as needed to make such reduced semantic guarantees
useful for generic algorithms.

Later in the thread, you mention a possible fix for sort:

`sort()` is problematic, but not if a custom predicate is supplied.

So, we are potentially trading off one subtlety (that < behaves
differently in generic and non-generic contexts) for another (that you need
to know that you need to pass in a special predicate for sorting, or you
get nonsense results). Knowing when an algorithm requires you to supply a
predicate (like sort) vs when handling of the special case is built in
(like equatable) seems a far worse complication to me than knowing one rule:
that generically when constrained to Comparable, Float adheres to the
requirements of Comparable. Always. That is a consistent rule that you need
to learn once and that doesn’t vary depending on which algorithm you’re
using.

I would argue that Float should _always_ adhere to the requirements of
Comparable, in all contexts. The question is, rather: what can be the
requirements of Comparable such that Float can always adhere to them?

Another alternative proposed in previous threads is to give Comparable an

additional operator (`<=>` or `.compare(to:)`) that will always enforce a total
ordering, and sort can use that. This is, afaict, C#’s solution –
double.NaN < 1.0, 1.0 < double.NaN and double.NaN == double.NaN all return
false, but Comparer<double>.Default.compare returns -1, 1 and 0
respectively.

This is, essentially, the endpoint of what I'm proposing.

Equatable would vend (modulo bikeshedding):
`==`, a partial equivalence relation
`~`, a full equivalence relation
`containsExceptionalValues` (yes, this is a deliberately terrible name,
because it's meant to go through bikeshedding), a Boolean value to indicate
whether `==` is the same as `~`

Comparable would vend (modulo bikeshedding):
`<`, `>`, `<=`, `>=`, defined as now
`<=>`, as in C# `compare` (or maybe, to emphasize the point, `<~>`)
`containsExceptionalValues`, inherited from `Equatable`, to document the
relationship between `<` (etc.) and the spaceship operator
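Spelled out as a Swift sketch (every name here is a placeholder pending bikeshedding, and the treatment of NaN payloads under `~` is one possible choice):

```swift
infix operator ~ : ComparisonPrecedence

protocol ProposedEquatable {
    static func == (lhs: Self, rhs: Self) -> Bool  // partial equivalence
    static func ~  (lhs: Self, rhs: Self) -> Bool  // full equivalence
    static var containsExceptionalValues: Bool { get }  // does == differ from ~ ?
}

extension Double {
    static func ~ (lhs: Double, rhs: Double) -> Bool {
        // One possible full equivalence: all NaNs in a single class.
        if lhs.isNaN && rhs.isNaN { return true }
        return lhs == rhs
    }
    static var containsExceptionalValues: Bool { return true }
}
```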

This looks to me to be an absurd mess of operations, none of which will
have much hope of being used in a coherent fashion by most people. Should I
use == or ~ here? What are the rules again? Will people remember to not use
< when they really need <=>? Probably not. Did the author of this framework
I’m using remember? Dunno.

The syntax here is not the point (or if it is, it can be bikeshedded). The
point I'm trying to make is that what you're criticizing as _incoherent_ is
also _inescapable_. Floating-point types have a notion of equivalence that
isn't full equivalence. For certain use cases (both concrete and generic),
we want that partial equivalence, while for other use cases (both concrete
and generic), we truly want full equivalence. To work with floating-point
types correctly, a user must know that there is a difference between the
two. If there is no hope of "most people" understanding this distinction
when one relation is named `==` and the other is named `~`, then _a
fortiori_ there is no hope of "most people" understanding the distinction
when they're conflated into one operator `==` that has different behaviors
in different contexts.

The C# model of compare works because < is not available generically. There

is no choice between < and <=>, and so the model is simple and easily
understood by both algorithm implementors and users. And if you need a
different ordering, you can supply your own custom comparator. As far as I
can tell, it’s a good model and users are happy with it. Swift is
different, since the concrete < *is* exposed to the generic
implementation, but having two possibilities and expecting users to pick is
IMO a bad idea. Hence the proposed fix that Float’s Comparable.< is
required to be a total order, per the requirements of Comparable,
essentially giving us the C# model.

A true C# model would be fine, but the key point of that model to my mind
is that partial equivalence and full equivalence are spelled differently
(that is, `==` and `Equals`, respectively). It would not work with IEEE
`==` being spelled the same way as Comparable `==`. If we were to rename
the IEEE operation `&==` instead, then we'd functionally have a design
that's broadly similar to the earlier version, only with different names:

Equatable would vend `==`, a full equivalence relation (and `!=`)
Comparable would vend `<`, `>`, `<=`, `>=`, now operators that reflect a
total order over the set of all values; and maybe `<=>`
Floating point would additionally vend `&==` and `&<` (and `&!=`,
`&>`, `&<=`, `&>=`)

One key difference here would be that the partial equivalence relation
would now only be found on floating-point types, and it would not be
possible to write a generic algorithm that operates on any partially
equatable or equatable type. But the other--and major--issues would be (a)
that all concrete uses of floating-point comparison operators would have to
be migrated to append an extra `&`; and (b) this syntax suggests that most
users want to use `==` *instead of* `&==`, which I'm not sure is the
case--and certainly isn't the case if they're trying to do the same things
they're used to doing with floating-point values in other languages.
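Worth noting: the standard library already exposes IEEE-754 totalOrder as `FloatingPoint.isTotallyOrdered(belowOrEqualTo:)`, so a C#-`Comparer`-style ordering is available today without touching `==`. For example:

```swift
// Strict "less than in total order": a < b iff !(b <= a) under totalOrder.
let values: [Double] = [1.0, .nan, -0.0, 0.0, -.infinity]
let ordered = values.sorted { !$1.isTotallyOrdered(belowOrEqualTo: $0) }
// -inf, -0.0, 0.0, 1.0, nan — NaN lands at the positive end
```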

This is essentially a backdoor to having one comparison concretely, and
another generically, except whether you get that different behavior is left
to the whim of the algorithm’s implementor, depending on whether they used
< or <=>. And what should sets use? Should a hashed set use == and allow
multiple nan, but an ordered set use <=> and exclude them?

This should absolutely be in the control of the implementor and not the
type system. It's superior to have the question of whether
`MyCollection.==` uses floating-point equivalence or not be a decision
that's made in the implementation of `==` and then explicitly recorded in
code (`containsExceptionalValues`); it *shouldn't* be a subtlety of whether
it's `MyCollection<T> where T : Equatable` that conditionally conforms to
`Equatable` or instead `MyCollection<T> where T : FloatingPoint`.

- `index(of:)` works perfectly sensibly without such a relation; if no NaN
is equal to any other NaN, `index(of: .nan)` is appropriately `nil`.

It works sensibly when you are calling it from a concrete context, and
you know that what you have is a collection of floating point elements.

But it does not work sensibly if you are using index(of:) as a building
block in a wider generic algorithm on Collection, which relies on the rules
set out by Equatable. I find myself using index(of:) within generic
algorithms frequently, and I really don’t know how I should reason about
its use if it’s accepted that the rules of Equatable just don’t hold in
all cases. When operating generically more than one level deep like this,
you really need to be able to rely on those rules.

But, since `index(of:)` is a generic extension method itself, having
different behaviors for `==` in the generic vs. concrete context would mean
that the behavior of this method *would no longer make sense if you are
using `index(of:)` from a concrete context* where you know you have a
collection of floating-point elements. And while one may write generic
algorithms frequently, I'll bet that the average user of Swift calls
`index(of:)` in concrete contexts far more frequently than in generic ones.
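For concreteness, this is how IEEE `==` plays out for this method today (shown with the later `firstIndex(of:)` spelling; at the time of this thread it was `index(of:)`):

```swift
let xs: [Double] = [1.0, .nan, 2.0]

// NaN != NaN under IEEE ==, so the element is never matched:
xs.firstIndex(of: .nan)   // nil
xs.contains(.nan)         // false

// A predicate sidesteps == entirely and finds the element:
xs.contains { $0.isNaN }  // true
```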

To follow your line of argument, you'd also want `index(of:)` to have
different behaviors in the generic vs. concrete context,

There is no concrete context for index(of:), really. There is only a
generic context inside its implementation, since it uses Equatable, even if
you call it directly on a concrete array of floats. If you want “concrete”
behavior, you would need to use index(where:), and pass in your own
predicate (again, same model as with C#).

This would make no sense to a user of `index(of:)` in the concrete context
unless that user already groks the difference between concrete partial
equivalence and generic full equivalence--a situation you've already
stipulated there isn't much hope of.

just like `==` does. Carrying this out consistently would ultimately

···

On Tue, Oct 24, 2017 at 10:08 PM, Ben Cohen <ben_cohen@apple.com> wrote:

On Oct 24, 2017, at 6:48 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
On Tue, Oct 24, 2017 at 1:55 PM, Ben Cohen <ben_cohen@apple.com> wrote:

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

require having an alternative implementation on every collection type of
every generic algorithm such as `index(of:)`, using the constraint `Element
: FloatingPoint`.

This leads to the other argument against having algorithms generic over
Equatable behave differently to ones on Float/Double (or FloatingPoint):
that when you take your FP algorithm, and then loosen the constraints, it
changes behavior in a way that may no longer be correct.

But we already have the opposite case, where algorithms are written
generically and then don’t work correctly for floats. I would argue that is
far more harmful and subtle an issue. I honestly don’t think the situation
where someone writes an algorithm first against floats, then generalizes
it, is a very common path to creating a generic algorithm. The inverse –
where someone writes an algorithm using Equatable, and then people use it
with floats – is certainly common, hence this discussion.

There really is no way to square this circle. Every option is going to
have downsides. We have to balance correctness, least surprise/most
expected behavior for most people, and consistency. For me, making generic
use of Equatable and Comparable stick to the documented conformance
generically, while keeping FP-specific uses the way they are, is the least
bad option.

I support Jonathan’s argument. If Swift wants to trap on NaN to improve self-consistency and simplicity, then the tradeoff might be worth it. The alternative, teaching the Equality protocol about NaNs, feels like “the tail wagging the dog".

In short: what IEEE requires of floating-point hardware is separable from IEEE’s opinions about language/library design.

Dave

···

On Oct 20, 2017, at 07:51, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

On Fri, Oct 20, 2017 at 1:22 AM, Jonathan Hull <jhull@gbis.com <mailto:jhull@gbis.com>> wrote:
+1 for trapping unless using &==. In the case of ‘Float?’ we could also map to nil.

This is probably a more appropriate discussion for evolution though...

On Oct 19, 2017, at 9:48 PM, Brent Royal-Gordon via swift-dev <swift-dev@swift.org <mailto:swift-dev@swift.org>> wrote:

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org <mailto:swift-dev@swift.org>> wrote:

D) Must floating-point IEEE-compliant equivalence be spelled `==`?

In my view, this is something open for debate. I see no reason why it cannot be migrated to `&==` if it were felt that `==` *must* be a full equivalence relation. I believe this is controversial, however.

I actually got partway through writing up a pitch on this yesterday, but my opinion is that NaNs are so exceptional, and so prone to misuse, that we ought to treat them like integer arithmetic overflows: trap when they're detected, unless you use an `&` variant operator which indicates you know what you're doing.

I strongly suspect that, in practice, most float-manipulating code is not prepared to handle NaN and will not do anything sensible in its presence. For example, Apple platforms use floating-point types for geometry, color components, GPS locations, etc. Very little of this code will do anything sensible in the presence of a NaN. Arguably, it'd be better to exclude them through the type system, but I don't think that's a realistic possibility—we would need to have done that in a more source-break-friendly era. But that doesn't have to mean we're completely stuck.

Built-in floating point operators, as well as libc/libm math functions, are designed to propagate NaN correctly. This is not meant to be a thread about NaN, and we need to be careful to define the scope of the problem to be solved from the outset. Ever-expanding discussions, where issues such as method names turn into debates about the entire standard library, go nowhere.

The question here is about `==` specifically and how to accommodate partial equivalence relations. For sanity, we start with the premise that NaN will forever be as it is.
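For illustration, the quiet-NaN behavior being taken as a fixed premise here:

```swift
let x = 0.0 / 0.0   // quiet NaN; no error is raised
let y = x + 1.0     // NaN propagates silently through arithmetic
y.isNaN             // true
x == x              // false: IEEE == is not reflexive on NaN
x != x              // true: the classic NaN self-test
```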

Hi Xiaodi,

Based on what you wrote above, I am definitely *not* proposing for the Rust model. I was proposing something different where both fixed and floating point types would continue to conform to Equatable and Comparable, and those protocols would continue to be defined in terms of mathematics.

What I am proposing is that collection classes, sorting algorithms, etc. use protocols that are *not* defined in terms of mathematics. While this may be temporarily surprising to some, I think the net result is easy to understand, easy to reason about, and defensible.

Finally, on the specific topic of “.contains(.nan)”, I think this is both solvable and overblown. Just replace “.nan” with “.nan(.default)” and people will quickly understand and move on.

Dave

···

On Oct 24, 2017, at 22:04, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

On Tue, Oct 24, 2017 at 4:28 PM, David Zarzycki <dave@znu.io <mailto:dave@znu.io>> wrote:

On Oct 24, 2017, at 14:55, Ben Cohen via swift-dev <swift-dev@swift.org <mailto:swift-dev@swift.org>> wrote:

There really is no way to square this circle. Every option is going to have downsides. We have to balance correctness, least surprise/most expected behavior for most people, and consistency. For me, making generic use of Equatable and Comparable stick to the documented conformance generically, while keeping FP-specific uses the way they are, is the least bad option.

Hi Ben,

Out of curiosity, what do you think about breaking Equatable into two protocols: Equatable and Substitutable? The former would be defined in terms of mathematics, while the latter is simple to define and usable by collections, etc. For example, while +0 equals -0, they are not substitutable (because 1.0 / +0.0 == +inf BUT 1.0 / -0.0 == -inf). In the case of NaNs, substitutability would depend on the NaN payload being the same or not (because some later subsystem might interpret the NaN payload).

Finally, and unless I’m missing something, floating-point “substitutability” would translate to bitwise equality at the hardware level, which is nice from a performance perspective.
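A small sketch of what bit-level substitutability would distinguish, using the standard library's NaN-payload initializer and `bitPattern`:

```swift
let a = Double(nan: 1, signaling: false)  // quiet NaN, payload 1
let b = Double(nan: 2, signaling: false)  // quiet NaN, payload 2

a.isNaN && b.isNaN            // true: equivalent for most purposes...
a.bitPattern == b.bitPattern  // false: ...but not bitwise-substitutable
(-0.0).bitPattern == 0.0.bitPattern  // false: the zeros differ too
```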

As Steve pointed out, floating-point "substitutability" as you define it is unlikely to be desired in most contexts; differentiation by NaN payload or by decimal value representation is highly unintuitive, and it would be intolerable for generic `.contains(.nan)` to sometimes return true and sometimes false depending on NaN payload.

On the topic of sorting, we could do the same thing, where we break Comparable into two protocols: Comparable and Sortable. The former would be defined in terms of mathematics, while the latter has no mathematical obligation and therefore values like +0 and -0 can be consistently sorted. The same goes of NaNs with payloads (they can be sorted). And finally, at the hardware level, this should be quite efficient because of how the floating point bits are laid out (in that an integer comparison of FP bits should work for this Sortable proposal).

Floating point values can certainly be consistently sorted as a sequence of bits, but again, it is unlikely that anyone would want such a sort of their floating point values.
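The bit-comparison idea does hold, modulo a sign fix-up; a hedged sketch (the `totalOrderKey` name is hypothetical) of deriving an IEEE-754 totalOrder sort key from the bit pattern:

```swift
// Map a Double to a UInt64 key whose unsigned ordering matches IEEE 754
// totalOrder: flip all bits of negative values; set the sign bit of
// non-negative ones.
func totalOrderKey(_ x: Double) -> UInt64 {
    let bits = x.bitPattern
    return bits & (1 << 63) != 0 ? ~bits : bits | (1 << 63)
}

let sorted = [Double.nan, 1.0, -0.0, 0.0, -.infinity]
    .sorted { totalOrderKey($0) < totalOrderKey($1) }
// -inf, -0.0, 0.0, 1.0, nan  (-0 before +0; NaN sorts to the end)
```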

But as to the topic of splitting protocols, it is essentially the design adopted by Rust. During discussion of Ben's original proposal, some core team members argued very eloquently against such a design based on what's happened in Rust. Namely:

- If Float is not Equatable but instead SomethingOtherThanEquatable (in Rust, it's called `PartialEq`), then we will have officially done our job in terms of stdlib semantic guarantees. However, in practice, people *want* to write generic algorithms that work with both integers and floating point values.
- Based on the Rust experience, what people will do when they realize that Float doesn't conform to Equatable is instead to write generic algorithms that operate on SomethingOtherThanEquatable while assuming that integer `==` and floating point `==` will work the same way.
- The result is that you now have two protocols instead of one, but most people still use only one--and still use it incorrectly. The existing problem has simply been transferred from Equatable to SomethingOtherThanEquatable. The new Equatable accomplishes nothing in terms of helping people write better code.

What about having the protocol hierarchy look like this? (All names subject to bikeshedding, of course)

// (These custom operators would need declarations before use:)
infix operator ?== : ComparisonPrecedence
infix operator ?< : ComparisonPrecedence

protocol MaybeEquatable {
  static func ?== (lhs: Self, rhs: Self) -> Bool?
}
protocol MostlyEquatable : MaybeEquatable {
  static func == (lhs: Self, rhs: Self) -> Bool
}
extension MostlyEquatable {
  static func ?== (lhs: Self, rhs: Self) -> Bool? { return lhs == rhs } // allows a `MostlyEquatable` or `Equatable` to function as a `MaybeEquatable` without any extra code
}
protocol Equatable : MostlyEquatable {} // purely a semantic difference, no extra syntax

protocol MaybeComparable : MaybeEquatable {
  static func ?< (lhs: Self, rhs: Self) -> Bool?
  // plus the rest of them
}
protocol MostlyComparable : MaybeComparable, MostlyEquatable {
  static func < (lhs: Self, rhs: Self) -> Bool
  // plus the rest of them
}
extension MostlyComparable {
  static func ?< (lhs: Self, rhs: Self) -> Bool? { return lhs < rhs } // allows a `MostlyComparable` or `Comparable` to function as a `MaybeComparable` without any extra code
  // plus the rest of them
}
protocol Comparable : MostlyComparable, Equatable {} // purely a semantic difference, no extra syntax

extension Double : MostlyComparable {
  static func ?== (lhs: Double, rhs: Double) -> Bool? {
    return lhs.isNaN || rhs.isNaN ? nil : lhs == rhs
  }
  static func ?< (lhs: Double, rhs: Double) -> Bool? {
    return lhs.isNaN || rhs.isNaN || (lhs.isInfinite && rhs.isInfinite && lhs.sign == rhs.sign) ? nil : lhs < rhs
  }
  static func == (lhs: Double, rhs: Double) -> Bool {
    // whatever current impl is
  }
  static func < (lhs: Double, rhs: Double) -> Bool {
    // whatever current impl is
  }
}

This would let people easily switch between the two kinds of "correct" generic comparisons (returning a `Bool?` for types that could have invalid comparisons and a `Bool` for types that can't), as well as easily opting into using IEEE-754's current "arguably incompatible with sane generic programming" behavior (by constraining T to "MostlyComparable" instead of "Comparable") if that's what they really want.

`Collection` could have different implementations of `==` depending on whether `Element` conforms to "MaybeEquatable" or "MostlyEquatable/Equatable", solving the "a = [Double.nan]; print(a == a) // prints true" issue.

This doesn't involve trapping on anything, so dunno what the performance implications would be. All the extra work would be contained in the "?*" functions, though, so at least existing code wouldn't suddenly get slower.

- Dave Sweeris

···

On Oct 24, 2017, at 9:06 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

On Tue, Oct 24, 2017 at 10:08 PM, Ben Cohen <ben_cohen@apple.com <mailto:ben_cohen@apple.com>> wrote:

On Oct 24, 2017, at 6:48 PM, Xiaodi Wu <xiaodi.wu@gmail.com <mailto:xiaodi.wu@gmail.com>> wrote:

On Tue, Oct 24, 2017 at 1:55 PM, Ben Cohen <ben_cohen@apple.com <mailto:ben_cohen@apple.com>> wrote:

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org <mailto:swift-dev@swift.org>> wrote:

Differing behavior in generic and concrete contexts is simply too subtle to be understandable to the reader.

Hardly more subtle than the current “Equatable works like this, with these strong guarantees. Oh, except for some cases it doesn’t, in which case ¯\_(ツ)_/¯”

I'm not saying that the status quo is a superior alternative.

However, one option is to _weaken_ the guarantees of Equatable such that it guarantees only partial equivalence for `==`. From the perspective of documented semantics, it's not subtle at all but a giant hammer of a change. However, from an actual what-does-the-implementation-do standpoint, it would be acknowledging what is already true. Only code that is already broken when used with floating-point values would become formally "incorrect" in the sense of relying on semantics that are then no longer guaranteed.

Such a solution would avoid, as you might say, perpetuating the ¯\_(ツ)_/¯ approach to floating point.

I realize that Comparable admits an exception for FP. This is, IMO, a serious mistake and needs to be reversed. Equatable has no such exception and rightly so.

The clearest demonstrations of how flawed this approach is can be found in the Standard Library. You can throw a brick at it and hit an example of something that’s broken by the presence of .nan: random sort orders, == implementations that differ based on the identity of the buffer,

In my view, if a sort algorithm cannot accommodate NaN, it's entirely acceptable to trap on NaN--and that is a trivial change.

I think this would be user-hostile. This isn’t like out-of-bounds subscript where it’s just not possible to reasonably proceed. NaNs crop up and people don’t expect them to trap when you sort – they expect them to sort to one end, like in Excel.

Honestly, I don't know that most users have thought about this possibility at all. Sure, a sort that matches IEEE total order _might_ be justifiable. But users are as likely to expect that the last item in the sorted collection will be the greatest and that the first item in the sorted collection will be the smallest. Now, you can say that NaN compares larger than everything, everywhere. But the moment that they try to plug that last element into, say, an AppKit UI function, they're toast.

I certainly disagree with ideas of trapping on NaN inside `==` or similar functions, but I really do think that an argument can be made that it is not reasonable to proceed with sorting an array that contains NaN.

After all, NaN is unordered with respect to everything and one cannot sort the unsortable. And, as shown, the `Array.==` implementation is trivially fixable. The entire standard library can be made NaN-safe in like manner.
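The shape of that trivial fix is roughly as follows (a sketch with a hypothetical free function, not the actual PR code): drop the identical-buffer shortcut, since under a partial equivalence a buffer need not compare equal to itself, and compare elementwise.

```swift
// Hypothetical stand-in for Array.==, where the real fix would live.
func elementwiseEqual<T: Equatable>(_ lhs: [T], _ rhs: [T]) -> Bool {
    guard lhs.count == rhs.count else { return false }
    // No "same buffer, therefore equal" fast path: under IEEE ==,
    // [Double.nan] must not compare equal to itself.
    for i in lhs.indices {
        if lhs[i] != rhs[i] { return false }
    }
    return true
}

let a: [Double] = [.nan]
elementwiseEqual(a, a)  // false, consistent with .nan == .nan
```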

My point was, it’s not about what we can do in the standard library. The std lib only has a handful of methods and sure, we can fix them one by one. It’s about whether the standard library defines types and protocols such that it’s reasonable for programmers to use them to write and use generic algorithms correctly. I’m citing the existing std lib implementations as proof that it’s easy to make mistakes. And I think a more complicated approach, with more operators, more properties, more rules, won’t fix this problem.

Well, to my mind, this problem you state really works out to:

(a) People expect generic algorithms that operate on Comparable types to work correctly with floating-point types
(b) Generic algorithms that operate on Comparable types don't work correctly with floating-point types unless the author is very, very careful
(c) People shouldn't have to be very, very careful to write working generic algorithms that work with floating-point types

Which, in turn, really boils down to:

(d) People expect floating-point types not to have numerous unintuitive (but learnable) properties, including NaN being unordered
(e) Floating-point types have numerous unintuitive (but learnable) properties, including NaN being unordered

The reason I'm writing to swift-dev (rather than evolution) is that my interest is in fixing the standard library. I'm not even convinced that this problem you state is fixable, at least on your terms. In the interest of not increasing the API surface area, you would propose to blow away (e) in the generic but not concrete context. Now, while it's true that an alternative to increasing the API surface area is to have the same API exhibit context-specific behaviors, that certainly isn't any less complicated conceptually, as we would then be faced with the notion that floating-point types both have and do not have numerous unintuitive properties, depending on the context in which they are used.

arbitrary duplication in Set/Dictionary etc.

(I disagree that it's arbitrary. If NaN != NaN, then every NaN is properly unique.)
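What that duplication looks like in practice:

```swift
var s: Set<Double> = []
s.insert(.nan)
s.insert(.nan)
// Every NaN is distinct under ==, so both insertions succeed:
s.count           // 2
// Lookup also uses ==, so neither element can ever be found:
s.contains(.nan)  // false
```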

The important point to take from this is not “how do we fix the Standard Library?” but rather “these errors are easy to make” by anyone writing generic code using standard protocols. If the Standard Library can’t get these right, how can we expect others to? There are potentially far worse bugs that could result. A differently-written sorting algorithm could corrupt elements (because it relied on substitutability). Other sorting or searching algorithms could easily go into an infinite loop. These problems exist because the code relies on the documented behavior of the protocol, because if you can’t, then what is the point in documenting that behavior?

It's not that the standard library *can't* get these right, but that it currently *doesn't*, because it documents one set of semantics but implements another, then relies on documented semantics that it knows it does not implement. We both agree that this needs to be fixed.

The question here is whether it is to be fixed by sticking to the documented semantic guarantees of `==` and bringing all implementations into proper conformance, or alternatively sticking to the implemented behavior of `==` and aligning the documented semantic guarantees to that.

I don’t support solutions such as adding a property indicating “containsExceptionalValues” (whatever that means), and expecting every author of a generic algorithm that uses Equatable to remember to call it, and craft custom paranoid behavior (if there is any reasonable behavior) based on it. With recursive conformance landed on master, we finally have a generics system where writing algorithms against Collection can be considered approachable by ordinary users. You no longer have to know things like how Collection.SubSequence needs to be constrained to also be a Collection – it just is. We would be negating this good work to now introduce a whole new set of gotchas that we expect people to know (without the type system even helping them in this case) about how some types, including standard library types, flout the documented rules for Equatable and Comparable, and that you need to use one of a handful of properties to hack in special cases to handle it.

The gotchas aren't new; they arise when using floating point values, originate with the IEEE definition of floating point equivalence, and exist in some form in every language that has implemented collections of floating point values. Crucially, they exist today in Swift; only, we haven't documented it.

And as a user of algorithms, what should you do? If a generic algorithm doesn’t document how it handles these special cases, should you assume it doesn’t? Check the code? Experiment to find out?

This problem also spreads, virus-like, once we have conditional conformance that makes containers equatable when their elements are. [Double] would need to propagate its elements’ “exceptionality", to avoid problems with [[Double]]. Double? will have to do the same.

This isn't a _problem_. In fact, I consider this to be a key _feature_. Naturally, every protocol conformance (conditional or not) must implement all protocol requirements, so if we add additional requirements they must be implemented. What I'm saying here is that *it may be desirable* to have some protocol-based API to distinguish partial from full equivalence relations. If you accept that premise, then it is the logical consequence that if you conditionally conform `Array` to `Equatable`, you will have to implement any new APIs, and in so doing, document how equivalence of arrays of floating point values relates to floating point equivalence. For me, this is a _good thing_: it documents _in code_ something that today is muddled through.

The explanation that a method on `Float` is a "floating-point context" but a method on `[Float]` is *not a "floating point context"* is, IMO, indefensible.

Nevertheless, I will attempt to defend it :)

I find it odd that violating the documented requirements of a protocol is considered defensible, but expecting types to comply with those requirements is indefensible. A principled stance would be to say that Float shouldn’t conform to Equatable (because… it doesn’t!) and to require all calls to supply a predicate (and maybe amending types like Dictionary to allow you to supply one). That won’t fly though – users would complain – so instead we are in this murky ground.

I don't think we should defend violating the documented requirements of a protocol. Either (a) Float should not conform to Equatable (agree, this is a non-starter); (b) how Float conforms to Equatable should be brought into conformance with documented semantics (your stance); or (c) what semantics are documented should be brought into alignment with how conformance is actually implemented (my stance). Naturally, in the last case, additional APIs should be added as needed to make such reduced semantic guarantees useful for generic algorithms.

Later in the thread, you mention a possible fix for sort:

`sort()` is problematic, but not if a custom predicate is supplied.

So, we are potentially trading off one subtlety (that < behaves differently in generic and non-generic contexts) for another (that you need to know that you need to pass in a special predicate for sorting, or you get nonsense results). Knowing when an algorithm requires you to supply a predicate (like sort) vs when handling for the special case is built in (like equatable) seems a far worse complication to me than knowing one rule: that generically, when constrained to Comparable, Float adheres to the requirements of Comparable. Always. That is a consistent rule that you need to learn once and that doesn’t vary depending on which algorithm you’re using.

I would argue that Float should _always_ adhere to the requirements of Comparable, in all contexts. The question is, rather: what can be the requirements of Comparable such that Float can always adhere to them?

Another alternative proposed in previous threads is to give Comparable an additional operator (<=> or .compare(to:) that will always enforce a total ordering, and sort can use that. This is, afaict, C#’s solution – double.NaN < 1.0, 1.0 < double.NaN and double.NaN == double.NaN all return false, but Comparer<double>.Default.compare returns -1, 1 and 0 respectively.
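Swift already vends the IEEE totalOrder predicate concretely as `isTotallyOrdered(belowOrEqualTo:)`, from which a C#-style three-way compare can be sketched (the `totalCompare` name is hypothetical):

```swift
// -1, 0, or 1 under IEEE 754 totalOrder, analogous to C#'s
// Comparer<double>.Default.Compare.
func totalCompare(_ a: Double, _ b: Double) -> Int {
    switch (a.isTotallyOrdered(belowOrEqualTo: b),
            b.isTotallyOrdered(belowOrEqualTo: a)) {
    case (true, true):  return 0
    case (true, false): return -1
    default:            return 1
    }
}

totalCompare(.nan, 1.0)   // 1: (positive) NaN orders above all numbers
totalCompare(-0.0, 0.0)   // -1: totalOrder separates the zeros
totalCompare(1.0, 1.0)    // 0
```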

This is, essentially, the endpoint of what I'm proposing.

Equatable would vend (modulo bikeshedding):
`==`, a partial equivalence relation
`~`, a full equivalence relation
`containsExceptionalValues` (yes, this is a deliberately terrible name, because it's meant to go through bikeshedding), a Boolean value to indicate whether `==` is the same as `~`

Comparable would vend (modulo bikeshedding):
`<`, `>`, `<=`, `>=`, defined as now
`<=>`, as in C# `compare` (or maybe, to emphasize the point, `<~>`)
`containsExceptionalValues`, inherited from `Equatable`, to document the relationship between `<` (etc.) and the spaceship operator

This looks to me to be an absurd mess of operations, none of which will have much hope of being used in a coherent fashion by most people. Should I use == or ~ here? What are the rules again? Will people remember to not use < when they really need <=>? Probably not. Did the author of this framework I’m using remember? Dunno.

The syntax here is not the point (or if it is, it can be bikeshedded). The point I'm trying to make is that what you're criticizing as _incoherent_ is also _inescapable_. Floating-point types have a notion of equivalence that isn't full equivalence. For certain use cases (both concrete and generic), we want that partial equivalence, while for other use cases (both concrete and generic), we truly want full equivalence. To work with floating-point types correctly, a user must know that there is a difference between the two. If there is no hope of "most people" understanding this distinction when one relation is named `==` and the other is named `~`, then _a fortiori_ there is no hope of "most people" understanding the distinction when they're conflated into one operator `==` that has different behaviors in different contexts.

The C# model of compare works because < is not available generically. There is no choice between < and <=>, and so the model is simple and easily understood by both algorithm implementors and users. And if you need a different ordering, you can supply your own custom comparator. As far as I can tell, it’s a good model and users are happy with it. Swift is different, since the concrete < *is* exposed to the generic implementation, but having two possibilities and expecting users to pick is IMO a bad idea. Hence the proposed fix that Float’s Comparable.< is required to be a total order, per the requirements of Comparable, essentially giving us the C# model.

A true C# model would be fine, but the key point of that model to my mind is that partial equivalence and full equivalence are spelled differently (that is, `==` and `Equals`, respectively). It would not work with IEEE `==` being spelled the same way as Comparable `==`. If we were to rename the IEEE operation `&==` instead, then we'd functionally have a design that's broadly similar to the earlier version, only with different names:

Equatable would vend `==`, a full equivalence relation (and `!=`)
Comparable would vend `<`, `>`, `<=`, `>=`, now operators that reflect a total order over the set of all values; and maybe `<=>`
Floating point would additionally vend `&==` and `&<` (and `&!=`, `&>`, `&<=`, `&>=`)

One key difference here would be that the partial equivalence relation would now only be found on floating-point types, and it would not be possible to write a generic algorithm that operates on any partially equatable or equatable type. But the other--and major--issues would be (a) that all concrete uses of floating-point comparison operators would have to be migrated to append an extra `&`; and (b) this syntax suggests that most users want to use `==` *instead of* `&==`, which I'm not sure is the case--and certainly isn't the case if they're trying to do the same things they're used to doing with floating-point values in other languages.

IEEE 754 requires certain behaviors of conforming implementations and is
meant to allow for portable use of floating point. Swift’s floating point
facilities are conformant to that standard, and breaking IEEE 754
conformance has been repeatedly stated to be a non-starter.

The essential idea behind quiet NaN is that it can be used as input in
operations that take floating point values without raising errors. Since
breaking IEEE 754 conformance is a non-starter, trapping on NaN is outside
the realm of available solutions.

It has been pointed out, however, that IEEE does not require a specific
syntax for floating point equivalence, hence the question of whether it can
be spelled differently. However, trapping on NaN is emphatically not an
option.

But I really don’t want to discuss this at all here. The topic of this
thread is about the semantics of Equatable.

···

On Fri, Oct 20, 2017 at 07:21 David Zarzycki <dave@znu.io> wrote:

On Oct 20, 2017, at 07:51, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

On Fri, Oct 20, 2017 at 1:22 AM, Jonathan Hull <jhull@gbis.com> wrote:

+1 for trapping unless using &==. In the case of ‘Float?’ we could also
map to nil.

This is probably a more appropriate discussion for evolution though...

On Oct 19, 2017, at 9:48 PM, Brent Royal-Gordon via swift-dev <swift-dev@swift.org> wrote:

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

D) Must floating-point IEEE-compliant equivalence be spelled `==`?

In my view, this is something open for debate. I see no reason why it
cannot be migrated to `&==` if it were felt that `==` *must* be a full
equivalence relation. I believe this is controversial, however.

I actually got partway through writing up a pitch on this yesterday, but
my opinion is that NaNs are so exceptional, and so prone to misuse, that we
ought to treat them like integer arithmetic overflows: trap when they're
detected, unless you use an `&` variant operator which indicates you know
what you're doing.

I strongly suspect that, in practice, most float-manipulating code is not
prepared to handle NaN and will not do anything sensible in its presence.
For example, Apple platforms use floating-point types for geometry, color
components, GPS locations, etc. Very little of this code will do anything
sensible in the presence of a NaN. Arguably, it'd be better to exclude them
through the type system, but I don't think that's a realistic
possibility—we would need to have done that in a more source-break-friendly
era. But that doesn't have to mean we're completely stuck.

Built-in floating point operators, as well as libc/libm math functions,
are designed to propagate NaN correctly. This is not meant to be a thread
about NaN, and we need to be careful to define the scope of the problem to
be solved from the outset. Ever-expanding discussions, where issues such as
method names turn into debates about the entire standard library, go
nowhere.

The question here is about `==` specifically and how to accommodate
partial equivalence relations. For sanity, we start with the premise that
NaN will forever be as it is.

I support Jonathan’s argument. If Swift wants to trap on NaN to improve
self-consistency and simplicity, then the tradeoff might be worth it. The
alternative, teaching the Equality protocol about NaNs, feels like “the
tail wagging the dog".

In short: what IEEE requires of floating-point hardware is separable from
IEEE’s opinions about language/library design.

Just to be precise: IEEE 754 places no requirements on hardware. The entirety of IEEE 754 is about what *languages* should provide. It just happens to be advantageous to implement many of the requirements directly in hardware.

[The rest of this is a response to the thread as a whole, not to Dave]

I have no philosophical objection to trapping on NaN. IEEE 754 says that the default behavior should be to not trap, but other non-default forms of exception handling* are explicitly allowed by IEEE 754.

From a practical standpoint, it is counter to everything about the way much floating-point hardware is designed, and that should give us some pause. On x86 it’s possible to unmask the “invalid floating point exception”, which results in any operation that generates a NaN faulting. However, it will *not* cause a fault if an operand is already a quiet NaN, so Swift would need to either test every operand that’s read from memory at the time that it’s moved into a register, or test every result.

On some non-x86 architectures (including in particular most ARM implementations) there is no hardware support for unmasking exceptions, so there’s no way to automatically trap on invalid operations; you would have to explicitly check for NaN on every operation. This is much, much more expensive than checking for overflow on integer arithmetic (where for addition / subtraction, it’s just an easily-predicted conditional branch). Including these checks would introduce significant code bloat and slow down naive arithmetic by roughly an order of magnitude on current hardware, which is probably a non-starter.

Trapping only for == is much, much more palatable, but as Xiaodi said, doesn’t actually get you the semantics that you want for ==.

&== is ugly but workable. You will have inevitable bugs from people who naively adapt code from literally any other language that assumes IEEE 754 semantics for ==, however.

– Steve

[*] note that “exception handling” in an IEEE 754 context does not mean what you think it does if you’re coming from a generic CS not-floating-point background.

···


Differing behavior in generic and concrete contexts is simply too subtle
to be understandable to the reader.

Hardly more subtle than the current “Equatable works like this, with
these strong guarantees. Oh, except for some cases it doesn’t, in which
case ¯\_(ツ)_/¯”

I'm not saying that the status quo is a superior alternative.

However, one option is to _weaken_ the guarantees of Equatable such that
it guarantees only partial equivalence for `==`. From the perspective of
documented semantics, it's not subtle at all but a giant hammer of a
change. However, from an actual what-does-the-implementation-do
standpoint, it would be acknowledging what is already true. Only code that
is already broken when used with floating-point values would become
formally "incorrect" in the sense of relying on semantics that are then no
longer guaranteed.

Such a solution would avoid, as you might say, perpetuating the ¯\_(ツ)_/¯
approach to floating point.

I realize that Comparable admits an exception for FP. This is, IMO, a
serious mistake and needs to be reversed. Equatable has no such exception
and rightly so.

The clearest demonstrations of how flawed this approach is can be found
in the Standard Library. You can throw a brick at it and hit an example of
something that’s broken by the presence of .nan: random sort orders, ==
implementations that differ based on the identity of the buffer,

In my view, if a sort algorithm cannot accommodate NaN, it's entirely
acceptable to trap on NaN--and that is a trivial change.

I think this would be user-hostile. This isn’t like out-of-bounds
subscript where it’s just not possible to reasonably proceed. NaNs crop up
and people don’t expect them to trap when you sort – they expect them to
sort to one end, like in Excel.
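For reference, the standard library’s `isTotallyOrdered(belowOrEqualTo:)` (the IEEE-754 totalOrder predicate) can already express that sort-to-one-end behavior as a custom predicate:

```swift
let values: [Double] = [2.0, .nan, 1.0, .infinity]
// IEEE-754 totalOrder: -NaN < -inf < … < +inf < +NaN. Negating the
// reversed non-strict comparison yields the strict ordering that
// `sorted(by:)` expects.
let totallySorted = values.sorted { !$1.isTotallyOrdered(belowOrEqualTo: $0) }
// NaN lands at the high end: [1.0, 2.0, inf, nan]
```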

Honestly, I don't know that most users have thought about this possibility
at all. Sure, a sort that matches IEEE total order _might_ be justifiable.
But users are as likely to expect that the last item in the sorted
collection will be the greatest and that the first item in the sorted
collection will be the smallest. Now, you can say that NaN compares larger
than everything, everywhere. But the moment that they try to plug that last
element into, say, an AppKit UI function, they're toast.

I certainly disagree with ideas of trapping on NaN inside `==` or similar
functions, but I really do think that an argument can be made that it is
not reasonable to proceed with sorting an array that contains NaN.

After all, NaN is unordered with respect to everything and one cannot
sort the unsortable. And, as shown, the `Array.==` implementation is
trivially fixable. The entire standard library can be made NaN-safe in like
manner.
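By way of illustration, one way to make elementwise equality independent of storage (a sketch only; not necessarily the approach taken in the PR) is to drop the buffer-identity shortcut entirely:

```swift
// Sketch: elementwise equality with no buffer-identity fast path, so that
// [Double.nan] == [Double.nan] is false however the arrays are stored.
func elementwiseEqual<T: Equatable>(_ lhs: [T], _ rhs: [T]) -> Bool {
    guard lhs.count == rhs.count else { return false }
    for (l, r) in zip(lhs, rhs) where l != r {
        return false
    }
    return true
}

let a: [Double] = [.nan]
let b = a   // shares storage with `a`
// elementwiseEqual(a, b) is false: NaN != NaN even in a shared buffer
```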

My point was, it’s not about what we can do in the standard library. The
std lib only has a handful of methods and sure, we can fix them one by one.
It’s about whether the standard library defines types and protocols such
that it’s reasonable for programmers to use them to write and use generic
algorithms correctly. I’m citing the existing std lib implementations as
proof that it’s easy to make mistakes. And I think a more complicated
approach, with more operators, more properties, more rules, won’t fix this
problem.

Well, to my mind, this problem you state really works out to:

(a) People expect generic algorithms that operate on Comparable types to
work correctly with floating-point types
(b) Generic algorithms that operate on Comparable types don't work
correctly with floating-point types unless the author is very, very careful
(c) People shouldn't have to be very, very careful to write working
generic algorithms that work with floating-point types

Which, in turn, really boils down to:

(d) People expect floating-point types not to have numerous unintuitive
(but learnable) properties, including NaN being unordered
(e) Floating-point types have numerous unintuitive (but learnable)
properties, including NaN being unordered

The reason I'm writing to swift-dev (rather than evolution) is that my
interest is in fixing the standard library. I'm not even convinced that
this problem you state is fixable, at least on your terms. In the interest
of not increasing the API surface area, you would propose to blow away (e)
in the generic but not concrete context. Now, while it's true that an
alternative to increasing the API surface area is to have the same API
exhibit context-specific behaviors, that certainly isn't any less
complicated conceptually, as we would then be faced with the notion that
floating-point types both have and do not have numerous unintuitive
properties, depending on the context in which they are used.

arbitrary duplication in Set/Dictionary etc.

(I disagree that it's arbitrary. If NaN != NaN, then every NaN is
properly unique.)
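Today’s stdlib behavior illustrates the point:

```swift
var seen = Set<Double>()
seen.insert(.nan)
seen.insert(.nan)
// Both insertions succeed: the second NaN compares unequal to the NaN
// already stored, so Set treats it as a distinct member.
print(seen.count)   // 2
```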

The important point to take from this is not “how do we fix the Standard
Library?” but rather “these errors are easy to make” by anyone writing
generic code using standard protocols. If the Standard Library can’t get
these right, how can we expect others to? There are potentially far worse
bugs that could result. A differently-written sorting algorithm could
corrupt elements (because it relied on substitutability). Other sorting or
searching algorithms could easily go into an infinite loop. These problems
exist because code relies on the documented behavior of the protocol – and
if it can’t, then what is the point in documenting that behavior?

It's not that the standard library *can't* get these right, but that it
currently *doesn't*, because it documents one set of semantics but
implements another, then relies on documented semantics that it knows it
does not implement. We both agree that this needs to be fixed.

The question here is whether it is to be fixed by sticking to the
documented semantic guarantees of `==` and bringing all implementations
into proper conformance, or alternatively sticking to the implemented
behavior of `==` and aligning the documented semantic guarantees to that.

I don’t support solutions such as adding a property indicating
“containsExceptionalValues” (whatever that means), and expecting every
author of a generic algorithm that uses Equatable to remember to call it,
and craft custom paranoid behavior (if there is any reasonable behavior)
based on it. With recursive conformance landed on master, we finally have a
generics system where writing algorithms against Collection can be
considered approachable by ordinary users. You no longer have to know
things like how Collection.SubSequence needs to be constrained to also be a
Collection – it just is. We would be negating this good work to now
introduce a whole new set of gotchas that we expect people to know (without
the type system even helping them in this case) about how some types,
including standard library types, flout the documented rules for Equatable
and Comparable, and that you need to use one of a handful of properties to
hack in special cases to handle it.

The gotchas aren't new; they arise when using floating point values,
originate with the IEEE definition of floating point equivalence, and exist
in some form in every language that has implemented collections of floating
point values. Crucially, they exist today in Swift; we just haven't
documented them.

And as a user of algorithms, what should you do? If a generic algorithm
doesn’t document how it handles these special cases, should you assume it
doesn’t? Check the code? Experiment to find out?

This problem also spreads, virus-like, once we have conditional
conformance that makes containers equatable when their elements are.
[Double] would need to propagate its elements’ “exceptionality” to
avoid problems with [[Double]]. Double? will have to do the same.

This isn't a _problem_. In fact, I consider this to be a key _feature_.
Naturally, every protocol conformance (conditional or not) must implement
all protocol requirements, so if we add additional requirements they must
be implemented. What I'm saying here is that *it may be desirable* to have
some protocol-based API to distinguish partial from full equivalence
relations. If you accept that premise, then it is the logical consequence
that if you conditionally conform `Array` to `Equatable`, you will have to
implement any new APIs, and in so doing, document how equivalence of arrays
of floating point values relates to floating point equivalence. For me,
this is a _good thing_: it documents _in code_ something that today is
muddled through.

The explanation that a method on `Float` is a "floating-point context"
but a method on `[Float]` is *not a "floating point context"* is, IMO,
indefensible.

Nevertheless, I will attempt to defend it :)

I find it odd that violating the documented requirements of a protocol
is considered defensible, but expecting types to comply with those
requirements is indefensible. A principled stance would be to say that
Float shouldn’t conform to Equatable (because… it doesn’t!) and to require
all calls to supply a predicate (and maybe to amend types like Dictionary
to allow you to supply one). That won’t fly though – users would complain –
so instead we are in this murky ground.

I don't think we should defend violating the documented requirements of a
protocol. Either (a) Float should not conform to Equatable (agree, this is
a non-starter); (b) how Float conforms to Equatable should be brought into
conformance with documented semantics (your stance); or (c) what semantics
are documented should be brought into alignment with how conformance is
actually implemented (my stance). Naturally, in the last case, additional
APIs should be added as needed to make such reduced semantic guarantees
useful for generic algorithms.

Later in the thread, you mention a possible fix for sort:

`sort()` is problematic, but not if a custom predicate is supplied.

So, we are potentially trading off one subtlety (that < behaves
differently in generic and non-generic contexts) for another (that you need
to know to pass in a special predicate for sorting, or you get nonsense
results). Knowing when an algorithm requires you to supply a predicate
(like sort) vs. when handling for the special case is built in (like
Equatable) seems a far worse complication to me than knowing one rule:
that generically, when constrained to Comparable, Float adheres to the
requirements of Comparable. Always. That is a consistent rule that you need
to learn once and that doesn’t vary depending on which algorithm you’re
using.

I would argue that Float should _always_ adhere to the requirements of
Comparable, in all contexts. The question is, rather: what can be the
requirements of Comparable such that Float can always adhere to them?

Another alternative proposed in previous threads is to give Comparable an
additional operator (`<=>` or `.compare(to:)`) that will always enforce a
total ordering, and sort can use that. This is, afaict, C#’s solution –
double.NaN < 1.0, 1.0 < double.NaN and double.NaN == double.NaN all return
false, but Comparer<double>.Default.Compare returns -1, 1 and 0
respectively.

This is, essentially, the endpoint of what I'm proposing.

Equatable would vend (modulo bikeshedding):
`==`, a partial equivalence relation
`~`, a full equivalence relation
`containsExceptionalValues` (yes, this is a deliberately terrible name,
because it's meant to go through bikeshedding), a Boolean value to indicate
whether `==` is the same as `~`

Comparable would vend (modulo bikeshedding):
`<`, `>`, `<=`, `>=`, defined as now
`<=>`, as in C#’s `Compare` (or maybe, to emphasize the point, `<~>`)
`containsExceptionalValues`, inherited from `Equatable`, to document the
relationship between `<` (etc.) and the spaceship operator
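A rough sketch of that shape in code (placeholder protocol name to avoid clashing with the real `Equatable`; the `~` semantics chosen for NaN here are one possible full relation, not a settled design):

```swift
infix operator ~ : ComparisonPrecedence

// Placeholder name, per the bikeshedding caveat above.
protocol ProposedEquatable {
    /// Partial equivalence: today's behavior, where NaN != NaN.
    static func == (lhs: Self, rhs: Self) -> Bool
    /// Full equivalence: reflexive for every value, NaN included.
    static func ~ (lhs: Self, rhs: Self) -> Bool
    /// Whether `==` and `~` can disagree for some pair of values.
    static var containsExceptionalValues: Bool { get }
}

extension Double: ProposedEquatable {
    static func ~ (lhs: Double, rhs: Double) -> Bool {
        // One possible full relation: all NaNs form a single equivalence class.
        return lhs == rhs || (lhs.isNaN && rhs.isNaN)
    }
    static var containsExceptionalValues: Bool { return true }
}
```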

This looks to me to be an absurd mess of operations, none of which will
have much hope of being used in a coherent fashion by most people. Should I
use == or ~ here? What are the rules again? Will people remember to not use
< when they really need <=>? Probably not. Did the author of this framework
I’m using remember? Dunno.

The syntax here is not the point (or if it is, it can be bikeshedded). The
point I'm trying to make is that what you're criticizing as _incoherent_ is
also _inescapable_. Floating-point types have a notion of equivalence that
isn't full equivalence. For certain use cases (both concrete and generic),
we want that partial equivalence, while for other use cases (both concrete
and generic), we truly want full equivalence. To work with floating-point
types correctly, a user must know that there is a difference between the
two. If there is no hope of "most people" understanding this distinction
when one relation is named `==` and the other is named `~`, then _a
fortiori_ there is no hope of "most people" understanding the distinction
when they're conflated into one operator `==` that has different behaviors
in different contexts.

The C# model of compare works because < is not available generically.
There is no choice between < and <=>, and so the model is simple and easily
understood by both algorithm implementors and users. And if you need a
different ordering, you can supply your own custom comparator. As far as I
can tell, it’s a good model and users are happy with it. Swift is
different, since the concrete < *is* exposed to the generic
implementation, but having two possibilities and expecting users to pick is
IMO a bad idea. Hence the proposed fix that Float’s Comparable.< is
required to be a total order, per the requirements of Comparable,
essentially giving us the C# model.

A true C# model would be fine, but the key point of that model to my mind
is that partial equivalence and full equivalence are spelled differently
(that is, `==` and `Equals`, respectively). It would not work with IEEE
`==` being spelled the same way as Comparable `==`. If we were to rename
the IEEE operation `&==` instead, then we'd functionally have a design
that's broadly similar to the earlier version, only with different names:

Equatable would vend `==`, a full equivalence relation (and `!=`)
Comparable would vend `<`, `>`, `<=`, `>=`, now operators that reflect a
total order over the set of all values; and maybe `<=>`
Floating point would additionally vend `&==` and `&<` (and `&!=`,
`&>`, `&<=`, `&>=`)
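Under that renaming, the concrete migration would look roughly like this (hypothetical `&==` operator, sketched here in terms of today's IEEE-compliant `==`):

```swift
infix operator &== : ComparisonPrecedence

extension Double {
    /// Hypothetical spelling for IEEE-754 (partial) equality, which today
    /// is spelled `==`.
    static func &== (lhs: Double, rhs: Double) -> Bool {
        return lhs == rhs   // today's IEEE-compliant comparison
    }
}

// Concrete call sites would migrate from `x == y` to `x &== y`:
let unordered = Double.nan &== Double.nan   // false, as IEEE requires
```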

One key difference here would be that the partial equivalence relation
would now only be found on floating-point types, and it would not be
possible to write a generic algorithm that operates on any partially
equatable or equatable type. But the other--and major--issues would be (a)
that all concrete uses of floating-point comparison operators would have to
be migrated to append an extra `&`; and (b) this syntax suggests that most
users want to use `==` *instead of* `&==`, which I'm not sure is the
case--and certainly isn't the case if they're trying to do the same things
they're used to doing with floating-point values in other languages.

What about having the protocol hierarchy look like this? (All names
subject to bikeshedding, of course)

Please see earlier replies summarizing core team members' persuasive
arguments as to why not multiple protocol hierarchies like this.

···

On Wed, Oct 25, 2017 at 1:24 AM, David Sweeris <davesweeris@mac.com> wrote:

On Oct 24, 2017, at 9:06 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:
On Tue, Oct 24, 2017 at 10:08 PM, Ben Cohen <ben_cohen@apple.com> wrote:

On Oct 24, 2017, at 6:48 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:
On Tue, Oct 24, 2017 at 1:55 PM, Ben Cohen <ben_cohen@apple.com> wrote:

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

protocol MaybeEquatable {
    static func ?== (lhs: Self, rhs: Self) -> Bool?
}
protocol MostlyEquatable : MaybeEquatable {
    static func == (lhs: Self, rhs: Self) -> Bool
}

// allows a `MostlyEquatable` or `Equatable` to function as a
// `MaybeEquatable` without any extra code
extension MostlyEquatable {
    static func ?== (lhs: Self, rhs: Self) -> Bool? { return lhs == rhs }
}

protocol Equatable : MostlyEquatable {} // purely a semantic difference, no extra syntax

protocol MaybeComparable : MaybeEquatable {
    static func ?< (lhs: Self, rhs: Self) -> Bool?
    // plus the rest of them
}

protocol MostlyComparable : MaybeComparable, MostlyEquatable {
    static func < (lhs: Self, rhs: Self) -> Bool
    // plus the rest of them
}

// allows a `MostlyComparable` or `Comparable` to function as a
// `MaybeComparable` without any extra code
extension MostlyComparable {
    static func ?< (lhs: Self, rhs: Self) -> Bool? { return lhs < rhs }
    // plus the rest of them
}

protocol Comparable : MostlyComparable, Equatable {} // purely a semantic difference, no extra syntax

extension Double : MostlyComparable {
    static func ?== (lhs: Double, rhs: Double) -> Bool? {
        return lhs.isNaN || rhs.isNaN ? nil : lhs == rhs
    }
    static func ?< (lhs: Double, rhs: Double) -> Bool? {
        return lhs.isNaN || rhs.isNaN
            || (lhs.isInfinite && rhs.isInfinite && lhs.sign == rhs.sign)
            ? nil : lhs < rhs
    }
    static func == (lhs: Double, rhs: Double) -> Bool {
        // whatever current impl is
    }
    static func < (lhs: Double, rhs: Double) -> Bool {
        // whatever current impl is
    }
}

This would let people easily switch between the two kinds of "correct"
generic comparisons (returning a `Bool?` for types that could have invalid
comparisons and a `Bool` for types that can't), as well as easily opting
into using IEEE-754's current "arguably incompatible with sane generic
programming" behavior (by constraining T to "MostlyComparable" instead of
"Comparable") if that's what they really want.

`Collection` could have different implementations of `==` depending on
whether `Element` conforms to "MaybeEquatable" or
"MostlyEquatable/Equatable", solving the "a = [Double.nan]; print(a == a)
// prints true" issue.

This doesn't involve trapping on anything, so dunno what the performance
implications would be. All the extra work would be contained in the "?*"
functions, though, so at least existing code wouldn't suddenly get slower.

- Dave Sweeris

There is something interesting here. I like that ‘Bool?’ really represents what we need from the relation on Float. I think it solves the objection that Xiaodi mentioned with a PartialEq protocol. If everyone just switches to using PartialEq in generic contexts because they want it to work with Floats (which is the outcome he was worried about)… then they will be forced to deal with the optional! This brings Swift’s strengths into play, and we will have gained correctness.

The issue is migration though. Let’s say we name the partial equivalence relation ‘~’ and then we remove Float’s Equatable conformance (no more == for Float). To migrate, we would need to replace any ‘a == b’ where there is a Float with ‘(a ~ b) == true’. Not pretty, but not impossible.
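That migration might look like this (hypothetical ‘~’ operator returning `Bool?`, per the ‘Bool?’ idea being discussed):

```swift
infix operator ~ : ComparisonPrecedence

extension Double {
    /// Hypothetical partial equivalence: nil when NaN makes the
    /// comparison unordered.
    static func ~ (lhs: Double, rhs: Double) -> Bool? {
        return (lhs.isNaN || rhs.isNaN) ? nil : lhs == rhs
    }
}

let a = 1.0
let b = Double.nan
// Before: a == b            // false, NaN silently swallowed
// After:  (a ~ b) == true   // still false, but the unordered case is now
//                           // an explicit nil the caller can inspect
```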

The trouble spot will be where someone has written their own method which is generic on Equatable, and they are passing Floats to it. We need to be careful about the warning given, so that it sends them in the right direction. The good news is that if they do rewrite the function to use ‘~' it should actually cause them to fix bugs which were there (or at least consider the case of NaN) when dealing with Floats. So, overall good things, but a bit annoying while transitioning.

We should keep thinking/discussing. I am convinced there is an interesting thread to pull on here...

···

On Oct 24, 2017, at 11:25 PM, David Sweeris via swift-dev <swift-dev@swift.org> wrote:

On Oct 24, 2017, at 9:06 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org <mailto:swift-dev@swift.org>> wrote:

On Tue, Oct 24, 2017 at 10:08 PM, Ben Cohen <ben_cohen@apple.com <mailto:ben_cohen@apple.com>> wrote:

On Oct 24, 2017, at 6:48 PM, Xiaodi Wu <xiaodi.wu@gmail.com <mailto:xiaodi.wu@gmail.com>> wrote:

On Tue, Oct 24, 2017 at 1:55 PM, Ben Cohen <ben_cohen@apple.com <mailto:ben_cohen@apple.com>> wrote:

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org <mailto:swift-dev@swift.org>> wrote:

Differing behavior in generic and concrete contexts is simply too subtle to be understandable to the reader.

Hardly more subtle then the current “Equatable works like this, with these strong guarantees. Oh, except for some cases it doesn’t, in which case ¯\_(ツ)_/¯”

I'm not saying that the status quo is a superior alternative.

However, one option is to _weaken_ the guarantees of Equatable such that it guarantees only partial equivalence for `==`. From the perspective of documented semantics, it's not subtle at all but a giant hammer of a change. However, from an actual what-does-the-implementation-do standpoint, it would be acknowledging what is already true. Only code that is already broken when used with floating-point values would become formally "incorrect" in the sense of relying on semantics that are then no longer guaranteed.

Such a solution would avoid, as you might say, perpetuating the ¯\_(ツ)_/¯ approach to floating point.

I realize that Comparable admits an exception for FP. This is, IMO, a serious mistake and needs to be reversed. Equatable has no such exception and rightly so.

The clearest demonstrations of how flawed this approach is can be found in the Standard Library. You can throw a brick at it and hit an example of something that’s broken by the presence of .nan: random sort orders, == implementations that differ based on the identify of the buffer,

In my view, if a sort algorithm cannot accommodate NaN, it's entirely acceptable to trap on NaN--and that is a trivial change.

I think this would be user-hostile. This isn’t like out-of-bounds subscript where it’s just not possible to reasonably proceed. NaNs crop up and people don’t expect them to trap when you sort – they expect them to sort to one end, like in Excel.

Honestly, I don't know that most users have thought about this possibility at all. Sure, a sort that matches IEEE total order _might_ be justifiable. But users are as likely to expect that the last item in the sorted collection will be the greatest and that the first item in the sorted collection will be the smallest. Now, you can say that NaN compares larger than everything, everywhere. But the moment that they try to plug that last element into, say, an AppKit UI function, they're toast.

I certainly disagree with ideas of trapping on NaN inside `==` or similar functions, but I really do think that an argument can be made that it is not reasonable to proceed with sorting an array that contains NaN.

After all, NaN is unordered with respect to everything and one cannot sort the unsortable. And, as shown, the `Array.==` implementation is trivially fixable. The entire standard library can be made NaN-safe in like manner.

My point was, it’s not about what we can do in the standard library. The std lib only has a handful of methods and sure, we can fix them one by one. It’s about whether the standard library defines types and protocols such that it’s reasonable for programmers to use them to write and use generic algorithms correctly. I’m citing the existing std lib implementations as proof that it’s easy to make mistakes. And I think a more complicated approach, with more operators, more properties, more rules, won’t fix this problem.

Well, to my mind, this problem you state really works out to:

(a) People expect generic algorithms that operate on Comparable types to work correctly with floating-point types
(b) Generic algorithms that operate on Comparable types don't work correctly with floating-point types unless the author is very, very careful
(c) People shouldn't have to be very, very careful to write working generic algorithms that work with floating-point types

Which, in turn, really boils down to:

(d) People expect floating-point types not to have numerous unintuitive (but learnable) properties, including NaN being unordered
(e) Floating-point types have numerous unintuitive (but learnable) properties, including NaN being unordered

The reason I'm writing to swift-dev (rather than evolution) is that my interest is in fixing the standard library. I'm not even convinced that this problem you state is fixable, at least on your terms. In the interest of not increasing the API surface area, you would propose to blow away (e) in the generic but not concrete context. Now, while it's true that an alternative to increasing the API surface area is to have the same API exhibit context-specific behaviors, that certainly isn't any less complicated conceptually, as we would then be faced with the notion that floating-point types both have and do not have numerous unintuitive properties, depending on the context in which they are used.

arbitrary duplication in Set/Dictionary etc.

(I disagree that it's arbitrary. If NaN != NaN, then every NaN is properly unique.)

The important point to take from this is not “how do we fix the Standard Library?” but rather “these errors are easy to make” by anyone writing generic code using standard protocols. If the Standard Library can’t get these right, how can we expect others to? There are potentially far worse bugs that could result. A differently-written sorting algorithm could corrupt elements (because it relied on substitutability). Other sorting or searching algorithms could easily go into an infinite loop. These problems exist because the code relies on the documented behavior of the protocol, because if you can’t, then what is the point in documenting that behavior?

It's not that the standard library *can't* get these right, but that it currently *doesn't*, because it documents one set of semantics but implements another, then relies on documented semantics that it knows it does not implement. We both agree that this needs to be fixed.

The question here is whether it is to be fixed by sticking to the documented semantic guarantees of `==` and bringing all implementations into proper conformance, or alternatively sticking to the implemented behavior of `==` and aligning the documented semantic guarantees to that.

I don’t support solutions such as adding a property indicating “containsExceptionalValues” (whatever that means), and expecting every author of a generic algorithm that uses Equatable to remember to call it, and craft custom paranoid behavior (if there is any reasonable behavior) based on it. With recursive conformance landed on master, we finally have a generics system where writing algorithms against Collection can be considered approachable by ordinary users. You no longer have to know things like how Collection.SubSequence needs to be constrained to also be a Collection – it just is. We would be negating this good work to now introduce a whole new set of gotchas that we expect people to know (without the type system even helping them in this case) about how some types, including standard library types, flout the documented rules for Equatable and Comparable, and that you need to use one of a handful of properties to hack in special cases to handle it.

The gotchas aren't new; they arise when using floating point values, originate with the IEEE definition of floating point equivalence, and exist in some form in every language that has implemented collections of floating point values. Crucially, they exist today in Swift; only, we haven't documented it.
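For concreteness, the undocumented status quo is easy to demonstrate in today's Swift (the array case is exactly the inconsistency under discussion):

```swift
let nan = Double.nan

print(nan == nan)                          // false: IEEE `==` is not reflexive for NaN
print([nan].contains(nan))                 // false: generic code inherits the irreflexivity
print([nan].contains(where: { $0.isNaN })) // true: only an explicit isNaN test finds it

let a = [nan]
print(a == a)         // may print true, via the buffer-identity short-circuit
print([nan] == [nan]) // false: distinct buffers fall back to elementwise `==`
```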

And as a user of algorithms, what should you do? If a generic algorithm doesn’t document how it handles these special cases, should you assume it doesn’t? Check the code? Experiment to find out?

This problem also spreads, virus-like, once we have conditional conformance that makes containers equatable when their elements are. [Double] would need to propagate its elements’ “exceptionality” to avoid problems with [[Double]]. Double? will have to do the same.

This isn't a _problem_. In fact, I consider this to be a key _feature_. Naturally, every protocol conformance (conditional or not) must implement all protocol requirements, so if we add additional requirements they must be implemented. What I'm saying here is that *it may be desirable* to have some protocol-based API to distinguish partial from full equivalence relations. If you accept that premise, then it is the logical consequence that if you conditionally conform `Array` to `Equatable`, you will have to implement any new APIs, and in so doing, document how equivalence of arrays of floating point values relates to floating point equivalence. For me, this is a _good thing_: it documents _in code_ something that today is muddled through.

The explanation that a method on `Float` is a "floating-point context" but a method on `[Float]` is *not a "floating point context"* is, IMO, indefensible.

Nevertheless, I will attempt to defend it :)

I find it odd that violating the documented requirements of a protocol is considered defensible, but expecting types to comply with those requirements is indefensible. A principled stance would be to say that Float shouldn’t conform to Equatable (because… it doesn’t!) and to require all calls to supply a predicate (and maybe amending types like Dictionary to allow you to supply one). That won’t fly though – users would complain – so instead we are in this murky ground.

I don't think we should defend violating the documented requirements of a protocol. Either (a) Float should not conform to Equatable (agree, this is a non-starter); (b) how Float conforms to Equatable should be brought into conformance with documented semantics (your stance); or (c) what semantics are documented should be brought into alignment with how conformance is actually implemented (my stance). Naturally, in the last case, additional APIs should be added as needed to make such reduced semantic guarantees useful for generic algorithms.

Later in the thread, you mention a possible fix for sort:

`sort()` is problematic, but not if a custom predicate is supplied.

So, we are potentially trading off one subtlety (that < behaves differently in generic and non-generic contexts) for another (that you need to know that you need to pass in a special predicate for sorting, or you get nonsense results). Knowing when an algorithm requires you to supply a predicate (like sort) vs when handling for the special case is built in (like equatable) seems a far worse complication to me than knowing one rule: that generically when constrained to Comparable, Float adheres to the requirements of Comparable. Always. That is a consistent rule that you need to learn once and that doesn’t vary depending on which algorithm you’re using.

I would argue that Float should _always_ adhere to the requirements of Comparable, in all contexts. The question is, rather: what can be the requirements of Comparable such that Float can always adhere to them?

Another alternative proposed in previous threads is to give Comparable an additional operator (`<=>` or `.compare(to:)`) that will always enforce a total ordering, and sort can use that. This is, afaict, C#’s solution – double.NaN < 1.0, 1.0 < double.NaN and double.NaN == double.NaN all return false, but Comparer<double>.Default.compare returns -1, 1 and 0 respectively.
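As a point of reference, a C#-style total-order comparison can be sketched in today's Swift on top of the standard library's IEEE-754 totalOrder predicate, `isTotallyOrdered(belowOrEqualTo:)`. (One difference: IEEE totalOrder sorts positive NaN above +infinity, whereas C#'s comparer sorts NaN below everything; both, however, yield a full ordering.)

```swift
// A sketch of a total-ordering comparison for Double, built on the standard
// library's `isTotallyOrdered(belowOrEqualTo:)` (IEEE-754 totalOrder).
func totalOrderCompare(_ a: Double, _ b: Double) -> Int {
    if a.isTotallyOrdered(belowOrEqualTo: b) {
        // If `b` is also totally ordered below or equal to `a`,
        // the two values are equivalent in the total order.
        return b.isTotallyOrdered(belowOrEqualTo: a) ? 0 : -1
    }
    return 1
}

print(totalOrderCompare(1.0, .nan))  // -1: +NaN sorts above every number in totalOrder
print(totalOrderCompare(.nan, .nan)) //  0: reflexive, unlike IEEE `==`
```

Such a `<=>`-style operation is reflexive, antisymmetric, and total, so a sort built on it cannot loop or corrupt elements even in the presence of NaN.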

This is, essentially, the endpoint of what I'm proposing.

Equatable would vend (modulo bikeshedding):
`==`, a partial equivalence relation
`~`, a full equivalence relation
`containsExceptionalValues` (yes, this is a deliberately terrible name, because it's meant to go through bikeshedding), a Boolean value to indicate whether `==` is the same as `~`

Comparable would vend (modulo bikeshedding):
`<`, `>`, `<=`, `>=`, defined as now
`<=>`, as in C# `compare` (or maybe, to emphasize the point, `<~>`)
`containsExceptionalValues`, inherited from `Equatable`, to document the relationship between `<` (etc.) and the spaceship operator
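In code, the Equatable half of this strawman might look like the following sketch (all names are placeholders per the bikeshedding caveat; `StrawmanEquatable` is used here only so the sketch doesn't collide with the real `Equatable`):

```swift
// `==` keeps its IEEE partial equivalence, `~` is a full equivalence
// relation, and a static flag documents whether the two coincide.
infix operator ~ : ComparisonPrecedence

protocol StrawmanEquatable {
    static func == (lhs: Self, rhs: Self) -> Bool  // partial equivalence
    static func ~ (lhs: Self, rhs: Self) -> Bool   // full equivalence
    static var containsExceptionalValues: Bool { get }
}

extension Double: StrawmanEquatable {
    // `==` is the existing IEEE-754 equality from the standard library.
    static func ~ (lhs: Double, rhs: Double) -> Bool {
        (lhs.isNaN && rhs.isNaN) || lhs == rhs  // NaN ~ NaN is true
    }
    static var containsExceptionalValues: Bool { true }  // `==` and `~` differ
}

print(Double.nan == Double.nan) // false: partial equivalence
print(Double.nan ~ Double.nan)  // true: full equivalence
```

Generic code could then branch on `containsExceptionalValues` to choose between the two relations, which is exactly the API surface being debated.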

This looks to me to be an absurd mess of operations, none of which will have much hope of being used in a coherent fashion by most people. Should I use == or ~ here? What are the rules again? Will people remember to not use < when they really need <=>? Probably not. Did the author of this framework I’m using remember? Dunno.

The syntax here is not the point (or if it is, it can be bikeshedded). The point I'm trying to make is that what you're criticizing as _incoherent_ is also _inescapable_. Floating-point types have a notion of equivalence that isn't full equivalence. For certain use cases (both concrete and generic), we want that partial equivalence, while for other use cases (both concrete and generic), we truly want full equivalence. To work with floating-point types correctly, a user must know that there is a difference between the two. If there is no hope of "most people" understanding this distinction when one relation is named `==` and the other is named `~`, then _a fortiori_ there is no hope of "most people" understanding the distinction when they're conflated into one operator `==` that has different behaviors in different contexts.

The C# model of compare works because < is not available generically. There is no choice between < and <=>, and so the model is simple and easily understood by both algorithm implementors and users. And if you need a different ordering, you can supply your own custom comparator. As far as I can tell, it’s a good model and users are happy with it. Swift is different, since the concrete < is exposed to the generic implementation, but having two possibilities and expecting users to pick is IMO a bad idea. Hence the proposed fix that Float’s Comparable.< is required to be a total order, per the requirements of Comparable, essentially giving us the C# model.

A true C# model would be fine, but the key point of that model to my mind is that partial equivalence and full equivalence are spelled differently (that is, `==` and `Equals`, respectively). It would not work with IEEE `==` being spelled the same way as Comparable `==`. If we were to rename the IEEE operation `&==` instead, then we'd functionally have a design that's broadly similar to the earlier version, only with different names:

Equatable would vend `==`, a full equivalence relation (and `!=`)
Comparable would vend `<`, `>`, `<=`, `>=`, now operators that reflect a total order over the set of all values; and maybe `<=>`
Floating point would additionally vend `&==` and `&<` (and `&!=`, `&>`, `&<=`, `&>=`)

One key difference here would be that the partial equivalence relation would now only be found on floating-point types, and it would not be possible to write a generic algorithm that operates on any partially equatable or equatable type. But the other--and major--issues would be (a) that all concrete uses of floating-point comparison operators would have to be migrated to append an extra `&`; and (b) this syntax suggests that most users want to use `==` *instead of* `&==`, which I'm not sure is the case--and certainly isn't the case if they're trying to do the same things they're used to doing with floating-point values in other languages.

What about having the protocol hierarchy look like this? (All names subject to bikeshedding, of course)

// `?==` and `?<` are custom operators and must be declared before use:
infix operator ?== : ComparisonPrecedence
infix operator ?< : ComparisonPrecedence

protocol MaybeEquatable {
  static func ?== (lhs: Self, rhs: Self) -> Bool?
}
protocol MostlyEquatable : MaybeEquatable {
  static func == (lhs: Self, rhs: Self) -> Bool
}
extension MostlyEquatable {
  static func ?== (lhs: Self, rhs: Self) -> Bool? { return lhs == rhs } // allows a `MostlyEquatable` or `Equatable` to function as a `MaybeEquatable` without any extra code
}
protocol Equatable : MostlyEquatable {} // purely a semantic difference, no extra syntax

protocol MaybeComparable : MaybeEquatable {
  static func ?< (lhs: Self, rhs: Self) -> Bool?
  // plus the rest of them
}
protocol MostlyComparable : MaybeComparable, MostlyEquatable {
  static func < (lhs: Self, rhs: Self) -> Bool
  // plus the rest of them
}
extension MostlyComparable {
  static func ?< (lhs: Self, rhs: Self) -> Bool? { return lhs < rhs } // allows a `MostlyComparable` or `Comparable` to function as a `MaybeComparable` without any extra code
  // plus the rest of them
}
protocol Comparable : MostlyComparable, Equatable {} // purely a semantic difference, no extra syntax

extension Double : MostlyComparable {
  static func ?== (lhs: Double, rhs: Double) -> Bool? {
    return lhs.isNaN || rhs.isNaN ? nil : lhs == rhs
  }
  static func ?< (lhs: Double, rhs: Double) -> Bool? {
    return lhs.isNaN || rhs.isNaN || (lhs.isInfinite && rhs.isInfinite && lhs.sign == rhs.sign) ? nil : lhs < rhs
  }
  static func == (lhs: Double, rhs: Double) -> Bool {
    // whatever current impl is
  }
  static func < (lhs: Double, rhs: Double) -> Bool {
    // whatever current impl is
  }
}

This would let people easily switch between the two kinds of "correct" generic comparisons (returning a `Bool?` for types that could have invalid comparisons and a `Bool` for types that can't), as well as easily opting into using IEEE-754's current "arguably incompatible with sane generic programming" behavior (by constraining T to "MostlyComparable" instead of "Comparable") if that's what they really want.

`Collection` could have different implementations of `==` depending on whether `Element` conforms to "MaybeEquatable" or "MostlyEquatable/Equatable", solving the "a = [Double.nan]; print(a == a) // prints true" issue.
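A self-contained sketch of that elementwise behavior (protocol and operator names are the placeholders from the sketch above; `maybeEquals` is a hypothetical helper standing in for a `Collection` conformance):

```swift
// Elementwise comparison of two arrays returns nil as soon as any
// pairwise comparison is invalid, propagating the "exceptionality".
infix operator ?== : ComparisonPrecedence

protocol MaybeEquatable {
    static func ?== (lhs: Self, rhs: Self) -> Bool?
}

extension Double: MaybeEquatable {
    static func ?== (lhs: Double, rhs: Double) -> Bool? {
        lhs.isNaN || rhs.isNaN ? nil : lhs == rhs
    }
}

func maybeEquals<T: MaybeEquatable>(_ lhs: [T], _ rhs: [T]) -> Bool? {
    guard lhs.count == rhs.count else { return false }
    for (l, r) in zip(lhs, rhs) {
        guard let same = l ?== r else { return nil }  // invalid comparison
        if !same { return false }
    }
    return true
}

print(maybeEquals([1.0, 2.0], [1.0, 2.0]) as Any)           // true
print(maybeEquals([1.0, Double.nan], [1.0, Double.nan]) as Any) // nil
```

Returning `Bool?` makes the "invalid comparison" case impossible to ignore at the use site; callers must decide for themselves whether nil means unequal, equal, or an error.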

This doesn't involve trapping on anything, so dunno what the performance implications would be. All the extra work would be contained in the "?*" functions, though, so at least existing code wouldn't suddenly get slower.

- Dave Sweeris
_______________________________________________
swift-dev mailing list
swift-dev@swift.org
https://lists.swift.org/mailman/listinfo/swift-dev

I don’t know about Brent, but I was only speaking about the semantics/behavior of == with respect to NaN. Trap on NaN for ‘==‘, but restore full IEEE behavior by using ‘&==‘. For ‘Float?’ we could treat NaN as nil for ‘==‘. ‘&==‘ would still have the IEEE behavior.

This restores reflexivity and we can count on == being a true equivalence relation in our generic algorithms. Lack of proper handling of NaN is almost a stereotypical cause of bugs. Looking at code written for Floats, seeing &== will be a sign that the original programmer considered the possibility of NaN.

Thanks,
Jon

···

On Oct 20, 2017, at 7:36 AM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Fri, Oct 20, 2017 at 07:21 David Zarzycki <dave@znu.io> wrote:

On Oct 20, 2017, at 07:51, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

On Fri, Oct 20, 2017 at 1:22 AM, Jonathan Hull <jhull@gbis.com> wrote:
+1 for trapping unless using &==. In the case of ‘Float?’ we could also map to nil.

This is probably a more appropriate discussion for evolution though...

On Oct 19, 2017, at 9:48 PM, Brent Royal-Gordon via swift-dev <swift-dev@swift.org> wrote:

On Oct 19, 2017, at 4:29 PM, Xiaodi Wu via swift-dev <swift-dev@swift.org> wrote:

D) Must floating-point IEEE-compliant equivalence be spelled `==`?

In my view, this is something open for debate. I see no reason why it cannot be migrated to `&==` if it were felt that `==` *must* be a full equivalence relation. I believe this is controversial, however.

I actually got partway through writing up a pitch on this yesterday, but my opinion is that NaNs are so exceptional, and so prone to misuse, that we ought to treat them like integer arithmetic overflows: trap when they're detected, unless you use an `&` variant operator which indicates you know what you're doing.

I strongly suspect that, in practice, most float-manipulating code is not prepared to handle NaN and will not do anything sensible in its presence. For example, Apple platforms use floating-point types for geometry, color components, GPS locations, etc. Very little of this code will do anything sensible in the presence of a NaN. Arguably, it'd be better to exclude them through the type system, but I don't think that's a realistic possibility—we would need to have done that in a more source-break-friendly era. But that doesn't have to mean we're completely stuck.

Built-in floating point operators, as well as libc/libm math functions, are designed to propagate NaN correctly. This is not meant to be a thread about NaN, and we need to be cautious to define the scope of the problem to be solved from the outset. Ever-expanding discussions, where issues such as method names turn into debates about the entire standard library, go nowhere.

The question here is about `==` specifically and how to accommodate partial equivalence relations. For sanity, we start with the premise that NaN will forever be as it is.

I support Jonathan’s argument. If Swift wants to trap on NaN to improve self-consistency and simplicity, then the tradeoff might be worth it. The alternative, teaching the Equality protocol about NaNs, feels like “the tail wagging the dog".

In short: what IEEE requires of floating-point hardware is separable from IEEE’s opinions about language/library design.

IEEE 754 requires certain behaviors of conforming implementations and is meant to allow for portable use of floating point. Swift’s floating point facilities are conformant to that standard, and breaking IEEE 754 conformance has been repeatedly stated to be a non-starter.

The essential idea behind quiet NaN is that it can be used as input in operations that take floating point values without raising errors. Since breaking IEEE 754 conformance is a non-starter, trapping on NaN is outside the realm of available solutions.
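Quiet NaN's propagation behavior can be seen directly:

```swift
let x = Double.nan

// Quiet NaN flows through arithmetic without raising errors, as IEEE 754 intends:
print((x + 1.0).isNaN)      // true
print((x * 0.0).isNaN)      // true
print(x.squareRoot().isNaN) // true
```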

It has been pointed out, however, that IEEE does not require a specific syntax for floating point equivalence, hence the question of whether it can be spelled differently. However, trapping on NaN is emphatically not an option.

But I really don’t want to discuss this at all here. The topic of this thread is about the semantics of Equatable.


On the other hand, changing the spelling of IEEE `==` was an option in your opening post, represented by demand 4 and question D.

4) IEEE-compliant floating-point equivalence must be spelled `==` and not some alternative such as `&==`.
D) Must floating-point IEEE-compliant equivalence be spelled `==`?

In my view, this is something open for debate. I see no reason why it cannot be migrated to `&==` if it were felt that `==` *must* be a full equivalence relation. I believe this is controversial, however.

What you call the “proximate issue” goes away if the solution to what you call the “distal issue” involves requiring `Equatable` conforming types to provide an implementation of `==` that represents a full equivalence relation. It looks to me like many people are suggesting that this is exactly what we should do.

I'm not a numerics expert by any means and am therefore cautious to offer my opinion, but rejecting demand 4 appears to be the best option to me. One advantage of this approach, in addition to providing a full equivalence relation for floating point’s Equatable conformance, is that it makes it clear when an algorithm was written with IEEE semantics in mind, or when the programmer did not desire or may not have been aware of IEEE semantics.

IMO, the more general issue of partial equivalence relations should be orthogonal. I don’t believe NaN should drive the semantic requirements of `Equatable` and I don’t believe IEEE should prevent floating point types from providing an Equatable conformance. Perhaps this is a sign of my ignorance of numerics, but that’s the solution that seems to make the best tradeoffs. I am happy to let others decide whether `NaN == NaN` should trap or return `true`.


Ah, about == specifically. This is an interesting idea. However, trapping on NaN doesn't meet the semantic guarantee for Equatable that for *any* value a, a == a. This means that you continue to have issues with methods like Array.== that rely on that guarantee, since it would compare equal sometimes and trap sometimes.

(It’s also the case that this would go against the spirit and probably the letter of IEEE 754, since you’re vending a comparison operation that traps on a quiet NaN.)

···

On Fri, Oct 20, 2017 at 10:05 Jonathan Hull <jhull@gbis.com> wrote:
