Swift is “designed by committee” the same way all these IEEE standards are written. Saying the IEEE standard is infallible because it was written by a panel of floating point experts is as ridiculous as saying Swift is infallible because it was designed by a panel of API method naming experts.
Appealing to the standard one way or another isn't very useful in this case, IMO; IEEE only defines these operations, it doesn't require them to be spelled any specific way, and in fact, the comparison operators are defined as essentially returning `Optional<Bool>` ("unordered" is a distinct condition from less than, greater than, or equal); it's C tradition that folds the "nil" (unordered) and false results together, not IEEE. Nonetheless, I'm willing to take Steve's word that that tradition could well be too entrenched to challenge.
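To make the "four-way comparison" point concrete, here is a minimal sketch (the function name `ieeeLess` is invented, not an IEEE or Swift API) of what a less-than predicate looks like if the unordered case is kept distinct rather than folded into false:

```swift
// Hypothetical sketch: IEEE 754 less-than as a three-valued result,
// where nil models the distinct "unordered" outcome.
func ieeeLess(_ a: Double, _ b: Double) -> Bool? {
    if a.isNaN || b.isNaN { return nil }  // unordered: neither <, >, nor ==
    return a < b
}

let ordered = ieeeLess(1, 2)        // Optional(true)
let unordered = ieeeLess(.nan, 2)   // nil — C tradition folds this into false
```

The C-style `<` is then just `ieeeLess(a, b) ?? false`, which is exactly the folding described above.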
No, IEEE doesn't require any operator to be spelled in any way.
What I'm saying is that justifying a change in Swift syntax on the basis that we will 'fix' what IEEE got 'wrong' is rather a large claim: there is no reason to believe a priori that any design proposed here will achieve a better result for Swift than the IEEE 754 semantics we currently adopt, in the syntax in which we've adopted them (which is common to many languages). On the other hand, there's plenty of reason to be skeptical: on one hand we have extensive experience and wide adoption, across an array of languages, of a standard created by floating-point experts; on the other, a newly suggested, never-implemented modification proposed by non-floating-point experts.
On the other hand, the proposal here is working from the starting point that the task is to reconcile a type with IEEE 754 semantics for concrete comparison with total equivalence relations for `Equatable` conformance; it's found a way that this can happen without claiming, as Nevin does, that 'there is literally no circumstance in which [IEEE] behavior is useful' and simply shuffling IEEE semantics off into a dark corner.
Nobody is saying that IEEE got anything wrong - only that Swift got it wrong by trying to shoehorn the `Comparable`/`Equatable` operators into doing what IEEE specified.
I maintain that experts and their algorithms which depend on this behaviour are in the minority. There is a clear and concise syntactic option which can take the place of the currently-borked operators.
The proposed changes, while they would fix some use-cases, are absolutely not clear at the point-of-use.
(Apologies for delayed responses to this thread.)
I'm flattered by the hyperbole, but I don't know that expertise in floating-point math is the most relevant thing here, as this is fundamentally a language design question rather than a floating-point question.
The basic shape of the problem, as I see it, is as follows:
- We want to have conformance to IEEE 754. While I won't argue that floating-point gets everything right, there are a bunch of things that it very much does get right, and those are worth keeping. See Notes defending floating-point below. This constraint requires that IEEE 754 comparisons be available, but does not place any requirement on how they are expressed in the language.
- We want to have easily-accessible totally-ordered comparisons for all basic numeric types. Designing a language in a vacuum, the obvious decision, then, would be to use the "usual" comparison operators for the total order, and use `&==` and `&<` or whatever crazy thing we invent for IEEE 754 comparisons.
- We are not designing a language in a vacuum. "Swift is a pragmatic language," and cannot ignore that there are years' worth of Swift code that would be broken by such a change, and that this would be a departure from essentially every other mainstream programming language.
There is no solution that satisfies all of these points (or even really just the last two). I think that several people on this thread are hoping for a perfect solution. It does not exist. There are tradeoffs, and we have to pick. Some options:
- Leave things as they are. This is basically the status quo for most languages, and while it's fairly annoying, people are at least used to it, and while it causes some bugs, it has not proven to be totally catastrophic. This throws out the second and third concerns.
- Completely overhaul the design of `FloatingPoint` and the concrete types, to eliminate NaN and/or unify the concepts of NaN and `Optional`. A really fun project to consider prototyping. Would require sweeping changes to integer arithmetic as well in order to do in a principled manner, and good to keep in mind for a future major breaking revision of Swift or for Your Next Language Project. Wildly out of scope at this point. This throws out pragmatism entirely.
- Make the comparison operators total, and move the IEEE 754 comparisons under new names. A significant breaking change. It's tempting to assume that any programs that depend on the behavior of IEEE comparisons with NaN have bugs, but that is a mistake. There is real code that depends on using `x != x` to test for NaN (especially in any of several languages that bind IEEE 754 arithmetic but do not provide a complete library with an `isNaN` predicate). There is real code that depends on `.nan < x` or `x > .nan` always evaluating to false to avoid out-of-range table lookups. People are going to port algorithms from other languages, and deviation from this behavior, however you feel about it aesthetically, will introduce bugs. This also mostly throws out pragmatism, though possibly less than (2) would.
- This proposal, or something like it. Not a totally satisfactory resolution to the second concern, but better than doing nothing. I think we can find something that mostly works by tweaking around the edges, but there is undeniably a certain amount of sweeping things under the rug. Personally, I am generally delighted to make things "work" for simple use cases and require some nuance for library writers and other experts, but I also understand the ick factor.
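The NaN behaviors that real code depends on, as described in option (3) above, are easy to observe in Swift today:

```swift
// Current Swift (and C-tradition) IEEE 754 comparison semantics.
let x = Double.nan
let isNaN = (x != x)              // true: the classic portable NaN test
let below = Double.nan < 1.0      // false: comparisons with NaN are unordered
let above = 1.0 > Double.nan      // false, for the same reason
```

An out-of-range table lookup guarded by `if index < limit { ... }` therefore safely skips the body when `index` is NaN, which is exactly the pattern a total-order redesign would silently break.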
Notes defending floating-point
There's a lot of FUD around floating-point generally and IEEE 754 specifically. Most of this is based on misunderstanding, and overlooks two absolutely critical points in favor of floating-point:
- The arithmetic is closed and well-defined. There is no undefined behavior, and almost[†] no implementation-defined behavior. Because of this, despite floating-point's undeserved reputation for unpredictability, under minimal assumptions on the language model and a well-behaved compiler, it is possible to write floating-point algorithms that are fully portable across both languages and architectures. The difficulty of doing so with integer arithmetic is, surprisingly, significantly higher.
- The arithmetic is widely and efficiently implemented. Until quite recently, the highest-throughput option for computation on almost any platform was floating-point. This has changed somewhat with the rise of very low-precision "neural" accelerators, but the basic point still stands. With the exception of the smallest embedded CPUs, you can depend on the arithmetic being available and fast.
In short, the IEEE 754 standard mostly does not "need fixing". Frankly it works more dependably than almost anything else in computing. There are a number of decisions that I, or the committee, would probably not make if starting from scratch today, but on the whole it has been an astounding success.
† There are a few corners of implementation-defined behavior, such as the encoding of decimal values and NaNs, which are annoying from a language / standard-library implementor's point of view, but--with well-considered abstractions provided by the aforementioned individuals--are utterly inconsequential from the user's perspective, because the representable values and the arithmetic are fully defined.
There are also minor corners of implementation-defined behavior around the handling of flags ("exceptions" in the parlance of IEEE 754, but they do not correspond to what anyone today calls an exception, so let's avoid that word). Specifically: whether or not `.nan.addingProduct(0, .inf)` sets the invalid flag, and whether or not multiply and fused multiply-add results that fall into a 1/4-ulp window just below the smallest normal number raise the underflow flag. In practice, no users actually look at the flags, so this is a non-issue. My preference would be for the committee to define an IEEE 754-lite that removes flags entirely and thereby remove this issue, but that's unlikely to occur anytime soon for a variety of reasons.
All of these examples seem quite easy for the compiler to detect and offer fix-its for (basically, add an ampersand), and for users to discover and learn - so I'm not sure it's quite as impractical as you suggest. It would certainly be different, but possibly also better. As you say, it's the "obvious" solution.
NaN does not necessarily appear as a literal value in these expressions, so detecting them and offering a fix-it would be hard or impossible in some cases. It would at least need to be a runtime sanitizer, as well, and would depend on users hitting those cases in their tests. There is no trivial solution.
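As an illustration of the point that NaN need not appear as a literal: in the (hypothetical) normalization helper below, no NaN is written anywhere in the source, yet one is produced at runtime from `0/0` when the input sums to zero — nothing a syntactic fix-it could flag.

```swift
// Hypothetical example: NaN arises from data, not from any literal.
func normalized(_ weights: [Double]) -> [Double] {
    let total = weights.reduce(0, +)
    return weights.map { $0 / total }  // 0/0 yields NaN when total == 0
}

let ok = normalized([1, 3])     // [0.25, 0.75]
let bad = normalized([0, 0])    // [nan, nan] — only observable at runtime
```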
It would be a blunt solution, but wouldn’t it be possible to have an automatic source migrator convert all code that currently uses comparison operators on values of floating point types to use the & version of the comparison operator? Wouldn’t that approach have an identical semantic impact to that your current pitch has?
In a green field, I might make a case for option 3, using `&<`, `&==`, etc. Not with years of existing Swift code, though.
However, I want to re-up this illuminating message from @Joe_Groff. It assuaged a lot of the angst I felt about option 4, and might assuage it for others:
As I understand Joe’s sketch, this says that option 4 can secretly evolve to also become option 3 in disguise:
We get two operator families for “total” and “floatal” comparison. They’re distinguished by `Equatable.`/`Numeric.` namespaces instead of a `&` prefix, but they’re still distinct operators.
We can still specify which one we want to use in a particular spot in the code, applying a total ordering to `Float`s and not just `Comparable`s. Yes, `Equatable.==(x, y)` is not exactly pretty or concise, but it’s arguably clearer than `x == y` vs `x &== y`.
Most importantly, we are not creating a bizarre hole in the type system for floating point numbers — well, not in the long term, anyway. We’re just treating floatal and total comparison as a name collision, and using language features that disambiguate when names collide.
If the language is heading in that direction, I'm both totally and floatally fine with adopting this proposal now and letting the language evolve around it later.
i’m voting against this proposal just because of this sentence
I blame Joe for starting it.
As @xwu has pointed out, Kotlin does this. However, the details are different; in particular, in Kotlin:

- `NaN` is considered equal to itself
- `NaN` is considered greater than any other element, including `POSITIVE_INFINITY`
- `-0.0` is considered less than `0.0`

Suggest that the Kotlin convention is adopted rather than something Swift-specific, since there is a strong precedent from a similar language and many people use both.
This proposal has NaN equal to itself. Obviously you either order NaN less than -inf or greater than +inf; the choice is essentially arbitrary, so we could do either.
Making -0 < +0 is a worse choice in most regards. It can’t be implemented as efficiently, and it introduces a divergence from the IEEE ordering for finite values; they would agree otherwise. The only good thing about it is that it means that (up to encodings and NaN payloads) equality is substitutability, which sounds compelling, but that also holds for non-exceptional arithmetic results if you choose -0 == +0.
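For reference, the IEEE 754 `totalOrder` predicate (which does place -0 below +0) is already exposed in Swift via `isTotallyOrdered(belowOrEqualTo:)`, so the divergence from the ordinary operators can be observed directly:

```swift
// IEEE 754 totalOrder, via the standard library, vs. the ordinary operators.
let totalBelow = (-0.0).isTotallyOrdered(belowOrEqualTo: 0.0)  // true
let totalAbove = (0.0).isTotallyOrdered(belowOrEqualTo: -0.0)  // false
let opLess = -0.0 < 0.0                                        // false
let opEqual = -0.0 == 0.0                                      // true
```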
I'd like to point out one more option. Rename `FloatingPoint` to `FloatingPoint754`, leave `Float` and `Double` as is, and create a new set of types with a new name which replaces the idea of NaN with optionals, etc. Then we encourage the use of the new types over `Float`/`Double` unless 754 behavior is required.
Edit: Basically version 2, but instead of overhauling things, we just move them to the side and provide a new option.
Doing this in a principled fashion would, to my mind, require giving integers the same treatment; one would rework the integer protocols as well, define all arithmetic on optionals, make division by zero and integer overflow produce `nil` instead of trapping, etc., and then you'd be able to actually fuse NaN with `Optional`.
It's an interesting project, and one worth prototyping, but wildly out of scope and something that I personally would want to see baked for a couple years before it was considered as a replacement for what we have now.
I might agree if we were starting from scratch, but I would argue at this point we should just match what Swift does for Integers now, i.e. trap on divide by zero.
That said, if we really do want to overhaul integers as well, then I think the cleanest way to do it is to define `Throws!` and `Throws?`, which allow a function to throw but act like there is an implicit `try!` or `try?` in front of the call to it. What this allows is for us to continue to have the current trapping behavior by default (i.e. no source breakage), but also lets us coax out the optional behavior when desired: e.g. `let c = b/a` would trap when `a` is zero, but `let c = try? b/a` would instead have an optional `c`, because `try?` overrides the implicit `try!`.
The only problem I predict with that approach is that this might basically mean nothing but "we'll never actually try this out" - because in three or four years, people will say "it's an interesting project, but, you know, our system may have its quirks, but it's there and we can't break compatibility anymore".
I really wish Swift was a little bit bolder sometimes... but it's rather uncommon that an established language introduces avant-garde concepts (that might turn out to just not work), and that's something for experimental ecosystems.
Maybe it's worth spinning up a separate thread for dreaming about what math in Swift could look like without the chains of short-term pragmatism?
Speaking only for myself, I don’t think it would be unreasonable for the language to take significant breaking changes on something like a once-a-decade pace, given a reasonable migration and compatibility path. On the assumption that something like that might be possible, the next year or two would be the right timeline to begin prototyping such changes.
Also, if someone wanted to prototype their own library with different semantics on their own, there shouldn't be that much they depend on from the compiler or standard library to define their own number types—the existing literal protocols are all user-extensible and can have their default behavior overridden with scoped typealiases.
Does… does this mean that we can have the spaceship operator? Please? Figuring out ordering with 'a single operation' would be lovely.