Comparable and FloatingPoint types

I believe IEEE.== should be renamed to "&==", that .nan should equal .nan under Equatable.==, and similarly for "<", ">", "<=", and ">=".

This is because, whether it is desirable or not, to an almost exclusive degree, "==" in Swift means Equatable.==, and despite the name, Equatable actually means "substitutable in value".
In addition, I believe that even in IEEE, but certainly in Swift, .nan payloads are not semantically part of the value. There is thus only one .nan value, and by reflexivity it must be Equatable.== (substitutable with) itself, even if it is not mathematically .== or IEEE.== to itself.
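For concreteness, the reflexivity violation being described can be observed directly in today's Swift (a minimal demonstration using only the standard library):

```swift
let x = Double.nan

// Today, == on Double is IEEE equality, so NaN is unequal to itself:
print(x == x)           // false: Equatable's reflexivity requirement is violated

// Generic algorithms written against Equatable inherit the violation:
print([x].contains(x))  // false: the array "loses" its own element
```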

4 Likes

With the notable exception that "==" means IEEE floating-point equality for floating-point types.

So the question for this thread is: given that changing this now is a nonstarter, what design changes, if any, are best? It doesn't really advance the discussion about this design problem to restate that you'd prefer this constraint not to exist.

I don’t think this word has a place in Swift Evolution discussions. It is unhelpful, and objectively false. The mere fact that a discussion *has started* proves that any and every idea brought up for discussion on these forums *is* a “starter”.

If a person wishes to argue against a suggested change to the language, they should do so on merits. Attempting to shut down discussion by fiat or appeal to authority is not how the Evolution process operates.

5 Likes

Every discussion here is geared towards solving a particular design problem. The author who kicked off the discussion outlined the constraints of the design problem; what I'm reminding participants is that the aim here is to discuss design options within those constraints. It's fair to bring up other topics, but they belong in other threads.

I, too, wish that we were not so constrained in design options. And as many have said, we would have been less constrained if we were designing for Swift 1 or 2. But restating that doesn't advance the discussion and is unhelpful. Of course everything is a topic of conversation somewhere, but it's entirely fair to remind ourselves that certain things are, in this time and place, not within the bounds of this design problem.

"Now" being the key word in that sentence. Perhaps there will be a window of opportunity to do so later in a “breaking” release. The likelihood of that possibility has a direct bearing on what changes are wise to make right now. It appears as if there is some support for this idea of infrequent breaking releases that would allow for changes such as this.

It would be nice to have some sense of how willing the core team is to entertain (and perhaps even plan for) such releases. If they are absolutely unwilling to do so then you are correct and the point is moot. If they are willing to do so then the conversation changes significantly. Perhaps it would be best to make no immediate changes, but instead start exploring the prototype of a larger breaking change as @scanon mentioned upthread.

Personally, I want Swift to be the best language it can possibly be. Getting there is a process of continuous refinement and improvement. It can only happen if we occasionally have the opportunity to make improvements that require a nontrivial breaking change. The only alternative is to carry suboptimal baggage indefinitely into the future.

I don’t think breaking changes like this should be taken lightly. However, I think that all of these are extremely problematic (especially as a long-term final design):

  • allowing floating-point types to violate the reflexivity semantic requirement of Equatable and the strict total order semantic requirement of Comparable
  • allowing the behavior of an operator to differ between concrete and generic contexts
  • removing the conformances of floating-point types to Equatable and Comparable
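To make the first bullet concrete, here is how Double's current Comparable conformance fails to be a strict total order (verifiable in today's Swift):

```swift
let nan = Double.nan

// Every IEEE ordered comparison involving NaN is false, so trichotomy
// (exactly one of <, ==, > holds) fails for Comparable:
print(nan < 1.0)   // false
print(nan > 1.0)   // false
print(nan == 1.0)  // false

// Consequently, sorting an array containing NaN produces an
// unspecified ordering, since sort assumes a strict total order.
```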

This leads me to the firm conclusion that the best approach is to take advantage of the room provided in the IEEE spec to choose names for the required floating point comparison operations (with & prefixed operators being the obvious choice). I do not agree that providing IEEE behavior for the conventional comparison operators is more important than the problems it leads to (as listed above). I also do not agree that source stability is more important than preserving the ability to solve these kinds of design problems at the appropriate time as the language continues to evolve.

Would anyone disagree with this choice were it not for the issue of breaking changes? If people generally agree on this, then let’s try to find the right time to make this change.
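As a sketch of what the &-prefixed spelling might look like (the operator definition below is an illustrative assumption, not an accepted design):

```swift
// Hypothetical: keep IEEE 754 quiet equality available under &==,
// freeing == to satisfy Equatable's reflexivity requirement.
infix operator &==: ComparisonPrecedence

extension Double {
    static func &== (lhs: Double, rhs: Double) -> Bool {
        // Today's == on Double is already IEEE equality, so we can
        // delegate to it: NaN compares unequal to everything,
        // and -0.0 &== +0.0 remains true.
        lhs == rhs
    }
}
```

With this spelling, `1.0 &== 1.0` is true while `Double.nan &== Double.nan` is false, and the conventional `==` would be free to adopt reflexive semantics.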

8 Likes

Admittedly, I meant now in contradistinction to before, but it is an interesting thought to consider whether rare future breaks (à la Python 2 to 3) might be possible. Many languages do go through such transitions, but as we all know these have their own challenges.

I would hesitate to agree, for the reasons that @scanon has outlined. Specifically:

...despite floating-point's undeserved reputation for unpredictability, under minimal assumptions on the language model and a well-behaved compiler, it is possible to write floating-point algorithms that are fully portable across both languages and architectures. The difficulty of doing so with integer arithmetic is, surprisingly, significantly higher.

Nearly every language, and all commonly used languages, expose == as the spelling for IEEE floating-point equality; breaking from this is a drawback independent of source compatibility from prior versions of Swift, which must be weighed carefully if/when such a proposal comes up.

...And also:

The arithmetic is widely and efficiently implemented. Until quite recently, the highest-throughput option for computation on almost any platform was floating-point.

As a result, the existing ==, which can rely on hardware implementations on all platforms, is very fast. Comparison to NaN in the non-floating-point-math setting is a tiny subset of uses of ==, and making the default == that we promote to users a slower one is a performance hit that may or may not be justifiable (cf. "safe" indexing vs. trapping for array subscripts).

I understand that this is a drawback. But so is floating point types violating semantic constraints of Equatable and Comparable, allowing them to conform with different behavior than the concrete operator, or simply not having them conform. IMO there is a good solution if we are willing to do something different than other languages here and incur the breaking change. There is no good solution if we are not willing to do that. I want Swift to be the best language it can be so I am advocating for the best long-term solution that appears to be available.

Re: performance, this is certainly unfortunate. On the other hand, nobody is advocating taking the fast comparison away. Only that it not be used when it does not meet the semantic requirements of an operator which is clearly the case for Equatable and Comparable. And secondarily, that providing different semantics in a concrete context than a generic context, or in different generic contexts depending on constraints, is highly problematic.

Also, I can’t help but wonder if the total order comparison semantics could be made fast on Apple’s platforms if Apple decided to make that a priority. I don’t expect any public comment on that by Apple employees, but it does not seem beyond the realm of possibility. (Probably not as fast as IEEE, but certainly much faster than it is today.)

Certainly, if/when such a proposal comes up, we can discuss if it’s the best possible solution, but I would argue that any solution which breaks from almost all other languages, breaks source, and is less performant is not a good solution. We need to come to terms with the fact that there is no good solution.

I am beginning to come around to the idea of introducing a new type for general-purpose calculations, that simply wraps Double and provides intuitive behavior for comparison operators.

I would suggest naming this type Num to align with Int, and updating the documentation to prefer using Num instead of Double.

Then we can leave the existing floating-point types as-is, with broken conformances and all.
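A minimal sketch of what such a wrapper could look like (the name Num is taken from the suggestion above; the rest of the API is an assumption):

```swift
// Hypothetical NaN-free wrapper around Double. Because NaN is
// unrepresentable, the synthesized == is reflexive and < is a strict
// total order, so the Equatable/Comparable semantics actually hold.
struct Num: Equatable, Comparable {
    let value: Double

    init?(_ d: Double) {
        guard !d.isNaN else { return nil } // reject NaN at the boundary
        self.value = d
    }

    // Operations that could produce NaN return Optional instead:
    static func / (a: Num, b: Num) -> Num? { Num(a.value / b.value) }

    static func < (a: Num, b: Num) -> Bool { a.value < b.value }
}
```

For example, `Num(0)! / Num(0)!` yields nil rather than NaN, while `Num(1)! / Num(0)!` yields infinity, which remains representable.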

3 Likes

This can be answered without commenting on Apple's plans at all, but rather from basic hardware design principles: such a comparison operation can be made exactly as fast as IEEE 754 comparison on any CPU architecture; it would require a negligible amount of opcode space, and the physical implementation is no more complex.
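A software illustration of why this is cheap: the IEEE 754 totalOrder predicate reduces to ordinary unsigned integer comparison after a two-operation bit transform. (The helper below is a sketch, not standard library API; Swift's real entry point is `isTotallyOrdered(belowOrEqualTo:)`.)

```swift
// Maps a Double's bit pattern to a UInt64 key such that unsigned
// comparison of keys matches IEEE 754 totalOrder on the originals.
func totalOrderKey(_ x: Double) -> UInt64 {
    let bits = x.bitPattern
    let signBit: UInt64 = 0x8000_0000_0000_0000
    // Negative values (sign bit set): flip all bits so more-negative
    // sorts lower. Non-negative values: just set the sign bit.
    return (bits & signBit) != 0 ? ~bits : bits | signBit
}

// totalOrderKey(-1.0) < totalOrderKey(-0.0)  // true
// totalOrderKey(-0.0) < totalOrderKey(0.0)   // true
// totalOrderKey(1.0)  < totalOrderKey(.nan)  // true: NaN sorts after +inf
```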

5 Likes

But the download times for hardware updates are still terrible, no? ;-)
I don't think it's realistic that Swift alone will influence CPU design, but IMHO today's chips are fast enough to prefer correctness over speed — and as far as I can see, no one has suggested removing the IEEE operations, so those who need them for whatever reason could still use them.

3 Likes

A key part of this, to me, is that we guarantee this new type will never be NaN. It shouldn't even have a concept of NaN, but should instead use trapping and optionals as appropriate.

If we tell the compiler that it uses Double's NaN bit patterns to represent nil when it is Optional, then we should be able to keep full speed in most operations (with the exception of ==).

4 Likes

How many people actually know about, much less depend on, IEEE floating-point behavior anyway? It seems to me most people write code expecting == to be reflexive, so such a change would, on balance, eliminate more bugs than it causes.

4 Likes

I think this would need significant work in both the standard library and the compiler builtins to make this fast, but the benefits would be so worth it. I'm tired of chasing down NaNs just because they propagate silently with IEEE floats. If it were integrated into Optionals (which I feel like it certainly could be with little performance impact, since the test is pretty cheap and we would expect the NaN branch to almost never be taken), it would be natural to catch and handle the NaN right when it gets generated, instead of at the end of the algorithm.

2 Likes

I've thought about this before, and I would certainly be a fan, although I would like an "escape hatch" to let me write NaN-prone code. If I'm doing tons of linear algebra, I wouldn't want to litter my code with a bunch of force unwraps, conditional bindings, map calls, etc. to handle Optionals that I know wouldn't be nil.

1 Like

surely there is precedent for &== and &/ considering we have &<< and &>> for “hardware-natural” shifts

A minor tangent: &>> and &<< are not "hardware-natural" shifts. They are masking shifts, which happen to be natural for x86 scalars and arm64, but are unnatural for other platforms like arm32, which masks all shift counts to 8 bits, and even for some SSE instructions like PSRAD, which uses 64 bits for the shift count(?!)
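The masking behavior itself is easy to check (a small demonstration; the results follow from Swift's documented shift semantics):

```swift
let x: UInt8 = 1

// &<< masks the shift count to the type's bit width (9 % 8 == 1):
print(x &<< 9)  // 2

// The ordinary "smart shift" << instead defines overshift to yield 0:
print(x << 9)   // 0
```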

Not sure what the broader implications of this are, but while we’re brainstorming: any chance of throwing operators? If we used throws instead of optionals, then you could have a single try at the beginning of a multi-step computation with a single catch for the NaN.

seems like a pretty heavyweight solution for something that could be solved with optionals

okay, but in general we use & for useful yet “not mathematically correct” behaviors, like &+ which wraps instead of trapping. I think &/ and &== would be natural extensions of that, and we can make / and == do the “mathematically right” thing by default