What's `FloatingPointSign` used for?

I only learned of `FloatingPointSign` from reading Tiny pitch: FloatingPointSign.negate(). I have no idea what it's used for.

Isn't `x > 0` or `x < 0` all you need?

Can anyone please give me one or two examples of what this enum is used for?

@Nevin ?


It’s the type of the `FloatingPoint.sign` property, and is also used in the manual construction of new floating-point values, as a parameter to `FloatingPoint.init(sign:exponent:significand:)`.
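As a quick sketch of that second use, here is the standard initializer building values directly from a sign, a binary exponent, and a significand:

```swift
// Construct Double values from their IEEE 754 components.
// value = (sign) significand × 2^exponent
let positive = Double(sign: .plus, exponent: 1, significand: 1.5)   // 1.5 × 2 = 3.0
let negative = Double(sign: .minus, exponent: 1, significand: 1.5)  // -3.0

print(positive, negative)        // 3.0 -3.0
print(positive.sign, negative.sign)
```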

I’m sure there are edge cases where the `sign` property gives the right answer but a naive comparison against 0 would not. One that comes to mind is `-Float.nan`, which is neither less than, equal to, nor greater than 0.
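That edge case is easy to see directly. All three comparisons against 0 are false for a NaN, yet `sign` still reports the sign bit:

```swift
// NaN compares as unordered: every comparison with 0 is false.
let x = -Float.nan
print(x < 0, x == 0, x > 0)   // false false false

// But the sign bit is still there, and `sign` exposes it.
print(x.sign)                 // minus
```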


It basically binds the required IEEE 754 isSignMinus operation, which is supposed to give you a way to extract the sign bit that works for zeros and NaNs, where `x < 0` would not.
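The zero case works the same way: `-0.0` compares equal to `0.0`, so `< 0` can never detect it, but the sign bit survives and `sign` reads it:

```swift
// Negative zero compares equal to positive zero...
let negativeZero = -0.0
print(negativeZero == 0.0)    // true
print(negativeZero < 0)       // false

// ...but its sign bit is set, which `sign` reports.
print(negativeZero.sign)      // minus
print((0.0).sign)             // plus
```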


Given a chance to do it over, would you have preferred FloatingPoint.isSignMinus instead of FloatingPoint.sign?

isNegative, no?

isNegative could be misleading, since the sign bit can be set even when the value is zero (`-0.0`) or when you are working with a NaN.


I personally don't see a reason why `FloatingPoint.isSignMinus` would be a better name than `FloatingPoint.sign`. Plenty of Swift's FloatingPoint APIs use different names than the IEEE 754 standard. For example, instead of minNumMag(_:_:), Swift uses `FloatingPoint.minimumMagnitude(_:_:)`, which is much clearer in my opinion. Instead of roundToIntegralTowardPositive(_:), Swift uses `value.rounded(.up)`, which is also clearer.

`value.sign == .plus` is similarly much clearer than `!value.isSignMinus`.
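For comparison, here are those renamed APIs side by side, a small sketch of why the Swift spellings read more naturally than their IEEE 754 counterparts:

```swift
// IEEE 754 minNumMag → Swift minimumMagnitude: smaller absolute value wins.
print(Double.minimumMagnitude(-2.0, 1.0))   // 1.0

// IEEE 754 roundToIntegralTowardPositive → Swift rounded(.up).
print((2.4).rounded(.up))                   // 3.0

// IEEE 754 isSignMinus → Swift sign.
print((-0.0).sign == .plus)                 // false
print((3.0).sign == .plus)                  // true
```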

This distinction can cause subtle bugs. From the docs:

Don’t use this property to check whether a floating point value is negative. For a value x, the comparison x.sign == .minus is not necessarily the same as x < 0. In particular, x.sign == .minus if x is -0, and while x < 0 is always false if x is NaN, x.sign could be either .plus or .minus.
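The warning in the docs can be demonstrated directly. `sign == .minus` and `< 0` disagree exactly for `-0.0` and for NaNs:

```swift
// For -0.0: sign is .minus, but the value is not less than 0.
let negZero = -0.0
print(negZero.sign == .minus)      // true
print(negZero < 0)                 // false

// For NaN: `< 0` is always false, but the sign bit can be either.
// (Negating a NaN flips its sign bit, so -Double.nan has sign .minus.)
let negNaN = -Double.nan
print(negNaN.sign == .minus)       // true
print(negNaN < 0)                  // false
```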