I tried but couldn't find any previous discussion about this, so:
BinaryInteger types have this method:
func signum() -> Self
(which returns -1 if this value is negative and 1 if it’s positive; otherwise, 0.)
Is there a reason why this method shouldn't be available on floating point types too?
(Note that it's not the same as FloatingPoint's existing sign property.)
I guess things like -0 and +0, NaNs with the sign bit set (or unset), infinities, etc. complicate it a bit, but what if it worked like the free function sign(x) that we get when importing simd?
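A minimal sketch of what I mean, written as a FloatingPoint extension. This assumes sign(x)-style semantics where both ±0 and NaN map to 0 (the NaN-to-0 behavior is my assumption about how simd's sign would carry over, not something the standard library defines):

```swift
extension FloatingPoint {
    /// Hypothetical signum: -1 for negative values, +1 for positive
    /// values (including infinity), and 0 for ±0 and NaN — mirroring
    /// a sign(x)-style convention (NaN → 0 is an assumption here).
    func signum() -> Self {
        if self > 0 { return 1 }
        if self < 0 { return -1 }
        return 0  // ±0 and NaN both land here (NaN compares false)
    }
}

// Usage:
(-3.5).signum()            // -1
(-0.0).signum()            // 0
Double.nan.signum()        // 0
Double.infinity.signum()   // 1
```

Because NaN compares false against everything, it falls through both comparisons and returns 0, which also neatly handles the signed-zero cases.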
I didn't add it when I wrote the FloatingPoint protocols because:
(a) I've never found it to be very useful.
(b) It's not an IEEE 754 required operation.
(c) It's easily confused with the sign property, which is a required operation and has distinct semantics.
I hadn't considered this, but it's also trivially implementable as an extension, and there's not much optimization opportunity. So it seems like a tough sell to me, unless there's a really compelling use.
It exists for simd because it's common in shader languages and somewhat more useful for vector programming than for scalar.