Tiny pitch: FloatingPointSign.negate()


FloatingPointSign is an enum with two cases: .plus and .minus.


When working with a FloatingPointSign, one sometimes wants to flip it to the opposite sign.

Proposed solution

Add a mutating negate() method to FloatingPointSign, as well as a non-mutating negated().

Detailed design

extension FloatingPointSign {
  mutating func negate() {
    self = self.negated()
  }

  func negated() -> FloatingPointSign {
    switch self {
    case .plus: return .minus
    case .minus: return .plus
    }
  }
}
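A quick usage sketch (repeating the pitched extension so the snippet is self-contained, since these methods don’t exist in the standard library yet):

```swift
extension FloatingPointSign {
  mutating func negate() {
    self = self.negated()
  }

  func negated() -> FloatingPointSign {
    switch self {
    case .plus: return .minus
    case .minus: return .plus
    }
  }
}

var sign = FloatingPointSign.plus
sign.negate()                  // sign is now .minus
let flipped = sign.negated()   // .plus; `sign` itself is unchanged
```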

Source compatibility


ABI / API / Alternatives / Future directions



toggle would seem more natural, to correspond to the Bool method. There's no toggled, as that's taken care of by !. You could use unary minus in the same way for this.


As far as naming is concerned, one could say “negate the number”, but it seems like “toggle the sign” reads more naturally than “negate the sign”.

I don’t think I’ve ever heard “toggle the sign”. “Flip” it, “invert” it, or even “negate” it, sure.


Ah yes, “flip” is much better. I was trying to say that “negate the sign” is not so common.

"invert the sign" sounds better to me :man_shrugging:

But I agree that "negate the sign" sounds strange.

"Negate the sign" sounds like it means "make the sign negative", which would just be an effective no-op if the sign was already negative.


I still don’t like “negate” for FloatingPointSign, but I don’t think it means “making something negative”, e.g. FloatingPoint.negate().

It can be argued that FloatingPointSign.negate() is named for consistency with FloatingPoint.negate(). So it’s a tradeoff between fluency and consistency.

Could the bitwise NOT operator be used, to invert the sign bit?

extension FloatingPointSign {
  public static prefix func ~ (_ rhs: Self) -> Self {
    (rhs == .minus) ? .plus : .minus
  }
}

var sign = FloatingPointSign.minus
sign = ~sign

Should this be pitched for Swift Numerics, rather than the standard library?

Can you give an example of how you actually intend to use this API?


Sure, suppose you’re keeping track of a sign with a variable outside a loop, and conditionally negating it within the loop as necessary.

It’s nicer to write:

if foo { sign.negate() }

rather than:

if foo { sign = (sign == .plus) ? .minus : .plus }

or the much longer equivalent switch.
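For comparison, that longer switch form would read something like this (`foo` stands in for whatever the real condition is):

```swift
// The switch-based equivalent of `if foo { sign.negate() }`,
// written out in full.
var sign = FloatingPointSign.plus
let foo = true

if foo {
  switch sign {
  case .plus:  sign = .minus
  case .minus: sign = .plus
  }
}
// sign is now .minus
```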

Usually in this situation you have an actual value and not a sign, though, so you would use a = -a.

You’re building the value with one of the initializers that takes sign, exponent, and significand.

The exponent and significand are constructed within the loop, and the sign tells whether the new term gets added to or subtracted from the accumulated result.
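Concretely, that pattern might look like this (using Double; the exponent and significand here are hypothetical placeholder values standing in for whatever the loop computes):

```swift
// A stored sign drives the initializer directly.
let sign = FloatingPointSign.plus
let exponent = -2        // placeholder per-iteration value
let significand = 1.5    // placeholder per-iteration value
let term = Double(sign: sign, exponent: exponent, significand: significand)
// term == 0.375, i.e. 1.5 × 2⁻²; with sign == .minus it would be -0.375
```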

Negate the significand.

Sorry, I’m going to need to see an actual example.

(FWIW, I think making sign a formal enum was basically a doofy mistake, and I’m wary of building up more API around it.)


Say the sign needs to be remembered across loop iterations, because it affects all subsequent terms.

If you wanted to negate the significand, then you’d need to store a var isNegative: Bool outside the loop, and have two conditional tests:

if isNegative { newTerm.negate() }

if foo { isNegative.toggle() }

instead of just one.

Be that as it may, unless a breaking change to remove FloatingPointSign is in order, the type exists.

Swift has chosen the word negate to mean “switch the sign of”, as exemplified by the SignedNumeric protocol and all its conforming integer and floating-point types.

The FloatingPointSign type embodies the distilled essence of signedness, and seeing as we must work with that type, it stands to reason that the type should be useful to work with.

The one meaningful operation on a sign is to negate it.

If we acknowledge that negating a sign is useful (cf. SignedNumeric), then it follows that the type which represents a sign should have that operation available.


Sure, “negate” doesn’t mean (in this context) “make something negative” but it sure can plausibly sound like it means that. I certainly agree that it is confusable enough to be best avoided.

I would not attempt to reach for “consistency” in naming here with floating-point values. There’s no negated as the non-mutating counterpart because it’s spelled with prefix -, and I don’t think we’d want to use an operator for FloatingPointSign. That, and the general feeling you and others have mentioned that “negate the sign” just sounds weird, are pretty good hints I think that naming manipulation of a sign in the same way as manipulation of a number isn’t the way to go.

Instinctively, I do like “invert” better, but it could be confused with other mathematical operations too. “Flipping” the sign or “switching” the sign are perfectly good terms, but may seem a little weird without the word “sign” next to it. Ultimately, “toggle” seems the best balance of not confusable with an operation it’s not and already precedented in the language.

Because of the way init(sign:exponent:significand:) has been documented, folks are generally unaware that you can create a negated value from another by negating the significand without changing the sign parameter.
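For example, with Double (the sign parameter is carried over unchanged):

```swift
let x = 1.5
// Passing the negated significand negates the result, even though
// the sign argument is x's own (positive) sign.
let y = Double(sign: x.sign, exponent: x.exponent, significand: -x.significand)
// y == -1.5
```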

In all other cases, too, where I’ve had to deal with changing sign, I’ve been better served by avoiding operations on FloatingPointSign as much as possible. I agree with @scanon that a concrete example would be helpful. It would be useful, for instance, to know the broader use case you have in mind where the sign specifically needs to be remembered rather than the original floating-point value across loop iterations. My experiences with this type lead me to the opinion that, even though it semantically represents signedness, making it “useful to work with” by adding more APIs—even if they are meaningful ones—is likely to be worse than the status quo which steers users more frequently towards alternatives.

However, if you have a concrete scenario where using operations on FloatingPointSign is really the way to go, it would be useful to see. I am just quite convinced that there exist virtually none that would not be better off rewritten to avoid it.

Say you have a recursive formula along the lines of:

f(x) = p(x) + (-1)^q(x) · f(g(x))

where p, q, and g are easy to calculate.

In order to find f(x), you could write a recursive function, but for performance reasons you might prefer to implement it with an imperative loop.

The loop accumulates p(x) values and updates x to g(x), until some stopping condition is reached. But it needs to know whether to add or subtract each p(x), and that’s not as simple as looking at the parity of q(x).

If you expand out the formula, it becomes:

f(x) = p(x) + (-1)^q(x) · ( p(g(x)) + (-1)^q(g(x)) · ( … ) )

And we see that the sign applied to any particular value of p is the combined parity of all previous q values, not just the parity of the most recent value of q.

So you need to keep a sign variable outside the loop, and the parity of q(x) tells you whether or not to negate that stored sign value on each iteration.

If p(x) itself is initialized from a sign, exponent, and significand, then it makes sense to use the stored sign directly, rather than introduce an extraneous conditional test to negate p (or its significand).
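A sketch of such a loop (p, q, and g are hypothetical placeholders; the point is the sign variable that persists across iterations and is flipped by the parity of q):

```swift
func p(_ x: Double) -> Double { 1.0 / (x + 1.0) }  // placeholder
func q(_ x: Double) -> Int    { Int(x) }           // placeholder
func g(_ x: Double) -> Double { x + 1.0 }          // placeholder

func f(_ x0: Double, iterations: Int) -> Double {
  var x = x0
  var sign = FloatingPointSign.plus  // remembered across iterations
  var result = 0.0
  for _ in 0..<iterations {
    let term = p(x)
    result += (sign == .plus) ? term : -term
    // Odd parity of q(x) flips the sign of all subsequent terms;
    // this is the spot where a sign.negate() would read nicely.
    if q(x) % 2 != 0 {
      sign = (sign == .plus) ? .minus : .plus
    }
    x = g(x)
  }
  return result
}
```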

• • •

That’s a separate issue, about the (lack of) clarity of that initializer.

However, it’s worth noting that if you’re using a raw significand, then you can’t simply negate the significand.

Right, but raw significands generally do not come up in the sort of use case that you're describing (for that matter, it's pretty rare to be working with separate exponents and significands in that sort of scenario, though not unheard of).

I don't understand this; you would usually reverse the order of accumulation in the imperative loop. This doesn't require tracking the sign, and doesn't require explicit recursion, and usually has better numerical stability, since when you're doing this sort of summation it's generally with a series that eventually decays.

As I said, I'm not really opposed to adding this API, but its actual use seems extremely narrow to me.

At the risk of derailing this thread…how would you reverse the order?

You don’t know the next x value until you calculate g(x), so you’d have to allocate an array of [x, g(x), g(g(x)), …], and even then you wouldn’t necessarily know how far to go since the stopping condition might depend on p(x).

• • •

Frankly I’m surprised to see so much pushback against this. Or indeed any pushback at all, aside from bikeshedding.

It seemed to me like an obvious missing API.

Ok, I misinterpreted what you wrote, but honestly that only makes me think this is more niche, because I’ve never once had to evaluate an expression like that.

The reason for my pushback is that I’m wary of adding API to manipulate signs like this. Manipulating the sign of a floating-point number as an abstract thing, rather than just using negation, copy sign, and absolute value on the number itself, is uncommon, and it feels like an attractive nuisance that will lead people into overly convoluted code that repeatedly takes apart and reassembles floating-point representations instead of just operating on the numbers themselves.

I’m not opposed to it, but we’re 20 posts in and you haven’t yet posted a code example showing the use case, just a handwavy “it would be good when you have a weird expression like this”. What is the actual motivating example?