Rejection of Negative Bases: Bug or Deliberate Design?

I just switched some code over from tgmath.h to SwiftNumerics.

When doing so, I was surprised by a cascade of test failures that could all be traced back to pow(Self, Self)’s outright rejection of all negative bases:

XCTAssertEqual(pow(-3.0, 2.0), 9) // Failure: .nan

But looking at the source code, I cannot really tell if it is an accident or a deliberate design decision, because (1) the rejection is explicit and (2) there is a separate function for integral exponents. The source looks like this:

@_transparent public static func pow(_ x: Float, _ y: Float) -> Float {
  guard x >= 0 else { return .nan } // [1]
  return libm_powf(x, y)
}

@_transparent public static func pow(_ x: Float, _ n: Int) -> Float { // [2]
  // [...]
}

I worked around it on the client side by wrapping it like this:

@inlinable public static func ↑ (base: Self, exponent: Self) -> Self {
  if base >= 0 {  // SwiftNumerics refuses to do negatives.
    return Self.pow(base, exponent)
  } else if let integer = Int(exactly: exponent) {
    return Self.pow(base, integer)
  } else {
    // Allow SwiftNumerics to decide on the error:
    return Self.pow(base, exponent)
  }
}
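For anyone wanting to copy that: here is a self‐contained sketch of the scaffolding I have around it. The precedence group name and the BinaryFloatingPoint constraint (which Int(exactly:) needs) are my own choices, not anything prescribed by SwiftNumerics:

import Numerics

precedencegroup ExponentiationPrecedence {
  associativity: right  // so that a ↑ b ↑ c parses as a ↑ (b ↑ c)
  higherThan: MultiplicationPrecedence
}
infix operator ↑: ExponentiationPrecedence

extension Real where Self: BinaryFloatingPoint {
  public static func ↑ (base: Self, exponent: Self) -> Self {
    if base >= 0 {
      return pow(base, exponent)
    } else if let integer = Int(exactly: exponent) {
      return pow(base, integer)
    } else {
      return pow(base, exponent) // let SwiftNumerics decide on the error
    }
  }
}

let a = -3.0 ↑ 2.0 // 9, because the exponent is exactly an integer
let b = -3.0 ↑ 2.5 // .nan, non‐integral exponent with a negative base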

So now I’m basically asking if a pull request to change it to the following would be welcome, or if it is intentionally not supposed to work that way for some reason:

@_transparent public static func pow(_ x: Float, _ y: Float) -> Float {
  guard x >= 0 else {
    guard let integer = Int(exactly: y) else { return .nan }
    return pow(x, integer)
  }
  return libm_powf(x, y)
}
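In terms of observable behavior, the only difference would be for negative bases whose exponents convert exactly to Int. A sketch of the intended semantics, assuming the change above:

Float.pow(-3.0, 2.0) // 9 instead of .nan; Int(exactly: 2.0) succeeds
Float.pow(-3.0, 2.5) // still .nan; Int(exactly: 2.5) fails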

@scanon

Deliberate. This is as proposed in SE-0246. There are some notes here.


Interesting. Seems reasonable.

Since my wrapper needs to successfully handle everything the C function had in the past, this comment of yours scared me:

What is pow(-243, 0.2)?

I had written the workaround based on the guard and the API, and had neglected to put my math hat on. Your comment made me realize even the workaround didn’t actually handle everything in the mathematical definition. Not knowing exactly what the semantics of the C function were, I was suddenly worried I’d broken more things without realizing it.

So for anyone else who stumbles across this and also wonders what other differences might be lying in wait, I found this in the C documentation of pow:

For finite values of x < 0, and finite non‐integer values of y, a domain error shall occur and either a NaN (if representable), or an implementation‐defined value shall be returned.

That means that things like pow(-243, 0.2) never worked before—which I also verified with an experiment. So the above wrapper should be sufficient.
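The experiment was nothing fancier than calling the C function directly through Foundation; something like this:

import Foundation

pow(-243.0, 0.2) // nan; domain error per the C standard
pow(-243.0, 3.0) // -14348907; integral exponents always worked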

But I won’t be held liable if you stake your life on it. I was too lazy to thoroughly think through the entire long list of corner cases.


In C, pow operates on double, and 0.2 is not exactly representable as a double, so this isn't an issue.

In languages that allow generics and overloading, we need to consider decimal types, where 0.2 will be exactly representable =)
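Foundation's Decimal is one concrete case: it stores a base-10 significand, so 0.2 is exact there, while the closest Double is slightly more than 1/5:

import Foundation

let exact = Decimal(string: "0.2")! // exactly 1/5 (significand 2, exponent -1)
let nearby = 0.2 as Double          // 0.2000000000000000111…, not exactly 1/5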

So your comment applies to every single x < 0 and y ∉ Z argument pair, not just the ones from the example, right?

My (very rusty, somewhat hesitant) reasoning was this: every non‐integral rational number that is exactly representable in base 2 has a denominator that is a power of 2, and at least 2, since the number is non‐integral. Placed in the exponent position, such a number represents a root whose degree is that power of 2. Since those powers of 2 are necessarily even, the degree of the root must be even, and thus it can never operate on a negative radicand without leaving the territory of real numbers. That means no such argument pair exists for x < 0 and y ∉ Z where the arguments and return value are all exactly representable in base 2.
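Concretely, with the same C pow as before:

import Foundation

// 0.25 == 1/4 is exactly representable in binary; x^(1/4) is a
// fourth (even-degree) root, so there is no real result for x < 0:
pow(-16.0, 0.25) // nan

// 0.2 == 1/5 is not exactly representable; pow never sees the odd
// fifth root, only a nearby dyadic rational:
pow(-243.0, 0.2) // nan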


Right, which is why this is a problem that really only comes up when you’re designing with consideration for radix-10 types.