Generalizing floating point math operators over the BinaryFloatingPoint protocol

Hi,

I recently started porting a project to Swift for TensorFlow, but while adding support for sampling from a Beta distribution, I ran into an issue that I'm not sure how best to handle.

I can create a Beta distribution over Float or Double, for example, but when I try to generalize it over BinaryFloatingPoint, I cannot. The reason is that I need to use the log function when sampling, and it's not defined over BinaryFloatingPoint, but over Float, Double, and Float80 separately.
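To illustrate, a stripped-down sketch of the kind of code that fails (the type and member names here are made up for the example):

import Foundation // `log` only has Float/Double/Float80 overloads

struct Beta<Scalar: BinaryFloatingPoint> {
  let alpha: Scalar
  let beta: Scalar

  func logDensityTerm(_ x: Scalar) -> Scalar {
    // Does not compile: no `log` overload accepts a generic Scalar.
    return (alpha - 1) * log(x)
  }
}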

So, why are floating point operations not generalized over the BinaryFloatingPoint protocol, and what is a good/standard way to deal with this kind of situation?

Thank you,
Anthony

Because we have no way of evaluating those functions for a generic floating point type. I looked into this a while ago, and software implementations of “advanced” math functions generally rely on magic bit-twiddling that’s specific to that particular floating-point bit width. If you don’t know the layout, you can’t compute the function.

Adding them as un-defaulted requirements to the protocol would also make it unreasonably difficult to conform custom types, especially if you only care about +, -, *, /, and squareRoot().

You can always just define your own protocol that inherits from BinaryFloatingPoint and conform Float and Double to it using the Glibc/Darwin implementations. I think that’s the most common solution.

Serious question: do we care how hard it is to add a new BinaryFloatingPoint type?


Serious question: do we care how hard it is to add a new BinaryFloatingPoint type?

Yes, I think we do care. That said, we also want a better story for generic math functions. Personally, I think that there's a missing protocol here, which should be relatively straightforward to define, though the limitations of the system math libraries on some platforms are a minor complication.

Addressing the actual question here:

So, why are floating point operations not generalized over the BinaryFloatingPoint protocol, and what is a good/standard way to deal with this kind of situation?

Because BinaryFloatingPoint models the requirements of IEEE 754 binary arithmetic, which doesn't include those operations. Defining a protocol with the operations you need and retroactively conforming Float and Double is an adequate short-term solution, but we should have that for you in the standard library at some future point.


Julia doesn't avoid this problem in any particularly clever way; instead, they define default implementations of math operations on Real (the Julia equivalent of FloatingPoint) that simply convert to Float64 first. Someone implementing a new floating point type would then have to overload those fundamental math operations if they wanted to provide additional accuracy or performance, but the fact that fallbacks exist means that people can define higher-level operations generically.
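In Swift terms, that fallback strategy would look roughly like this (the protocol and its members are hypothetical, just to illustrate the idea):

#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

protocol RealLike: BinaryFloatingPoint {
  static func _log(_ x: Self) -> Self
}

extension RealLike {
  // Fallback: round-trip through Double, trading accuracy for generality.
  static func _log(_ x: Self) -> Self {
    return Self(log(Double(x)))
  }
}

extension Double: RealLike {
  static func _log(_ x: Double) -> Double { return log(x) } // exact overload
}
extension Float: RealLike {}  // silently uses the Double-based fallback
// A type wider than Double that conforms here would silently lose precision.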

Yeah, we've deliberately avoided that approach because it leaves you with default implementations that are really quite bad for most types someone might implement (and no compiler error or warning to clue the user in that they're bad).

Thanks for all the responses!

Thanks, this does indeed seem to be the best temporary solution, but I agree that this should probably be part of the standard library so that general abstractions over floating point numbers could be built. One possible reference may be the Spire library in Scala. It's a bit more general than what we're talking about here, but they have a nice way of dealing with number types.

I didn't know that, but I agree with @scanon that this may lead to unexpected behavior and would at least warrant a compiler warning. I would prefer a solution where there are several well-defined number types, each with their supported operations.

@Chris_Lattner3 You mentioned in an old thread that type disjunctions (i.e., "or"-types) should not be supported in Swift. May I ask why? They could be one possible solution to this problem.

Like this, right?

#if canImport(Darwin)
import Darwin   // provides log, sin, … for Float, Double, and Float80
#else
import Glibc
#endif

protocol BinaryFloatingPointWithGenericMath: BinaryFloatingPoint {
  static func _log(_ x: Self) -> Self
  static func _sin(_ x: Self) -> Self
  // ...
}

#if arch(i386) || arch(x86_64) // Float80 only exists on x86 targets
extension Float80: BinaryFloatingPointWithGenericMath {
  static func _log(_ x: Float80) -> Float80 { return log(x) }
  static func _sin(_ x: Float80) -> Float80 { return sin(x) }
  // ...
}
#endif

extension Double: BinaryFloatingPointWithGenericMath {
  static func _log(_ x: Double) -> Double { return log(x) }
  static func _sin(_ x: Double) -> Double { return sin(x) }
  // ...
}

extension Float: BinaryFloatingPointWithGenericMath {
  static func _log(_ x: Float) -> Float { return log(x) }
  static func _sin(_ x: Float) -> Float { return sin(x) }
  // ...
}

// Generic wrappers that dispatch to the per-type C library implementations.
func log<T: BinaryFloatingPointWithGenericMath>(_ x: T) -> T {
  return T._log(x)
}
func sin<T: BinaryFloatingPointWithGenericMath>(_ x: T) -> T {
  return T._sin(x)
}
// ...
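With that in place, generic code constrained to BinaryFloatingPointWithGenericMath can call log and sin directly. For example (a quick sketch with a made-up helper):

func logit<T: BinaryFloatingPointWithGenericMath>(_ p: T) -> T {
  return log(p) - log(1 - p)
}

print(logit(Float(0.25)))     // ≈ -1.0986
print(logit(0.25 as Double))  // ≈ -1.0986122886681098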