Why isn't BitPattern an associated type of BinaryFloatingPoint?

Currently, Float and Double each individually declare the bitPattern property and the init(bitPattern:) initializer (taking UInt32 and UInt64, respectively).
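For reference, the concrete APIs round-trip like this:

```swift
let x: Double = 1.0
let bits: UInt64 = x.bitPattern   // 0x3FF0000000000000, the IEEE 754 encoding of 1.0
let y = Double(bitPattern: bits)  // reinterprets the bits as a Double
assert(x == y)
```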

How come BinaryFloatingPoint doesn’t have an associated type BitPattern, with the above property and initializer as requirements, i.e. something like this:

public protocol BinaryFloatingPoint : ExpressibleByFloatLiteral, FloatingPoint {
    /* ... */
    associatedtype BitPattern: FixedWidthInteger, UnsignedInteger
    var bitPattern: Self.BitPattern { get }
    init(bitPattern: Self.BitPattern)
    /* ... */
}

?

Note that BinaryFloatingPoint already has, e.g.:

public init(sign: FloatingPointSign, exponentBitPattern: Self.RawExponent, significandBitPattern: Self.RawSignificand)

Edit:
Ah, of course, Float80 doesn’t have the bitPattern property or the init(bitPattern:) initializer … sigh:

// Workaround for Float80-inflicted limitations of BinaryFloatingPoint:
protocol UsableBinaryFloatingPoint : BinaryFloatingPoint {
    associatedtype BitPattern: FixedWidthInteger, UnsignedInteger
    var bitPattern: BitPattern { get }
    init(bitPattern: BitPattern)
}
extension Float: UsableBinaryFloatingPoint { }
extension Double: UsableBinaryFloatingPoint { }
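With that workaround in place, generic bit-level utilities over Float and Double become possible. A sketch (signBitToggled is a hypothetical helper, not a standard library API):

```swift
// Assumes the UsableBinaryFloatingPoint protocol declared above.
extension UsableBinaryFloatingPoint {
    /// The value with its sign bit toggled, computed via the raw bit pattern.
    /// (Hypothetical helper, only to show the protocol in use.)
    var signBitToggled: Self {
        // The sign bit is the most significant bit of the pattern.
        let signMask = BitPattern(1) << (BitPattern.bitWidth - 1)
        return Self(bitPattern: bitPattern ^ signMask)
    }
}

// (1.0 as Double).signBitToggled == -1.0
// (-2.5 as Float).signBitToggled == 2.5
```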

You seem to have figured it out already, but (at present) there’s no guarantee that an integer type capable of representing the bit pattern actually exists for an arbitrary type conforming to BinaryFloatingPoint.

There’s also a more general problem: IEEE 754 does not specify the encoding of non-interchange formats, so the fields might not be laid out contiguously, or might not have a fixed size (e.g. an arbitrary-precision floating-point type), and even when they are contiguous and fixed-size, some bit patterns might be invalid.

The sign, exponentBitPattern, and significandBitPattern properties are a generic notion of bitPattern, suitable for use with any BinaryFloatingPoint type. Specific types can opt into a single bitPattern with a defined layout, as Float and Double do.
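A sketch of what that generic notion buys you: any BinaryFloatingPoint value can be decomposed into those three fields and reassembled, with no assumption about a contiguous bit layout:

```swift
// Round-trips a value through its decomposed sign / exponent / significand
// bit patterns, generically over any BinaryFloatingPoint type.
func roundTripped<T: BinaryFloatingPoint>(_ x: T) -> T {
    T(sign: x.sign,
      exponentBitPattern: x.exponentBitPattern,
      significandBitPattern: x.significandBitPattern)
}

// roundTripped(x) == x for any non-NaN x.
```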
