Overall, appreciate the careful study of the existing protocols. Some points:
We don't expose this for binary floating-point types, and I'm not certain that it's necessary here either; the bias is mostly useful internally and, even when it's needed elsewhere for useful generic algorithms (our guiding star for what goes into a protocol), it can be ascertained without a dedicated API.
If we were to add it, we would want to consider this holistically for all the floating-point protocols. (Which is to say, for a DecimalFloatingPoint protocol built on top of Swift-as-it-is, my feedback would be to leave it out.)
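For what it's worth, the bias falls out of the fact that the unbiased exponent of 1.0 is zero, so the stored exponent bit pattern of 1.0 is exactly the bias. A minimal sketch against BinaryFloatingPoint as it exists today (the same trick would apply to a decimal protocol):

// The unbiased exponent of 1.0 is zero, so its stored exponent bit pattern
// is exactly the bias; no dedicated API is needed.
func exponentBias<T: BinaryFloatingPoint>(of _: T.Type) -> T.RawExponent {
    T(1).exponentBitPattern
}

print(exponentBias(of: Float.self))   // 127
print(exponentBias(of: Double.self))  // 1023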
Note that Swift offers no user-facing control over the floating-point rounding mode (i.e., over which of the two closest representations of a notionally infinite-precision result is returned from a computation); FloatingPointRoundingRule is for rounding to integers and isn't meant for this use (there are rounding-mode options that would never make sense for rounding to an integer).
If we were to add such an API, it would, as above, have to be considered holistically for binary and decimal floating-point.
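To make the distinction concrete, FloatingPointRoundingRule today governs only how a value is rounded to an integral value; the rounding of arithmetic results is always round-to-nearest-ties-to-even and is not user-controllable:

// FloatingPointRoundingRule selects how a value rounds to an integral value:
let x = 2.5
print(x.rounded(.toNearestOrEven))  // 2.0
print(x.rounded(.up))               // 3.0
print(x.rounded(.awayFromZero))     // 3.0

// By contrast, there is no user-facing switch for how this sum is rounded;
// it is always correctly rounded to the nearest Double, ties to even.
print(0.1 + 0.2)  // 0.30000000000000004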
Two issues here:
First, the bitPattern type shouldn't be RawSignificand, which is documented on BinaryFloatingPoint (and in your proposed design here) to be wide enough only for the significand bits—see, for example, Float80, where this makes a difference. Even if, in practice, it wouldn't be an issue for the few concrete types contemplated here, we don't want to burn into the design, as the first API here does, a requirement that RawSignificand be wide enough to represent all the bits of the entire value.
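Concretely (on x86, where Float80 is available):

#if arch(x86_64) || arch(i386)
// Float80 is 1 sign bit + 15 exponent bits + 64 explicit significand bits,
// 80 bits in all, so no bit pattern for the whole value fits in
// RawSignificand, which is UInt64.
print(Float80.significandBitCount)  // 63 (the explicit integer bit is not counted)
print(Float80.RawSignificand.self)  // UInt64
#endif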
We also don't have an API on BinaryFloatingPoint to initialize the whole value from a bit pattern; if it's broadly useful (it may be—I'd be open to considering it), we should evaluate it holistically as its own question, for all floating-point types, or at least for both BinaryFloatingPoint and DecimalFloatingPoint in parallel.
Second, the Boolean here—particularly as a property co-equal to, say, the floating-point sign—surfaces what is essentially an implementation detail at the level of the protocol. To top it off, it literally labels one of the two IEEE-approved significand encodings (binary integer and densely packed decimal) as the true encoding, with the other being the false encoding (a bit harsh, no?).
Where it matters for working with decimal floating-point values generically is specifically where the significand bit pattern is supplied as an argument or accessed as a property—namely, in the following APIs:
init(sign: FloatingPointSign, exponentBitPattern: RawExponent, significandBitPattern: RawSignificand)
var significandBitPattern: RawSignificand { get }
If—if—it's agreed that users being able to access and supply significand bit patterns encoded in either of these ways is important for all conforming types, then the APIs here could be most appropriately something like:
init(sign: FloatingPointSign, exponentBitPattern: RawExponent, significandBinaryIntegerBitPattern: RawSignificand)
var significandBinaryIntegerBitPattern: RawSignificand { get }
init(sign: FloatingPointSign, exponentBitPattern: RawExponent, significandDenselyPackedBitPattern: RawSignificand)
var significandDenselyPackedBitPattern: RawSignificand { get }
In this manner, the underlying storage would (as it should) be entirely an implementation detail of the type, and both encodings would be equally available to the user as needed.
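As a rough sketch of the resulting protocol surface, using the hypothetical names above (not a full design, just to show both encodings as co-equal requirements):

protocol DecimalFloatingPoint: FloatingPoint {
    associatedtype RawExponent: UnsignedInteger
    associatedtype RawSignificand: UnsignedInteger

    // Binary integer significand encoding.
    init(sign: FloatingPointSign,
         exponentBitPattern: RawExponent,
         significandBinaryIntegerBitPattern: RawSignificand)
    var significandBinaryIntegerBitPattern: RawSignificand { get }

    // Densely packed decimal significand encoding.
    init(sign: FloatingPointSign,
         exponentBitPattern: RawExponent,
         significandDenselyPackedBitPattern: RawSignificand)
    var significandDenselyPackedBitPattern: RawSignificand { get }
}

A conforming type could store either encoding (or something else entirely) and derive the other on demand.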
There are, from memory, a few IEEE-required operations, specified in the standard, that are useful for all decimal floating-point types and ought to be in this protocol; they mostly have to do with the fact that the same value has multiple equivalent representations (a cohort). I do not have the standard open in front of me at present; it is not an extremely onerous list, but it does belong here.
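If memory serves, the clause in question is the one with quantize and sameQuantum; as a hedged sketch, with hypothetical Swift spellings:

// Hypothetical spellings for IEEE 754's cohort-related decimal operations;
// to be checked against the standard.
protocol DecimalCohortOperations {
    /// Returns a value equal to `self` (where representable) whose exponent
    /// is that of `other`, selecting a particular member of the cohort.
    func quantized(to other: Self) -> Self

    /// True if `self` and `other` have the same exponent; two equal decimal
    /// values can have different quanta, and two unequal values the same.
    func sameQuantum(as other: Self) -> Bool
}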