The following seems to be true for all the standard floating-point types (a quick check follows the list):
- `FloatX(1.0).exponentBitPattern == (1 << (FloatX.exponentBitCount - 1)) - 1`
- `FloatX.greatestFiniteMagnitude > (1 << FloatX.significandBitCount)`
- `FloatX.significandBitCount > FloatX.exponentBitCount`
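Here's a minimal sketch that checks these three properties for Float and Double (`checkInvariants` is just a name I made up; the stdlib APIs used are the real `BinaryFloatingPoint` ones):

```swift
// Check the three invariants above for a standard binary floating-point type.
func checkInvariants<F: BinaryFloatingPoint>(_ type: F.Type) {
    // IEEE 754 exponent bias: (1 << (exponentBitCount - 1)) - 1.
    let bias = (1 << (F.exponentBitCount - 1)) - 1
    // 1.0 has an unbiased exponent of 0, so its raw pattern equals the bias.
    assert(F(1.0).exponentBitPattern == F.RawExponent(bias))
    // The largest finite value exceeds 2^significandBitCount.
    assert(F.greatestFiniteMagnitude > F(1 << F.significandBitCount))
    // More significand bits than exponent bits.
    assert(F.significandBitCount > F.exponentBitCount)
    print("\(F.self): OK (exponent bias = \(bias))")
}

checkInvariants(Float.self)
checkInvariants(Double.self)
```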
So there could be more hidden requirements lurking in implementations of the FloatingPoint protocol.
Making the protocol implementation support all possible combinations of exponentBitCount, significandBitCount and _exponentBias would likely be unwise, as only the ones supported by the hardware are really useful in real life (IMHO). Which is why I did not blame the random() implementation.