Hi!
Is there any particular reason why `.random(in:using:)` for floating-point types uses the lowest rather than the highest bits of the generator's output?
AFAIK, for most pseudorandom generators the lower bits are of lower quality (less "random") than the higher bits, so the higher bits should generally be preferred.
I'm no expert, so please correct me if I'm wrong.
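For a concrete (if extreme) illustration of what I mean, here is a rough sketch using a toy generator, a classic power-of-two-modulus LCG (the WeakLowBitsLCG type is just something I made up for this post, not anything from the stdlib), whose lowest bit simply alternates with period 2 while the high bits behave much better:

```swift
// A minimal toy, not stdlib code: an LCG modulo 2^64 (constants from Knuth's
// MMIX), which is known to have weak low-order bits.
struct WeakLowBitsLCG: RandomNumberGenerator {
    var state: UInt64 = 1
    mutating func next() -> UInt64 {
        state = state &* 6364136223846793005 &+ 1442695040888963407
        return state
    }
}

var g = WeakLowBitsLCG()
var lowBits: [UInt64] = []
for _ in 0..<8 { lowBits.append(g.next() & 1) }
// The lowest bit alternates deterministically: [0, 1, 0, 1, 0, 1, 0, 1]
print(lowBits)
```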
But if that assumption is valid, I wonder whether, instead of masking out and keeping the lowest bits, as the current implementation does:
```swift
let significandCount = Self.significandBitCount + 1
let maxSignificand: Self.RawSignificand = 1 << significandCount
// Rather than use .next(upperBound:), which has to work with arbitrary
// upper bounds, and therefore does extra work to avoid bias, we can take
// a shortcut because we know that maxSignificand is a power of two.
rand = generator.next() & (maxSignificand - 1)
```
it would be better to shift down and use the highest bits, like this:
```swift
let shifts = Self.RawSignificand.bitWidth - (Self.significandBitCount + 1)
rand = generator.next() >> shifts
```
?
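To make the comparison concrete, here is a small self-contained sketch of the two strategies side by side (the function names and the hard-coding to Double/UInt64 are mine, purely for illustration); both extract significandBitCount + 1 = 53 bits, just from opposite ends of the 64-bit word:

```swift
// A rough sketch, not the actual stdlib code, hard-coded to Double for clarity.
func lowBitsSignificand<G: RandomNumberGenerator>(using generator: inout G) -> UInt64 {
    let significandCount = Double.significandBitCount + 1            // 53
    let maxSignificand: UInt64 = 1 << significandCount
    // Current approach: mask, keeping the 53 lowest bits.
    return generator.next() & (maxSignificand - 1)
}

func highBitsSignificand<G: RandomNumberGenerator>(using generator: inout G) -> UInt64 {
    let shifts = UInt64.bitWidth - (Double.significandBitCount + 1)  // 64 - 53 = 11
    // Proposed approach: shift, keeping the 53 highest bits.
    return generator.next() >> shifts
}

var rng = SystemRandomNumberGenerator()
print(lowBitsSignificand(using: &rng))   // a value in 0 ..< 1 << 53
print(highBitsSignificand(using: &rng))  // also a value in 0 ..< 1 << 53
```

Either way it's a single mask or shift, so the cost should be the same; the only difference is which end of the word the significand comes from.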