Excellent thoughts, Jens. I’m glad you brought this up. I’ve actually been working on an implementation, and the first draft is done and working. Once I finish adding documentation, I’m going to post it and ask for feedback (especially from Steve).

And yes, I had many of the same thoughts as you regarding the implementation details. I intend to start an Evolution Discussion thread to ask for input.

My first draft only handles `Range`, not `ClosedRange`, though as you note the latter could be built from the former if we choose compatible semantics. It currently behaves as, “Pick a real number in the range and round down to the next representable value.” And it does indeed work with an upper bound of `infinity` (treating it as one `ulp` beyond `greatestFiniteMagnitude`).

As for small ranges, the test case that really made me think was a range just 2 ulps wide, entirely within a single binade, such as `1.5.nextDown ..< 1.5.nextUp`. The current documentation and implementation in the Standard Library make the middle value (1.5) occur twice as often as the lower bound, i.e. probabilities of [1/3, 2/3]. (And for a closed range, the middle value also occurs twice as often as the upper bound, i.e. [1/4, 1/2, 1/4].)
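For concreteness, here is a quick self-contained check (a sketch, not the draft implementation) that this range really does contain exactly two representable values, so an unweighted uniform pick over them would give [1/2, 1/2] rather than the current [1/3, 2/3]:

```swift
let mid = 1.5
let range = mid.nextDown ..< mid.nextUp

// 1.5 sits in the interior of the [1, 2) binade, so the spacing is uniform
// here: the range is two ulps wide but contains only two representable
// values, since the half-open range excludes its upper bound.
assert(range.contains(mid.nextDown))
assert(range.contains(mid))
assert(!range.contains(mid.nextUp))
assert(mid.nextUp - mid.nextDown == 2 * mid.ulp)
```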

I think that is unintuitive and generally not what people want or expect. There are 3 “simple” alternatives we could choose:

- p(x) is proportional to `x.ulp`
- p(x) is proportional to `x.nextUp - x`
- p(x) is proportional to `(x.nextUp - x.nextDown) / 2`

For the vast majority of numbers, these are all equivalent. The differences only show up at the boundaries of binades, specifically for non-zero values with 0 significand.
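As a sanity check of that equivalence, all three formulas produce the same weight for an interior value like 1.5 (a quick sketch, not part of the draft):

```swift
let x = 1.5  // interior of the [1, 2) binade, not a boundary
let w1 = x.ulp
let w2 = x.nextUp - x
let w3 = (x.nextUp - x.nextDown) / 2

// For a Double, all three are exactly 2^-52 here.
assert(w1 == w2 && w2 == w3)
```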

Now, #1 has the advantage that all values with the same raw exponent have the same probability of occurring. It seems conceptually the simplest to explain.

#2 is the “obvious” answer for half-open ranges, namely choosing a real number at random and then rounding down to a representable value. However, it makes negative numbers with 0 significand half as likely as their positive counterparts.

And #3 embodies the spirit of “round to nearest”, while avoiding edge effects by treating each bound as if its whole “basin of attraction” were included. However, it makes all non-zero numbers with 0 significand 75% as likely as the other numbers with the same raw exponent.
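Those boundary effects can be checked directly. At x = 1.0 (a binade boundary) the three weights diverge exactly as described above, and at x = -1.0 scheme #2 gives half the weight of +1.0 (again just a sketch, with a throwaway helper):

```swift
// Compute the three candidate weights for a value.
func weights(_ x: Double) -> (Double, Double, Double) {
    (x.ulp, x.nextUp - x, (x.nextUp - x.nextDown) / 2)
}

let (a1, a2, a3) = weights(1.0)
assert(a1 == a2)          // #1 and #2 agree at +1.0
assert(a3 == 0.75 * a1)   // #3 gives 75% of the usual weight here

let (b1, b2, _) = weights(-1.0)
assert(b1 == a1)          // #1: same raw exponent, same weight
assert(b2 == a2 / 2)      // #2: the negative boundary gets half the weight
```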

In every case, we could treat `infinity` as one `ulp` beyond `greatestFiniteMagnitude`, and also never produce negative zero.
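Those two rules could look something like this (hypothetical helper names for illustration, not the draft itself):

```swift
// Hypothetical: under scheme #2, weight each value by the width of the real
// interval that rounds down to it, treating an upper bound of +infinity as
// one ulp beyond greatestFiniteMagnitude.
func intervalWidth(above x: Double) -> Double {
    x == .greatestFiniteMagnitude ? x.ulp : x.nextUp - x
}

// Hypothetical: canonicalize zero so -0.0 is never returned to the caller.
func canonicalizedZero(_ x: Double) -> Double {
    x == 0 ? 0 : x
}

assert(intervalWidth(above: .greatestFiniteMagnitude)
        == Double.greatestFiniteMagnitude.ulp)
assert(canonicalizedZero(-0.0).sign == .plus)
```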