Steve has shown the kind of harm this would create:
So we'd like zero not to be a special case: we want a continuous function.
But a function that returns a boolean cannot be continuous, and floating-point numbers are not continuous either!
Let's try harder, and consider two real-valued functions: the bounds of the interval of numbers considered approximately equal to a given number:
For all (x, y), y.isAlmostEqual(to:x) iff y is inside [minBound(x), maxBound(x)].
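That definition reduces approximate equality to a simple interval-membership test. Here is a minimal Swift sketch (not the proposal's API), with the bound functions passed in as parameters since we haven't defined them yet:

```swift
// A sketch: y is "almost equal" to x iff y falls inside
// the interval [minBound(x), maxBound(x)].
func isAlmostEqual(_ y: Double, to x: Double,
                   minBound: (Double) -> Double,
                   maxBound: (Double) -> Double) -> Bool {
    (minBound(x)...maxBound(x)).contains(y)
}
```

The whole design question is then: what should minBound and maxBound look like?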
To preserve the scale-invariance of floating-point numbers, minBound and maxBound are linear... until we enter the zero region.
But when do they change behavior? The answer is in the proposal:
the common numerical analysis wisdom that if you don't know anything about a computation, you should assume that roughly half the bits may have been lost to rounding.
So let's have minBound and maxBound become non-linear in the neighborhood of a well-picked small number Z, which sits roughly halfway between 1 and zero in terms of bits.
We'd have minBound and maxBound behave like this:
- minBound(x) and maxBound(x) are linear for all values far enough from zero (relative tolerance)
- minBound(0) = -Z, maxBound(0) = Z (absolute tolerance).
One possible definition of maxBound: constant around zero, then linear:
| x
| x
|xxx
|
|
•-----
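In code, that picture could look like this, for x ≥ 0 (the values of `t` and `z` are illustrative choices of mine, not the proposal's defaults):

```swift
let t = 0.0001   // relative tolerance: slope of the linear part (illustrative)
let z = 1e-8     // absolute tolerance around zero, the Z above (illustrative)

// Constant at z around zero, then linear, for x >= 0
// (negative x would be handled symmetrically).
func maxBound(_ x: Double) -> Double {
    max(z, x * (1 + t))
}
```

The `max` neatly captures the plot: near zero, `x * (1 + t)` is below `z`, so the constant wins; further out, the linear part takes over.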
minBound is trickier. It must be linear for nearly all values, yet reach -Z at 0. So let's build it from two linear regions:
| xx
| xx
| xx
| x
| x
•--x--------
| x
|x
x
|
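A sketch of that shape, again for x ≥ 0 and with illustrative `t` and `z`. The choice of where the two linear pieces meet is mine: I join them at the point `x0` where the far piece reaches Z, so the function stays continuous:

```swift
let t = 0.0001   // relative tolerance (illustrative)
let z = 1e-8     // absolute bound around zero, the Z above (illustrative)

// Two linear regions, for x >= 0 (negative x is symmetric):
// far from zero, the usual x * (1 - t); near zero, a steeper
// line through (0, -z) that meets the far piece at x0.
func minBound(_ x: Double) -> Double {
    let x0 = z / (1 - t)          // where x * (1 - t) == z
    if x >= x0 {
        return x * (1 - t)
    }
    // line through (0, -z) and (x0, z)
    return (2 * z / x0) * x - z
}
```

At `x0` both pieces evaluate to `z`, and at 0 the near piece gives exactly `-z`, as required.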
This looks odd, doesn't it? But it has some desirable qualities :-)