Why no fraction in FloatingPoint?

It's not an uncommon situation to have a floating point value x and want to get its fractional part, a value in the half-open range [0, 1).

The solution will often be:

let fraction = x - x.rounded(.down)

But as I just learned (here), there is a serious problem with doing it this way, and it is the one mentioned in the documentation of the simd family of fract functions:

/// `x - floor(x)`, clamped to lie in the range [0,1).  Without this clamp step,
/// the result would be 1.0 when `x` is a very small negative number, which may
/// result in out-of-bounds table accesses in common usage.
public func fract(_ x: float2) -> float2
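
To see the danger concretely, here is a quick check (-1e-20 is just an arbitrary tiny negative value):

let x = -1e-20
print(x - x.rounded(.down)) // 1.0, which is outside [0, 1)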

So given that it's quite common to want the fraction in [0, 1) of a floating point number, and that the straightforward/naive solution has this "hidden danger", why isn't there a proper fractional property in the FloatingPoint protocol?

If we were going to expose a fractional part, I would define it to be x - x.rounded(.towardZero) for finite x and 0 for inf or NaN to avoid these issues. Conveniently, this is exactly the modf function defined in the system C module.

(Note that the modf binding isn't great; it should be mapped into Swift in a Swiftier way--the name is pretty opaque and the result should have named components, etc, but the functionality is present).
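
For illustration, a minimal sketch of what a Swiftier binding might look like (the name integralAndFraction is hypothetical, not an actual stdlib API):

import Foundation

extension Double {
    // Hypothetical wrapper around C's modf; splits self into an
    // integral part and a fraction, both with the same sign as self.
    var integralAndFraction: (integral: Double, fraction: Double) {
        var integral = 0.0
        let fraction = modf(self, &integral)
        return (integral, fraction)
    }
}

print((-2.75).integralAndFraction) // (integral: -2.0, fraction: -0.75)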


Ok, I agree. What I'm after is something that should have a different name and I guess not be in the standard library after all.

But in my experience, I've almost never needed the variant that is "mirrored around zero", i.e. x - x.rounded(.towardZero). It's the same thing with integers: I always find myself needing true modulo rather than remainder (%, (f)mod(f)). For ranges [0, N) where N is a power of two, we can always do x & mask, which works (the way I need) for both signed and unsigned integers, but for the general case we have to work around the "mirrored around zero" aspect of remainder, %, mod, and, I guess, of a future "fraction" if it were implemented along those lines (see the sketch below).
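
For the integer case, a minimal sketch of a floored ("true") modulo, assuming a positive modulus (the function name mod is just for illustration):

// Unlike %, the result is always in [0, n), assuming n > 0.
func mod<T: BinaryInteger>(_ x: T, _ n: T) -> T {
    let r = x % n
    return r < 0 ? r + n : r
}

print(-3 % 5)     // -3 ("mirrored around zero")
print(mod(-3, 5)) // 2  (true modulo)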

I may be totally wrong, but my guess is that a lot of the time when people use remainder, %, or (f)mod(f), they are just fine as long as they work only with non-negative values, but their approach wouldn't actually work if their use case were supposed to handle negative values too. It's similar to people instinctively/naively thinking closed ranges are "the right/normal range" and half-open ranges are "special", when in practice half-open ranges turn out to make more sense most of the time (because successive ranges don't overlap, etc.). Sorry for not being able to express this more clearly.

I understand exactly what you're getting at, but the function you're describing for floating-point can't really exist (because the result isn't representable for tiny negative numbers). For some use cases (graphics, mainly), that's fine--that's why it exists in simd. But for the same reason, it's something of an attractive nuisance in more general contexts, which is why I would hesitate to add it to the stdlib.

It's a lot like "round to n digits after the decimal point" in this regard; a function that people often think they want in floating-point, but which cannot actually exist in a generally useful fashion.
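
For instance, the classic illustration of why "round to 2 digits" can't be exact in binary floating point:

// 2.675 is actually stored as 2.67499999999999982..., so the naive
// scale-round-unscale approach gives 2.67, not the expected 2.68.
print((2.675 * 100).rounded() / 100) // 2.67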

Happily, it's also pretty easy to implement the approximate one, even fully generically, if needed:

func mod1<T>(_ x: T) -> T where T: FloatingPoint {
    if x.isFinite {
        let provisional = x - x.rounded(.down)
        // Clamp to the largest value below 1, so that tiny negative
        // inputs can't round up to exactly 1.0.
        return min(provisional, 1 - .ulpOfOne/2)
    }
    return 0
}
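
For example, with the problematic input from earlier in the thread:

print(mod1(-1e-20)) // 0.9999999999999999, safely below 1.0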

Long-term I think it makes good sense to provide this somewhere, but I don't know where that somewhere is now.


Thanks.
Perhaps it could be a property named something like .wrappedInUnitRange, i.e. the value restricted to [0, 1) by periodic boundary conditions.

I'd like to better understand what exactly you mean by this: "the function you're describing for floating-point can't really exist (because the result isn't representable for tiny negative numbers)".

Assuming that the function/property is implemented like your example, isn't the problem just the usual one of not every real number being representable in a binary floating point system?

I mean, couldn't something very similar to the above quotation be said about, e.g., the existing .remainder(dividingBy:):

let x = 3.0
let remainder = x.remainder(dividingBy: 1.0 / 3.0)
print(remainder == 0.0) // false
print(remainder > 0.0) // true

Or about any binary floating point arithmetic operation (at least when performed using decimal literals):

print(0.1 + 0.2 == 0.3) // false

?

There's some discussion about this here, but I'm not sure if it answers your question, exactly. The part about remainder moving the discontinuity away from 0 might be relevant.

remainder and truncatingRemainder have the desirable property that the result is always exact; in your example the rounding error is only due to the imprecision of the input 1/3.0--you have computed the exact remainder of 3.0 divided by 0.333333333333333314829616256247390992939472198486328125.
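
One way to see this is to print the divisor with enough digits to show its exact stored value (String(format:) needs Foundation):

import Foundation

print(1.0 / 3.0) // 0.3333333333333333 (shortest round-trip form)
print(String(format: "%.54f", 1.0 / 3.0))
// 0.333333333333333314829616256247390992939472198486328125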

Why does this matter when most floating-point operations round? The difficulty with remainders specifically is that people often want to use them to reduce a function argument into some fundamental range--e.g. we might use your hypothetical .wrappedInUnitRange to implement argument reduction for a hypothetical sin(τTimes:) function, but if we implement it this way, there would be catastrophic loss of accuracy for small negative numbers: sin(τTimes: -.ulpOfOne/4) would be 0 instead of the correct value -τ*.ulpOfOne/4. Using a remainder operation that rounds either toward zero or to nearest (the two modes provided in the stdlib today) fixes this issue, because both are always exact.
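
To make the contrast concrete, compare the two reductions for a tiny negative input (mod1 being the clamped sketch from earlier in the thread):

let tiny = -Double.ulpOfOne/4
print(mod1(tiny))                    // 0.9999999999999999 -- sign and magnitude lost
print(tiny.remainder(dividingBy: 1)) // -5.551115123125783e-17 -- the input, preserved exactly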

I'm not actually opposed to providing this operation, I just think that one needs to be careful about how it gets exposed.
