Why no fraction in FloatingPoint?

Ok, I agree. What I'm after is something that should have a different name and I guess not be in the standard library after all.

But in my experience, I've almost never needed the variant that is "mirrored around zero", i.e. x - x.rounded(.towardZero). It's the same thing with integers: I always find myself needing true modulo rather than remainder (%, (f)mod(f)). For ranges [0, N) where N is a power of two, we can always do x & mask, which will work (the way I need) for both signed and unsigned integers, but for the general case we have to work around the "mirrored around zero" aspect of remainder, %, mod, and, I guess, of a future "fraction" if it were implemented. A sketch of what I mean is below.
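Here is a minimal sketch of the kind of helper I keep writing instead, for both integers and floating point. The name positiveMod is just illustrative (it's not an existing stdlib API), and the mask trick only applies when N is a power of two:

```swift
// Illustrative "true modulo" that always lands in [0, n),
// as opposed to %, fmod and truncatingRemainder, which are
// mirrored around zero.

func positiveMod<T: BinaryInteger>(_ x: T, _ n: T) -> T {
    // % is the remainder, so (-3) % 5 == -3; shift negatives into [0, n).
    let r = x % n
    return r < 0 ? r + n : r
}

func positiveMod<T: FloatingPoint>(_ x: T, _ n: T) -> T {
    // truncatingRemainder is mirrored around zero, like fmod.
    let r = x.truncatingRemainder(dividingBy: n)
    return r < 0 ? r + n : r
}

// For N a power of two, masking already gives the [0, N) result
// for both signed and unsigned integers:
let n = 8
print((-3) & (n - 1))          // 5
print(positiveMod(-3, 8))      // 5
print((-3) % 8)                // -3 (the mirrored-around-zero remainder)
print(positiveMod(-0.25, 1.0)) // 0.75
```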

I may be totally wrong, but my guess is that a lot of the time, when people use remainder (%, (f)mod(f)), they will be just fine as long as they only work with non-negative values, while their approach wouldn't actually work if their use case also had to handle negative values (see the example below). It's similar to how people instinctively/naively assume closed ranges are the "right/normal" ranges and half-open ranges are "special", when in practice half-open ranges turn out to make more sense most of the time (because successive ranges don't overlap, etc.). Sorry for not being able to express this more clearly.
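As a hedged illustration of that pitfall (my own example, using the hypothetical positiveMod from the sketch above): code that wraps an index with % looks fine with non-negative inputs and quietly breaks once values can go negative.

```swift
let days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

func day(offsetBy offset: Int, from index: Int) -> String {
    // Fine while (index + offset) is non-negative, but % can return a
    // negative value, which would crash here with an out-of-bounds index.
    days[(index + offset) % 7]
}

print(day(offsetBy: 3, from: 5))    // "Tue" – fine
// day(offsetBy: -6, from: 5)       // (5 - 6) % 7 == -1 → crash

// With a floored/"true" modulo, both directions work:
// days[positiveMod(index + offset, 7)]
```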