I am a complete novice just learning Swift basics. I can’t figure out why this happens when I convert a Double to an integer:
var v = Double(5.8)
var vv = v // 5.8
vv.round(.down) // 5.0
let vvv = v - vv // .8
let vvvv = vvv * 60.0 // 48.0
let v5 = Int(vvvv) // 47
let v6 = Int(round(vvvv)) // 48
When I create v5, the Swift playground takes 48.0 and returns 47.
I’m not looking for ways to make this code better, just to understand why v5 does not return 48.
Appreciate any help.
... where "down" means "towards zero", per the "Basics" section of The Swift Programming Language documentation:
Floating-point values are always truncated
when used to initialize a new integer value in this way.
This means that 4.75 becomes 4, and -3.9 becomes -3.
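A quick sketch you can paste into a playground to see what is actually happening (I've kept the original names; the exact digits shown may vary, but the comparisons won't):

let v = 5.8
var vv = v
vv.round(.down)                // 5.0
let vvv = v - vv               // not exactly 0.8; a hair below it
let vvvv = vvv * 60.0          // not exactly 48.0; a hair below it
print(vvvv == 48.0)            // false
print(vvvv < 48.0)             // true
print(Int(vvvv))               // 47: Int(_:) truncates toward zero
print(Int(vvvv.rounded()))     // 48: round first, then convert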
Yes! :) But isn't that really about (limitations of) float literals defaulting to Double, not rounding to Int?
$ echo 'print(47.999999999999996)' | swift -
47.99999999999999
$ echo 'print(47.999999999999997)' | swift -
48.0
Though Swift float literals are said to be of "infinite precision" pending type-assignment, the type (Double) does have to be selected for the value to be used, so the value as such ends up with Double precision.
Unless of course it's a bug, or a consequence of the implementation that isn't folded back into the docs, or the implementation tracking floating-point rules like those @Pippen quoted...
they have unspecified precision, which is not the same thing as infinite precision. as far as i’m aware, they are currently capped to a max precision of Float80.
Ah, but this isn't relevant to the example given above: Because Int is not a floating-point type, Int(47.999999999999997) is a conversion operation from the default floating-point type—which is Double, not some unspecified type—to Int. This is not a bug or an implementation defect.
To be clear, you can contrive certain expressions where the current implementation defects of floating-point literals leak through—and this is tracked as one or more bugs—but this example is not one of them.
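Concretely, I believe this is all that happens with the two literals printed above (a sketch, not the compiler's exact pipeline):

let a = 47.999999999999997     // nearest Double is exactly 48.0
let b = 47.999999999999996     // nearest Double is just below 48.0
print(a == 48.0)               // true
print(b == 48.0)               // false
print(Int(a))                  // 48
print(Int(b))                  // 47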
It seems the literal will be preserved until the point where it has to be given the default type for the target architecture [corrected below: the default is Double]: Double to be sure for x86, but perhaps f32 in wasm or some kind of float16 in an alternative reality.
I'm honestly not trying to be pedantic, but separating the notion of literals from their typed values might help new developers sort through questions like this.
One could imagine a hypothetical initializer that took the literal's full text, which would be callable with a string without quotes:
Int(alternativeRealityFloatLiteral: 47.99999999999999999999999...99999)
// 47 still
This would give the "infinite precision" semantics...
However, we don't have this in Swift, and I don't think there's any pressure to introduce it. The quoted paragraph of the documentation could use better language to describe the current behaviour.
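For the record, the shape I had in mind would be something like the following, which is purely hypothetical and not part of Swift (the protocol and initializer names here are made up):

// Hypothetical only: the conforming type would receive the literal's source text
// and could parse it to whatever precision it wants, instead of an already-rounded Double.
protocol ExpressibleByExactFloatLiteral {
    init(exactFloatLiteral sourceText: StaticString)
}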
this would be even better expressed with an expression macro; that’s really the only way to get “infinite” precision float literals, since they will get clipped to 80 bits (or fewer) at run time.
It could be useful if the fidelity were checked at compile time, with a diagnostic or warning emitted whenever the literal value is truncated, i.e. cannot be represented exactly as written.