Double to Integer

I am a complete novice just learning Swift basics. I can't figure out why this happens when I convert a Double to an integer:

var v = Double(5.8)
var vv = v // 5.8
vv.round(.down) // 5.0
let vvv = v - vv // .8
let vvvv = vvv * 60.0 // 48.0
let v5 = Int(vvvv) // 47
let v6 = Int(round(vvvv)) // 48

When I create v5, the Swift playground takes 48.0 and returns 47.
I'm not looking for ways to make this code better; I just want to know why v5 doesn't return 48.
Appreciate any help.

vvvv is actually not 48:

print(vvvv)

// 47.999999999999986

so when converting to Int it will "round down" to 47. You are running into the quirks of floating-point numbers:

... where "down" means "towards zero", per "The Basics" section of The Swift Programming Language documentation:

Floating-point values are always truncated when used to initialize a new integer value in this way. This means that 4.75 becomes 4, and -3.9 becomes -3.
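
To see this directly, you can print the intermediate values with more digits than print's default shortest round-trip form. A quick sketch using Foundation's String(format:); the digit strings are what I'd expect on a typical 64-bit platform:

import Foundation

let v = 5.8                                        // nearest Double to 5.8, not 5.8 exactly
print(String(format: "%.17g", v))                  // 5.7999999999999998
print(String(format: "%.17g", (v - 5.0) * 60.0))   // 47.999999999999986

// And Int(_:) truncates toward zero, per the quoted documentation:
print(Int(4.75))   // 4
print(Int(-3.9))   // -3, toward zero, not -4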

Thanks, I now understand.

Welcome to floating point world.

print(Int(47.999999999999996))  // 47
print(Int(47.999999999999997))  // 48
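
The two literals land on different nearest Doubles, which you can check with the standard nextUp API:

let a = 47.999999999999996   // nearest Double is just below 48
let b = 47.999999999999997   // nearest Double is exactly 48.0
print(a == 48.0)             // false
print(b == 48.0)             // true
print(a.nextUp == 48.0)      // true: a is the Double immediately below 48.0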
3 Likes

Thanks, I now understand.

Yes! :) But isn't that really about (limitations of) float literals defaulting to Double, not rounding to Int?

$ echo 'print(47.999999999999996)' | swift -
47.99999999999999
$ echo 'print(47.999999999999997)' | swift -
48.0

Though Swift float literals are said to have "infinite precision" pending type assignment, a type (here Double) does have to be selected before the value can be used, so the value as such ends up with Double precision.

https://docs.swift.org/swift-book/documentation/the-swift-programming-language/lexicalstructure#Floating-Point-Literals

Unless, of course, it's a bug, a consequence of the implementation that isn't folded back into the docs, or the implementation tracking floating-point rules like those @Pippen quoted...

1 Like

They have unspecified precision, which is not the same thing as infinite precision. As far as I'm aware, they are currently capped to a maximum precision of Float80.
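
For what it's worth, on x86 (where Float80 is available) you can observe that cap by giving the same literal a wider type. A sketch, assuming an Intel machine:

let asDouble: Double   = 47.999999999999997
let asFloat80: Float80 = 47.999999999999997  // Float80 is x86-only
print(Int(asDouble))    // 48: the nearest Double is exactly 48.0
print(Int(asFloat80))   // 47: Float80 keeps enough precision to stay below 48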

1 Like

Ah, but this isn't relevant to the example given above: because Int is not a floating-point type, Int(47.999999999999997) is a conversion operation from the default floating-point type (which is Double, not some unspecified type) to Int. This is not a bug or an implementation defect.

To be clear, you can contrive certain expressions where the current implementation defects of floating-point literals leak through (this is tracked as one or more bugs), but this example is not one of them.

Thank you. That's what I take the discussion of literals to mean, though not quite what it says (emphasis mine):

Instead, a literal is parsed as having infinite precision and Swift’s type inference attempts to infer a type for the literal

https://docs.swift.org/swift-book/documentation/the-swift-programming-language/lexicalstructure#Literals

It seems the literal will be preserved until the point where it has to be given the default type for the target architecture [corrected below: it's always Double]: Double to be sure for x86, but perhaps f32 in wasm or some kind of float16 in an alternative reality.

I'm honestly not trying to be pedantic, but separating the notion of literals from their typed values might help new developers sort through questions like this.

1 Like

Note that the default floating-point type is not architecture-dependent: it is Double.
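
A one-liner confirms the inferred type:

let x = 47.999999999999997
print(type(of: x))   // Double, on every platform
print(x)             // 48.0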

2 Likes

In principle you could have an initialiser that takes a string instead of a float literal:

Int(floatString: "47.99999999999999999999999...99999")
// 47

From here it's just a short step to some hypothetical:

init(alternativeRealityFloatLiteral: StringWithoutQuotes)

which would be callable with a string without quotes:

Int(alternativeRealityFloatLiteral: 47.99999999999999999999999...99999)
// 47 still

This would give the "infinite precision" semantics...

However, we don't have this in Swift, and I don't think there's any pressure to introduce it. The quoted paragraph of the documentation could use better language to describe the current behaviour.
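
For completeness, here is a toy sketch of that hypothetical floatString initialiser, handling only plain decimal strings and truncating toward zero (the name and behaviour are my own invention, matching the example above):

extension Int {
    /// Toy version of the hypothetical initialiser: truncates a plain
    /// decimal string toward zero without ever building a float.
    init?(floatString: String) {
        // Keep only the part before the decimal point.
        let parts = floatString.split(separator: ".", maxSplits: 1,
                                      omittingEmptySubsequences: false)
        guard let value = Int(parts[0]) else { return nil }
        self = value
    }
}

print(Int(floatString: "47.99999999999999999999999999999")!)  // 47
print(Int(floatString: "-3.9")!)                              // -3, toward zero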

This would be even better expressed with an expression macro; that's really the only way to get "infinite"-precision float literals, since they will get clipped to 80 bits (or fewer) at run time.

It could be useful if the fidelity were checked at compile time, with a diagnostic or warning emitted when the literal value is truncated or cannot be represented exactly in the inferred type.
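
Swift already does something like this for integer literals, though nothing comparable exists for inexact float literals today. An illustrative comparison:

// Out-of-range integer literals are rejected at compile time:
// let a: Int8 = 300   // error: integer literal '300' overflows when stored into 'Int8'

// But a float literal that isn't exactly representable compiles silently:
let b: Double = 47.999999999999997
print(b)   // 48.0, with no hint that precision was lost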

2 Likes