How exactly did you get e.g. Float(0.001)
to print as 0.00100000005
? The only way I can see it happen is in the resulting values reported by the REPL:
$ swift
Welcome to Apple Swift version 4.2 (swiftlang-1000.0.36 clang-1000.0.4). Type :help for assistance.
1> let x: Float = 0.001
x: Float = 0.00100000005
2> print(x.debugDescription)
0.001
3> dump(x)
- 0.001
$R0: Float = 0.00100000005
But note that its debugDescription (and dump) is still "0.001".
And this command line app:
let x: Float = 0.001
print(x)
print(x.debugDescription)
dump(x)
print(String(describing: x))
will print:
0.001
0.001
- 0.001
0.001
EDIT: Aha, it turns out to be a matter of compiler versions. The above uses the default toolchain of Xcode 10 beta 6, that is:
$ swiftc --version
Apple Swift version 4.2 (swiftlang-1000.0.36 clang-1000.10.44)
Target: x86_64-apple-darwin17.7.0
But if I instead compile the above program with:
$ swiftc --version
Apple Swift version 4.1.2 (swiftlang-902.0.54 clang-902.0.39.2)
Target: x86_64-apple-darwin17.7.0
it prints:
0.001
0.00100000005
- 0.00100000005
0.001
So, Swift 4.2 changed the way it prints floating-point values. I now remember noticing a similar change before and mentioning it in another thread, where the answer was:
The REPL seems to be using the old algorithm still though, printing a non-minimal-length decimal value instead of the minimal-length decimal value that parses back into the original floating-point value.
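To make the difference concrete: both outputs describe the exact same stored bits; they just derive a different number of decimal digits from them. The nearest 32-bit float to 0.001 is not exactly 0.001, and printing 9 significant digits of it yields the old-style 0.00100000005, while "0.001" is the shortest decimal that parses back to the same Float. A small Python sketch (standard library only) illustrating this; the %.9g formatting here is my stand-in for the old fixed-precision style of output, not Swift's actual implementation:

```python
import struct

# Round 0.001 to the nearest 32-bit float -- the value a Swift Float
# actually stores -- by packing and unpacking it as a float32.
f32 = struct.unpack('f', struct.pack('f', 0.001))[0]

# The stored value is not exactly 0.001:
print(repr(f32))     # 0.0010000000474974513

# Printing 9 significant digits of it reproduces the old-style output:
print('%.9g' % f32)  # 0.00100000005

# "0.001" is a minimal-length decimal that round-trips: parsing it and
# rounding to float32 gives back the identical bit pattern.
assert struct.pack('f', float('0.001')) == struct.pack('f', f32)
print('0.001 round-trips to the same Float bits')
```

So the REPL's 0.00100000005 and the 4.2 compiler's 0.001 are two renderings of one value; the newer algorithm just stops as soon as the decimal it has emitted is enough to recover the original Float.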