I noticed that the following program prints a different result depending on whether I use the default toolchain of Xcode 9.3.1 or a recent development snapshot:

let x = 25.000099999999996
print(x)
With the default toolchain it prints:

25.0001
But with development snapshot 2018-05-22 it prints:

25.000099999999996
I guess this might be because something has been fixed, but I didn't find any information about it.
The precision hasn't changed. Float printing now uses a better algorithm (Grisu2) that minimizes the number of digits needed to print an accurate decimal approximation of the floating-point value.
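The difference is easy to check directly. Here is a minimal sketch (reusing the value from the example above) of what "accurate" means here: on a toolchain with the new behavior, the printed string is the shortest decimal that parses back to exactly the same Double, whereas the old output "25.0001" does not round-trip:

```swift
let x = 25.000099999999996

// With the new behavior, String(x) is the shortest decimal
// string that parses back to the same Double value.
let s = String(x)
assert(Double(s)! == x)

// The old output "25.0001" is shorter but inaccurate: it
// parses to a different, neighboring Double.
assert(Double("25.0001")! != x)
```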
I'm probably missing something obvious (perhaps the correct meaning of the word "precision"?), but could you please clarify what you mean when you say that the precision hasn't changed, when the Double value with the following bit pattern:
changed from being printed like this:
Also, I noticed that:
Double("25.0001") == 25.000099999999996 // false
Double("25.000099999999996") == 25.000099999999996 // true
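One way to convince yourself that these are two distinct Double values is to compare their raw bit patterns (a sketch; printing the patterns in hex is my own addition, not something from the thread):

```swift
let a = Double("25.0001")!   // nearest Double to the decimal 25.0001
let b = 25.000099999999996   // the value from the example

// Distinct IEEE 754 binary64 values have distinct bit patterns.
assert(a != b)
assert(a.bitPattern != b.bitPattern)

// Print the raw 64-bit patterns in hex for comparison.
print(String(a.bitPattern, radix: 16))
print(String(b.bitPattern, radix: 16))
```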
And I'm guessing that by this you mean that the following is true for all finite x:

Double(String(x)) == x
Ah, I mentally reversed your examples. It looks like the previous printing behavior was in fact inaccurate. The new algorithm should always print a minimal-length decimal value that parses back into the original floating-point value, so Double(String(x)) always produces x for any finite x.
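The round-trip claim is easy to probe empirically. A quick sketch (assuming a toolchain with the new printing behavior; generating Doubles from random bit patterns and the iteration count are my own choices):

```swift
// Fuzz the round-trip property Double(String(x)) == x
// over randomly generated finite Doubles.
for _ in 0..<100_000 {
    let bits = UInt64.random(in: .min ... .max)
    let x = Double(bitPattern: bits)
    guard x.isFinite else { continue }  // skip infinities and NaNs

    let roundTripped = Double(String(x))!
    // Compare bit patterns so that -0.0 and 0.0 are distinguished.
    assert(roundTripped.bitPattern == x.bitPattern)
}
print("round-trip holds for all sampled finite values")
```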