# Why has the precision of printed Double values changed in a recent snapshot?

I noticed that the following program prints a different result depending on whether I use the default toolchain of Xcode 9.3.1 or a recent development snapshot:

``````
let x = 25.000099999999996
print(x)
``````

With the default toolchain it prints:

``````
25.0001
``````

But with development snapshot 2018-05-22 it prints:

``````
25.000099999999996
``````

I guess this might be because something has been fixed, but I didn't find any information about it.

See the changelog for 4.2, this bug report, this pull request, etc.


The precision hasn't changed. Float printing now uses a better algorithm (Grisu2) that minimizes the number of digits needed to print an accurate decimal approximation of the floating-point value.
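For example, the shortest-digits behavior can be observed directly; a minimal sketch using the value from the original post (note that `25.0001` and `25.000099999999996` are in fact two distinct `Double` values):

```swift
// Two decimal literals that look "close" parse to different Doubles.
let a = 25.000099999999996
let b = 25.0001
print(a == b)  // false: they are nearby but distinct Double values

// The new algorithm prints the shortest decimal string that
// parses back to exactly the same Double.
print(Double(String(a)) == a)  // true: printing round-trips
```

Under the old behavior the second check would have failed, since `String(a)` produced `"25.0001"`, which parses to a different `Double`.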


I'm probably missing something obvious (perhaps the correct meaning of the word precision?), but could you please clarify what you mean when you say that the precision has not changed, when the Double value with bit pattern:

``````
0b01000000_00111001_00000000_00000110_10001101_10111000_10111010_11000110
``````

changed from being printed like this:

``````
25.0001
``````

to this:

``````
25.000099999999996
``````

?
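(For reference, the value can be reconstructed from that bit pattern directly; a minimal sketch using the standard library's `Double(bitPattern:)`:)

```swift
// Rebuild the Double from its raw IEEE 754 bit pattern.
let bits: UInt64 = 0b01000000_00111001_00000000_00000110_10001101_10111000_10111010_11000110
let value = Double(bitPattern: bits)

// What print(value) shows depends on the toolchain, but the
// underlying bits are identical either way:
print(value.bitPattern == bits)        // true
print(value == 25.000099999999996)     // true: same Double value
```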

Also, I noticed that:

``````
Double("25.0001") == 25.000099999999996 // false
Double("25.000099999999996") == 25.000099999999996 // true
``````

And I'm guessing that by

> an accurate decimal approximation

you mean that the following is true for all finite `Double` (and `Float` and `Float80`) values `x`:

``````
Double(String(x)) == x
``````

?

Ah, I mentally reversed your examples. It looks like the previous printing behavior was in fact inaccurate. The new algorithm should always print a minimal-length decimal value that parses back into the original floating-point value, so `Double(String(x))` always produces `x`.
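That round-trip guarantee can be spot-checked; a minimal sketch over a few hand-picked values plus finite values drawn from random bit patterns (illustrative, not an exhaustive proof):

```swift
// Check Double(String(x)) == x for some interesting values…
var samples: [Double] = [25.000099999999996, 0.1, 1e308, 5e-324, -Double.pi]

// …and for finite Doubles built from random bit patterns.
for _ in 0..<1000 {
    let candidate = Double(bitPattern: UInt64.random(in: .min ... .max))
    if candidate.isFinite { samples.append(candidate) }
}

print(samples.allSatisfy { Double(String($0)) == $0 })  // true
```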
