JSONEncoder / Encodable: Floating point rounding error

FWIW... I tested this in JavaScript and it seems to work:

const testResponse = `{"price":4.1799999999999997}`; // "{\"price\":4.1799999999999997}"
const asObject = JSON.parse(testResponse); // {price: 4.18}

So, perhaps it's a non-issue. My only concern is that it might still look odd in Postman... though possibly Postman also applies similar formatting in its pretty-print view.

At the risk of going off topic, I'd argue that the actual topic/issue is about understanding IEEE 754 floating point in general (including conversion to and from decimal strings in Swift, JSON/JavaScript, etc.):

I'm curious about what you mean by this. AFAIK:

  • Any Float value is exactly representable as a Double value, ie Float(Double(floatValue)).bitPattern == floatValue.bitPattern. (I'm using .bitPattern just to handle the fact that nans are not equal to any other nan, including themselves.)

  • Float(d) == Float(String(d)) for any Double value d where Float(d).isFinite && Float(d) != 0.

  • Float(doubleValue) is the Float value closest to doubleValue, ie doubleValue is "rounded" towards the closest representable Float value. (I guess I'm wrong here? If so, can you give an example?)
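Not an authoritative proof, but the first and third bullets can be spot-checked in a few lines of Swift (0.1 here is just an arbitrary probe value, not one from the thread):

```swift
// Claim 1: every Float value is exactly representable as a Double.
let f: Float = 0.1
assert(Float(Double(f)).bitPattern == f.bitPattern)

// Claim 3: Float(d) is the representable Float nearest to d,
// i.e. no neighboring Float is closer to d than Float(d) is.
let d = 0.1
let candidate = Float(d)
assert(abs(Double(candidate) - d) <= abs(Double(candidate.nextDown) - d))
assert(abs(Double(candidate) - d) <= abs(Double(candidate.nextUp) - d))

print("claims hold for this probe value")
```

Of course, one probe value proves nothing about the claims in general; the rest of the thread shows that the second bullet does have counterexamples.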

I wish I could find the example that came up in the past, but so far I haven't found it.

Even though someFloat and Double(someFloat) might be the exact same floating point value, there's no guarantee that String(someFloat) and String(Double(someFloat)) are the same string. The strings aren't actual values, but just recipes for constructing values.

The troublesome case is probably a conversion like Float(Double(String(someFloat))!). In that case, the recipe intended for a Float is being used to cook up a Double, which might not work out exactly as intended.
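To illustrate the "recipe" point: the same binary value can print differently depending on which type it is stored in. (0.1 here is my own arbitrary example, not a value from the thread.)

```swift
let someFloat: Float = 0.1
let sameValue = Double(someFloat)  // bit-for-bit the same quantity, just widened

print(String(someFloat))  // "0.1" — shortest string that parses back to this Float
print(String(sameValue))  // "0.10000000149011612" — shortest string for the same value as a Double
```

Each string is only guaranteed to round-trip through the type it was generated from.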


I don't see how that should be troublesome, ie it seems to me like:

Float(Double(String(someFloat))!) == someFloat

should hold, assuming someFloat.isFinite.


Example of a number where someFloat == Float(Double(String(someFloat))!) isn't true: 7.038531e-26

(found by bruteforce)
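For anyone who wants to reproduce that counterexample, a minimal sketch (the printed results are as reported later in this thread, on a toolchain where the concrete Float-from-Double initializer is used):

```swift
let someFloat = 7.038531e-26 as Float
let roundTripped = Float(Double(String(someFloat))!)

print(someFloat == roundTripped)         // false
print(someFloat.nextUp == roundTripped)  // true: the round trip lands one ulp high
```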


I wonder if this is by design (and if so, why?) or because of a bug, perhaps the one discussed here.

EDIT: AFAICT it seems to be that bug.

Demonstration program here.
func concrete(_ value: Double) -> Float {
  return Float.init(value) // Will call intrinsic
}
func generic<T: BinaryFloatingPoint>(_ value: T) -> Float {
  return Float.init(value) // Will call ._convert(from:)
}
extension String {
  func leftPadded(to minCount: Int, with char: Character = " ") -> String {
    return String(repeating: char, count: max(0, minCount - count)) + self
  }
}
extension BinaryFloatingPoint {
  var segmentedBinaryString: String {
    let e = String(exponentBitPattern, radix: 2)
    let s = String(significandBitPattern, radix: 2)
    return [self.sign == .plus ? "0" : "1", "_",
            e.leftPadded(to: Self.exponentBitCount, with: "0"), "_",
            s.leftPadded(to: Self.significandBitCount, with: "0")].joined()
  }
}
func test() {
  print("Please wait …")
  let startFloat = (7.038531e-26 as Float).nextDown
  let endFloat = (7.038531e-26 as Float).nextUp
  let endDouble = Double(endFloat)
  var d = Double(startFloat)
  let step = d.ulp
  var mc = 0
  while d <= endDouble {
    let a = concrete(d)
    let b = generic(d)
    if a != b {
      print("Found mismatched conversion (after \(mc) matching conversions):")
      print(" Double:  ", d.segmentedBinaryString, d)
      print(" concrete:", a.segmentedBinaryString, a)
      print(" generic: ", b.segmentedBinaryString, b)
      mc = 0
    } else {
      mc &+= 1
    }
    d += step
  }
}
test()

I've only tested it with the default toolchain of Xcode 12.1 (12A7403), but when I do, it prints:

Please wait …
Found mismatched conversion (after 805306368 matching conversions):
 Double:   0_01110101011_0101110010000111111110110000000000000000000000000000 7.038531e-26
 concrete: 0_00101011_01011100100001111111110 7.0385313e-26
 generic:  0_00101011_01011100100001111111101 7.038531e-26

Shouldn't this bug fix be in Xcode 12.1? @scanon @xwu

It's pretty easy to show what's going on in @cukr's example. The two closest representable Float values are:

7.0385306918512091208591880171403069741059913000... x 10**-26
7.0385313081487913247746609950532486012827332322... x 10**-26

and the two closest representable Double values are:

7.0385309999999990748732225312066332869750876350... x 10**-26
7.0385310000000002228169245060967777876943622661... x 10**-26

The string in question ("7.038531e-26") is just a tiny bit closer to the lower value in Float, so Float("7.038531e-26") returns that value. However it's closer to the upper value in Double, and that Double is closer to the upper Float value than the lower one, so it rounds up when converted to Float. This phenomenon is so common in floating-point arithmetic that it has a name ("double rounding"), and it's why conversions should always be done in a single step when possible. It can happen in almost any chain of conversions A -> B -> C where both steps round (String -> Double -> Float or Double -> Float -> Float16 are the two most common). It can be avoided in most cases by doing the first conversion in a special rounding mode ("round to odd"), which is something we might think about providing eventually.
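The one-step and two-step conversions described above can be compared directly. A sketch, using the string from the thread (results as they come out on a toolchain with correct concrete conversions):

```swift
let s = "7.038531e-26"

let oneStep = Float(s)!          // String -> Float: rounds once, to the nearer (lower) Float
let twoStep = Float(Double(s)!)  // String -> Double -> Float: rounds twice

print(oneStep == twoStep)         // false
print(oneStep.nextUp == twoStep)  // true: the second rounding pushed the result one ulp up
```

The intermediate Double here happens to be exactly the midpoint between the two candidate Floats (a Float midpoint is always representable as a Double), so the second rounding resolves the tie by round-to-even, landing on the upper Float.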


So this has nothing to do with the bug I mentioned above? Note that the demonstration program in the details (which was originally used to demonstrate SR-12312) reports @cukr's example value as a mismatched conversion.
(And the fix of SR-12312 doesn't seem to be in Swift 5.3 / Xcode 12.1.)

How could it have anything to do with that bug? There's no generics involved.



A related question, if I may: Why is eg Float("1e-46") == nil rather than 0?

That's a long-standing bug that @tbkka just fixed; if you check master it will produce a sensible value.
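To illustrate (my own sketch, assuming a toolchain that includes that fix): a decimal string below Float's subnormal range now rounds to the nearest representable value, which in this case is zero, instead of failing to parse.

```swift
// 1e-46 is smaller than Float's least nonzero subnormal (about 1.4e-45),
// and it is closer to 0 than to that subnormal, so correct rounding gives 0.
let tiny = Float("1e-46")
print(tiny as Any)  // Optional(0.0) on a fixed toolchain; nil on older ones
```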


Interestingly, the original issue was fixed two years ago in swift-corelibs-foundation, with the closing of SR-7195.

Nowadays it's only the closed Foundation implementation on Apple platforms where the JSON encoding could do better, as demonstrated in this Swift playground example:

Screenshot comparing the outputs of Foundation.JSONEncoder().encode(4.18) and SwiftFoundation.JSONEncoder().encode(4.18); the latter produces the UTF-8 data of "4.18" with no extra digits.


I tried your code in Xcode 12.2 beta 3 and got a compile error:

import Foundation
import SwiftFoundation

String(data: try! Foundation.JSONEncoder().encode(4.18), encoding: .utf8)!
String(data: try! SwiftFoundation.JSONEncoder().encode(4.18), encoding: .utf8)!

.xcplaygroundpage:9:19: error: module 'SwiftFoundation' has no member named 'JSONEncoder'
String(data: try! SwiftFoundation.JSONEncoder().encode(4.18), encoding: .utf8)!
^~~~~~~~~~~~~~~ ~~~~~~~~~~~

You need to clone apple/swift-corelibs-foundation, then build and make its SwiftFoundation module available to your Swift playground, e.g. by creating the playground within the same workspace. I don't think SwiftFoundation is available to import otherwise (unless you're running on Linux, where it's imported as Foundation).

I don't know why, but when I typed the code and ran it, the playground did not complain about import SwiftFoundation. And the error message says:

module 'SwiftFoundation' has no member named...

which reads like it has the SwiftFoundation module, but...

but when I close and reopen my playground project, it now shows:

No such module 'SwiftFoundation'

But wait, am I still confused or does it actually have to do with SR-12312 after all?

On my machine, which doesn't have the bug fix despite having the latest Xcode (see the comments in SR-12312):

$ swiftc --version
Apple Swift version 5.3 (swiftlang-1200.0.29.2 clang-1200.0.30.1)
Target: x86_64-apple-darwin19.6.0 <---

I'll see this:

let someFloat = 7.038531e-26 as Float
print(someFloat == Float(Double(String(someFloat))!))
// Prints false <---
print(someFloat == Float._convert(from: Double(String(someFloat))!).value)
// Prints true <---

(Those should both print true if SR-12312 is fixed, shouldn't they? Ie, there is generics involved, behind the scenes.)

But (and now I'm guessing) on eg @xwu's machine (with the bug fix):

$ swiftc --version
Apple Swift version 5.3 (swiftlang-1200.0.29.2 clang-1200.0.30.1)
Target: x86_64-apple-darwin20.1.0 <---

I think they'll see this:

let someFloat = 7.038531e-26 as Float
print(someFloat == Float(Double(String(someFloat))!))
// Prints true <---
print(someFloat == Float._convert(from: Double(String(someFloat))!).value)
// Prints true

Would you mind checking this @xwu?

Why do you think the behavior of the concrete initializers would change? Correcting SR-12312 aligns generic conversions to match the behavior of the concrete initializers:

let someFloat = 7.038531e-26 as Float
print(someFloat == Float(Double(String(someFloat))!))
// false
print(someFloat == Float._convert(from: Double(String(someFloat))!).value)
// false

This behavior demonstrates the concept of double rounding exactly as @scanon outlines above.

:man_facepalming::man_facepalming: (I think I mixed up which one of the two was generic ... Thank you both for helping me straighten this out, I'm finally no longer confused, I think. :)


This is a good example of a gripe I have with how entangled the Swift toolchain is with Xcode, etc. on Macs, as discussed on this thread. IMO, it would be better if the Swift toolchain for Macs looked a lot more like the one for Linux (no Xcode/Apple platform development stuff... all that could be an "extension" added on by Xcode). :man_shrugging:
