Converting a double string to Int64 after multiplying by 100 gives the wrong value

Hi,

I want to convert a double-string (like "0.29") to Int64 after multiplying by 100 using the following function.

func convertAmount(amountString: String?) -> Int64 {
    guard let amountText = amountString else {
        return 0
    }
    guard let amount = Double(amountText) else {
        return 0
    }
    return Int64(amount * 100.0)
}

amountString will always contain an amount with exactly two decimal places.
Examples of amountString: "0.00", "2.03", "10.10", "1000.01", "30.00"

Sample input and output:

1. input: "0.29"     output: 28       expected output: 29
2. input: "0.58"     output: 57       expected output: 58
3. input: "16.08"    output: 1607     expected output: 1608
4. input: "1025.12"  output: 102511   expected output: 102512
  1. Are you sure you shouldn't be using Decimal? It sounds like you're maybe doing what Decimal is for.

  2. Int64's initializer doesn't round. It truncates.
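A quick illustration of point 2 (playground-style sketch):

let x = 0.29 * 100      // 28.999999999999996, since 0.29 isn't exactly representable as a Double
Int64(x)                // 28: the initializer drops the fractional part
Int64(x.rounded())      // 29: rounding first gives the intended value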

A Swift unit test for this (using the Swift Testing framework) would look like:

import Testing

@Test func test() {
  #expect(convertAmount("0.29") == 29)
  #expect(convertAmount("0.58") == 58)
  #expect(convertAmount("16.08") == 16_08)
  #expect(convertAmount("1025.12") == 1025_12)
}

If you're using Int64, I don't know that NSDecimalNumber is available to you. If it is…

import Foundation

func convertAmount(_ amount: String?) -> Int64 {
  guard let decimal = amount.flatMap({ Decimal(string: $0) })
  else { return 0 }

  return .init(truncating: decimal * 100 as NSDecimalNumber)
}

If not,

func convertAmount(_ amount: String?) -> Int64 {
  guard let double = amount.flatMap(Double.init)
  else { return 0 }

  return .init((double * 100).rounded())
}

As @Quedlinbug hinted already, the issue is that Double(amountString) rounds to the nearest representable Double, which might be smaller than the number you intend to represent. The conversion to Int64 then truncates, producing a smaller result than you expect. Working through one of your examples:

let amountText = "0.29"
let amount = Double(amountText)
// The two closest Double values are:
// 0.289999999999999980015985556747182272374629974365234375
// 0.29000000000000003552713678800500929355621337890625
// Inspecting them, we see that the smaller one (the first)
// is a bit closer to "0.29", so that's the value we get.

// Multiplying by 100 gives us:
// 28.999999999999996447286321199499070644378662109375
let scaled = 100 * amount!

// and converting that to integer truncates to 28.
let result = Int64(scaled)

Since, in your case, you know that amountString always has exactly two decimal places, you can resolve this by rounding after multiplying by 100, before the conversion to Int64:

Int64((amount * 100).rounded())

Using Decimal is also viable, and is a more generally applicable solution.


Hi! What we did long ago (when your C compiler fit on a single 3 1/2" floppy) was to add 0.5 before casting to an integer. Maybe that would help here, too.

#include <stdio.h>

int main(int argc, char** argv) {
        double d = 0.29;
        int i = (int)(d * 100 + 0.5);
        printf("%f -> %d\n", d, i);
}

which gives

% ./round            
0.290000 -> 29

This works great right up until you get "-0.29" and suddenly the result is "-28" (or when you get 4503599627370497 and adding 0.5 rounds up to 4503599627370498). Better to use the standard library API that handles all these cases for you: .rounded().
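For a concrete comparison of both failure modes mentioned above (playground-style sketch):

let cents = Double("-0.29")! * 100       // -28.999999999999996
Int64(cents + 0.5)                       // -28: adding 0.5 moves the wrong way for negative values
Int64(cents.rounded())                   // -29: .rounded() handles the sign for you

let big = 4503599627370497.0             // 2^52 + 1, exactly representable as a Double
Int64(big + 0.5)                         // 4503599627370498: the +0.5 sum is an exact tie and rounds to even
Int64(big.rounded())                     // 4503599627370497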


You're right, it's probably better to use .rounded() or something similar here.
What I was pointing at, though, is that you can't just cast to an integer type. Maybe I wasn't clear enough.

Still, about "This works great right up until you get "-0.29" and suddenly the result is -28": isn't this expected? I mean, it's not sudden; rounded() behaves the same:

  7> (Double("-0.29")! * 100.0 + 0.5).rounded() 
$R6: (Double) = {
  _value = -28
}

and

% ./round2 
-0.290000 -> -28 -> -28

@scanon meant using .rounded() instead of adding 0.5.


Seems to me like you could also just remove the decimal point at the string level, then parse the result as an integer. Semantically it’s the same thing as multiplying by 100.

Then you don’t go through floating-point numbers at all, and don’t need to care about rounding or unrepresentable values.

Simplicity has value, too.
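A minimal sketch of that idea, assuming the input always has exactly two decimal places as stated in the original post (not a hardened implementation):

func convertAmount(_ amountString: String?) -> Int64 {
  guard let text = amountString else { return 0 }
  // "1025.12" -> "102512"; parsing the remaining digits as an integer is the same as multiplying by 100.
  let digits = String(text.filter { $0 != "." })
  return Int64(digits) ?? 0
}

convertAmount("1025.12")   // 102512
convertAmount("-0.29")     // -29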


I see now. Anyway, just adding 0.5 or -0.5 before casting to an integer, depending on the sign, achieves the same. Then again, that wasn't the point of my answer, which was that just casting to an integer is usually not what you want to do.
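For reference, that sign-aware variant would look something like this (sketch only; .rounded() remains the simpler spelling):

let scaled = Double("-0.29")! * 100          // -28.999999999999996
Int64(scaled + (scaled < 0 ? -0.5 : 0.5))    // -29: offset toward the sign before truncating
Int64(scaled.rounded())                      // -29: same result from the standard library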

Simplicity has a lot of value in this case, because even though the approach using .rounded() works with "regular" amounts, it will fail for sufficiently large amounts, where the nearest representable value can be more than 0.01 away from the real value.
So, I agree with you: it's probably better to avoid floating-point numbers for this conversion. Either modify the string as you said, or parse two integers and combine them.
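A sketch of the two-integer variant, again assuming exactly two decimal places (the sign is the only subtle part, since "-0.29" has a whole part of 0):

func convertAmount(_ amountString: String?) -> Int64 {
  guard let text = amountString else { return 0 }
  let parts = text.split(separator: ".")
  guard parts.count == 2,
        let whole = Int64(parts[0]),   // for "-0.29" this is 0, so the sign must come from the string
        let cents = Int64(parts[1])
  else { return 0 }
  let sign: Int64 = text.hasPrefix("-") ? -1 : 1
  return whole * 100 + sign * cents
}

convertAmount("1025.12")   // 102512
convertAmount("-0.29")     // -29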


I recall someone proposed to use Decimal rather than Double.

 13> NSDecimalNumber(decimal: Decimal(string:"0.29")! * Decimal(string: "100")!).decimalValue
$R7: Decimal = 29.000000

 14> NSDecimalNumber(decimal: Decimal(string:"-0.29")! * Decimal(string: "100")!).decimalValue 
$R8: Decimal = -29.000000

 15> NSDecimalNumber(decimal: Decimal(string:"0.29")! * Decimal(string: "100")!).int64Value
$R9: Int64 = 29

 16> NSDecimalNumber(decimal: Decimal(string:"-0.29")! * Decimal(string: "100")!).int64Value 
$R10: Int64 = -29

I believe this would solve the original issue. The observed behavior seems to be the same in C/C++.

Note that big numbers (~15 decimal digits) won't work correctly even if .rounded() is used, e.g.:

"4503599627370497.00" or "922337203685477.58" will result in 450359962737049728 and 92233720368547760, respectively.

The first big number that doesn't work is "35184372088832.02"; it gets rounded to 3518437208883203. Notably, its integer part is 2^45.

int64Value behaves the same as the "casting" used above:

let wouldRoundTo5 = 4.567 as NSDecimalNumber
#expect(wouldRoundTo5.int64Value == .init(wouldRoundTo5)) // 4

The spelling was changed in Swift 4 to .init(truncating:). Judging by the existence of this thread, that added clarity seems like it was a good idea, and maybe it should be required for all conversions to integers.

(Decimal is not magic; it's just got a 128-bit mantissa.)