As @Quedlinbug hinted already, the issue you're having is that Double(amountText) rounds to the nearest representable Double, which might be smaller than the number you intend to represent. The conversion to Int64 then truncates, resulting in a smaller result than you expect. Working through one of your examples:
let amountText = "0.29"
let amount = Double(amountText)
// The two closest Double values are:
// 0.289999999999999980015985556747182272374629974365234375
// 0.29000000000000003552713678800500929355621337890625
// Inspecting them, we see that the smaller one (the first)
// is a bit closer to "0.29", so that's the value we get.
// Multiplying by 100 gives us:
// 28.999999999999996447286321199499070644378662109375
let scaled = 100 * amount!
// and converting that to integer truncates to 28.
let result = Int64(scaled)
Since, in your case, you know that the amount always has at most two decimal digits (i.e. the true value is a whole number of cents), you can resolve this by rounding to an integer after multiplying by 100, and before converting to integer:
Int64((amount! * 100).rounded())
Using Decimal is also viable, and is a more generally applicable solution.
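For example, here's a minimal sketch of the Decimal route (the names are just illustrative). Decimal(string:) parses the decimal digits exactly, so there's no binary rounding to undo:
import Foundation

let amountText = "0.29"
if let amount = Decimal(string: amountText) {
    // 0.29 is represented exactly as a Decimal, so scaling by 100 gives exactly 29.
    let cents = NSDecimalNumber(decimal: amount * 100).int64Value
    print(cents) // 29
}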
Hi! What we did long ago (when your C compiler fit on a single 3 1/2" floppy) was to add 0.5 before casting to integer. Maybe that would help here, too.
#include <stdio.h>

int main(int argc, char** argv) {
    double d = 0.29;
    /* Add 0.5 so the truncating cast rounds to the nearest integer. */
    int i = (int)(d * 100 + 0.5);
    printf("%f -> %d\n", d, i);
    return 0;
}
This works great right up until you get "-0.29" and suddenly the result is "-28" (or when you get 4503599627370497 and adding 0.5 rounds up to 4503599627370498). Better to use the standard library API that handles all these cases for you: .rounded().
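To make that concrete, here's a quick check of both failure cases, using the values from this thread:
let negative = -0.29 * 100       // -28.999999999999996...
print(Int64(negative + 0.5))     // -28: adding 0.5 moves a negative value toward zero
print(Int64(negative.rounded())) // -29: rounds to nearest, away from zero on ties

let big = 4503599627370497.0     // 2^52 + 1, exactly representable as a Double
print(Int64(big + 0.5))          // 4503599627370498: big + 0.5 isn't representable and rounds up
print(Int64(big.rounded()))      // 4503599627370497: already an integer, unchanged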
You're right, it's probably better to use .rounded() or something similar here.
What I was pointing at, though, is that you can't just cast to an integer type. Maybe I wasn't clear enough.
Still, regarding "This works great right up until you get '-0.29' and suddenly the result is '-28'": isn't this expected? I mean, it's not sudden. rounded() behaves the same.
Seems to me like you could also just remove the decimal point at the string level, then parse the result as an integer. Semantically it’s the same thing as multiplying by 100.
Then you don’t go through floating-point numbers at all, and don’t need to care about rounding or unrepresentable values.
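For instance, something like this sketch (the helper name cents(from:) is just illustrative, and it assumes the input always has exactly two digits after the decimal point; real code would want stricter validation):
func cents(from text: String) -> Int64? {
    let parts = text.split(separator: ".")
    guard parts.count == 2, parts[1].count == 2,
          let whole = Int64(parts[0]),
          let fraction = Int64(parts[1]) else { return nil }
    // "-0" parses to 0, so carry the sign separately:
    // "-0.29" should give -29, not 29.
    let sign: Int64 = text.hasPrefix("-") ? -1 : 1
    return whole * 100 + sign * fraction
}

print(cents(from: "0.29") as Any)  // Optional(29)
print(cents(from: "-0.29") as Any) // Optional(-29)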
I see now. Anyway, just adding 0.5 or -0.5 before casting to integer, depending on sign, achieves the same. Then again, that wasn't the point of my answer, which was that just casting to integer is usually not what you want to do.
Simplicity has a lot of value in this case, because even though the approach using .rounded() works with “regular” amounts, it will fail for large enough amounts, where the nearest representable Double can be more than half a cent away from the real value.
So, I agree with you: it’s probably better to avoid floating-point numbers for this conversion. Either modify the string as you said, or parse two integers and combine them.
int64Value is the same as the "casting" used above:
import Foundation
import Testing

let wouldRoundTo5 = 4.567 as NSDecimalNumber
#expect(wouldRoundTo5.int64Value == .init(truncating: wouldRoundTo5)) // both are 4, not 5
The spelling was changed in Swift 4 to .init(truncating:). Judging by this thread's existence, that added clarity seems like it was a good idea, and maybe it should be required for all conversions to integers.