Not optimal, but you could initialize with an Int and divide by 10 until you get the result you want.
Or, if you know the ‘scale’ of the number you want, you could round to that scale after initializing from a Double to get rid of the inaccuracy.
I have a roundToScale(scale, mode) in my code for a similar purpose.
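For reference, a minimal sketch of both suggestions; rounded(toScale:mode:) is just an illustrative helper built on Foundation's NSDecimalRound, standing in for the roundToScale mentioned above:

import Foundation

extension Decimal {
    /// Rounds to `scale` fractional digits with the given rounding mode
    /// (an illustrative stand-in for the roundToScale(scale, mode) above).
    func rounded(toScale scale: Int,
                 mode: NSDecimalNumber.RoundingMode = .plain) -> Decimal {
        var input = self
        var output = Decimal()
        NSDecimalRound(&output, &input, scale, mode)
        return output
    }
}

// Building the value from an Int and dividing by a power of ten:
let fromInt = Decimal(3133) / 1000                    // 3.133
// Initializing from a Double, then rounding to the known scale:
let fromDouble = Decimal(3.133).rounded(toScale: 3)   // 3.133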
Thanks, though I agree with you that it is not optimal. I started writing an init that does what I expected Decimal(string:) to do, so that I could at least serialize to and from String in a reliable way (assuming "\(decimal)" works as expected). But Decimal doesn't conform to LosslessStringConvertible for some reason, and I don't trust this type at all anymore.
Given these and other issues (eg JSONDecoder doesn't decode JSON to Decimal reliably), it seems Decimal is creating more problems than it solves.
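For concreteness, the JSONDecoder behaviour can be reproduced like this (Price is just an illustrative type):

import Foundation

struct Price: Decodable {
    let amount: Decimal
}

let json = Data(#"{"amount": 3.133}"#.utf8)
let price = try! JSONDecoder().decode(Price.self, from: json)
print(price.amount)   // not 3.133; the value goes through Double on its way in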
Does anyone know if there are any plans to add an arbitrary precision decimal type like BigDecimal to Swift (and fix JSONDecoder and JSONEncoder if necessary)?
I haven’t heard about plans for arbitrary precision types in the language. I do remember some talk about the possibility of a decimal literal and an ExpressibleByDecimalLiteral type at some point.
I use doubles for serializing my decimals, but they always have a scale of 6 decimals or less, so I ‘cap’ to that to avoid Double precision errors.
I think Decimal is and should be expressible by float literal, but the problem is that float literals in Swift are always converted to Double on their way to eg a Decimal.
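A quick illustration of that conversion (the printed digits are the ones quoted further down in the thread):

import Foundation

let viaLiteral: Decimal = 3.133   // the float literal becomes a Double first…
print(viaLiteral)                 // …so this prints 3.132999999999999488, not 3.133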
import Foundation
import XCTest

let string = "-17.01"
XCTAssertNotEqual("\(-17.01 as Decimal)", string)
XCTAssertEqual("\(Decimal(integerAndFraction: -17.01))", string)
public extension Decimal {
    /// A `Decimal` version of a number, with extraneous floating-point digits rounded away.
    init<IntegerAndFraction: BinaryFloatingPoint>(
        integerAndFraction: IntegerAndFraction,
        fractionalDigitCount: UInt = 2
    ) {
        self.init(
            sign: integerAndFraction.sign,
            exponent: -Int(fractionalDigitCount),
            significand: Self(Int(
                (integerAndFraction
                    * IntegerAndFraction(Self.radix.toThe(fractionalDigitCount))
                ).rounded()
            ))
        )
    }
}
public extension Numeric {
    /// Raise this base to a `power`.
    func toThe<Power: UnsignedInteger>(_ power: Power) -> Self
    where Power.Stride: SignedInteger {
        power == 0
            ? 1
            : (1..<power).reduce(self) { result, _ in result * self }
    }
}
EDIT: Oh nvm, I missed <IntegerAndFraction: BinaryFloatingPoint>
Interesting partial workaround! But eg: Decimal(integerAndFraction: 1234567890.0123456789, fractionalDigitCount: 10)
will result in a runtime crash:
significand: Self(Int( // <-- Thread 1: Fatal error: Double value cannot be converted to Int because it is outside the representable range
and a non-crashing example:
let a = Decimal(string: "1234567890.1234567")!
let b = Decimal(integerAndFraction: 1234567890.1234567, fractionalDigitCount: 7)
print(a) // 1234567890.1234567
print(b) // 1234567890.1234568
I've been working on some financial accounting software, and I didn't want to think about rounding errors from floating point. I don't need super fast arithmetic; I just want it to stay exact. I used the BigInt Swift package here and made myself a BigDecimal class. I don't know if I did it the way you are "supposed to", but I did it and moved on. I store two integers (p, q) to make a rational number. A gcd function is useful in reducing it to a canonical form. I constrain the denominator to be a power of ten, so I don't get things like 1/3 with an infinite decimal expansion. Now I can have numbers like "100,000,000,000,000.00000123" without wondering about IEEE floating-point details.
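I can't speak for the original class, but here is a minimal sketch of that representation, generic over the integer type so a BigInt-style type (e.g. from the BigInt package) or plain Int can be plugged in; the name ScaledDecimal and its helpers are made up for illustration:

// A value is stored as units / 10^scale, i.e. a rational number p/q whose
// denominator q is constrained to a power of ten, so 1/3-style infinite
// expansions can't occur. Units can be Int for small values, or an
// arbitrary-precision type such as the one in the BigInt package.
struct ScaledDecimal<Units: SignedInteger> {
    private(set) var units: Units   // numerator p
    private(set) var scale: Int     // denominator q is 10^scale

    init(units: Units, scale: Int) {
        self.units = units
        self.scale = scale
        normalize()
    }

    /// Canonical form: divide out shared factors of ten (the power-of-ten
    /// special case of reducing p/q with a gcd).
    private mutating func normalize() {
        while scale > 0 && units % 10 == 0 {
            units /= 10
            scale -= 1
        }
    }

    /// Bring two values to a common denominator before adding or comparing.
    private static func aligned(_ a: Self, _ b: Self) -> (Units, Units, Int) {
        let common = Swift.max(a.scale, b.scale)
        func raise(_ x: Self) -> Units {
            var units = x.units
            for _ in x.scale..<common { units *= 10 }
            return units
        }
        return (raise(a), raise(b), common)
    }

    static func + (lhs: Self, rhs: Self) -> Self {
        let (a, b, common) = aligned(lhs, rhs)
        return Self(units: a + b, scale: common)
    }

    static func * (lhs: Self, rhs: Self) -> Self {
        Self(units: lhs.units * rhs.units, scale: lhs.scale + rhs.scale)
    }
}

// 100,000,000,000,000.00000123 would be ScaledDecimal(units: 10_000_000_000_000_000_000_123,
// scale: 8) with a big-integer Units type; a plain Int64 would overflow here.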
Currently, init from string is the only option. As you wrote, let dec1 = Decimal(string: "3.133")! gives the expected result: 3.133.
As for let s = "12å3.456äö" – you can write a SwiftLint rule for this. It is not an ideal solution, but it works. A SwiftLint rule is rather easy to implement with a simple regular expression that allows only the digits 0-9 and the '.' symbol.
Further, you can write a special initializer, like init(validatedLiteral: String) { self.init(string: validatedLiteral)! }
and improve the SwiftLint rule so that it allows creating a Decimal through this initializer and reports an error when init(string:) is used.
For example:
let decimal = Decimal(string: "3.133") // Error
func someFunction(aString: String) {
    let decimal = Decimal(validatedLiteral: aString) // Error, the argument is not a literal the rule can check
}
let decimal = Decimal(validatedLiteral: "3.133") // OK, SwiftLint checks the literal value with the regex
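For completeness, a sketch of that initializer as a full extension (the runtime behaviour is just the force-unwrapped init(string:); the actual validation is the SwiftLint regex applied to the literal at the call site):

import Foundation

extension Decimal {
    /// Intended to be called only with literals that the SwiftLint rule has
    /// already matched against something like ^[0-9]+(\.[0-9]+)?$.
    init(validatedLiteral: String) {
        self.init(string: validatedLiteral)!
    }
}

let price = Decimal(validatedLiteral: "3.133")   // 3.133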
That’s a really neat trick, to create a String representation of the Double and use that to initialize the Decimal.
Do you know what mechanism makes the Double 3.133 (actually 3.132999999999999488 as you write in your documentation) render as the String “3.133”?
I could be misremembering, but I believe that when rendering to a String, Double will use the minimum number of digits required for the value to be recovered losslessly. E.g., since there's no valid Double value closer to 3.133 than 3.132999999999999488, we don't need more precision to recover the exact same value.
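In code, that's why going through the String works where the direct Double conversion doesn't:

import Foundation

let double = 3.133
// "\(double)" produces the shortest decimal string that round-trips to the same Double…
let viaString = Decimal(string: "\(double)")!   // 3.133
// …whereas converting the binary value directly carries the error along:
let direct = Decimal(double)                    // 3.132999999999999488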
Couldn’t / shouldn’t the trick that @davdroman’s neat library uses - or some other implementation that exploits the same knowledge - be used in the actual Double initializer for Decimal? And for decoding too?
Or even better, have built-in language support for Decimal literals. The JSON coding issue is caused by the internal implementation of JSONSerialization (and JSONDecoder by proxy), which does Data -> Double -> Decimal conversion instead of direct Data -> Decimal conversion.