How to initialize Decimal?

Can't use float literals (because Swift only has double-precision floating point literals):

let x = Decimal(123.456)
print(x) // 123.45599999999997952

Can't use Decimal(string:) because it has this unexpected and undocumented behavior:

let s = "12å3.456äö"
if let x = Decimal(string: s) {
    print(x)
} else {
    print("Failed to initialize Decimal value from String")
}

Will print
12

let s = "-hello"
if let x = Decimal(string: s) {
    print(x)
} else {
    print("Failed to initialize Decimal value from String")
}

Will print
0

let s = ".world"
if let x = Decimal(string: s) {
    print(x)
} else {
    print("Failed to initialize Decimal value from String")
}

Will also print
0

let s = "uh"
if let x = Decimal(string: s) {
    print(x)
} else {
    print("Failed to initialize Decimal value from String")
}

Will print
Failed to initialize Decimal value from String


Is there a recommended non-problematic way?

7 Likes

Not optimal, but you could initialize with an Int and divide by 10 until you get the result you want.
Or if you know the ‘scale’ of the number you want you could round to that scale after initializing from a Double in order to get rid of the inaccuracy.
I have a roundToScale(scale, mode) in my code for a similar purpose.
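
For example, a rough sketch of the integer-then-divide idea (not my actual code):

import Foundation

// 123.456 built exactly: start from the integer 123456 and shift the decimal
// point three places, either by dividing by a power of ten...
let x = Decimal(123456) / Decimal(1000)
print(x) // 123.456

// ...or with the sign/exponent/significand initializer, which applies the
// same shift directly.
let y = Decimal(sign: .plus, exponent: -3, significand: 123456)
print(y) // 123.456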

Thanks, though I agree with you that it is not optimal. I started writing an init that does what I expected Decimal(string:) to do, so that I could at least serialize to and from String reliably (assuming "\(decimal)" works as expected). But Decimal does not conform to LosslessStringConvertible for some reason, and I don't trust this type at all anymore.
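
Something along these lines is what I have in mind (just a sketch; the strict label and the regex are only for illustration):

import Foundation

extension Decimal {
    // Sketch of a stricter initializer: fail unless the whole string is a
    // plain decimal number, instead of silently parsing a prefix the way
    // Decimal(string:) does.
    init?(strict string: String) {
        let pattern = "^[+-]?[0-9]+(\\.[0-9]+)?$"
        guard string.range(of: pattern, options: .regularExpression) != nil,
              let value = Decimal(string: string)
        else { return nil }
        self = value
    }
}

print(Decimal(strict: "123.456") ?? "nil")    // 123.456
print(Decimal(strict: "12å3.456äö") ?? "nil") // nil
print(Decimal(strict: "-hello") ?? "nil")     // nil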

Given these and other issues (eg JSONDecoder doesn't decode JSON to Decimal reliably), it seems Decimal is creating more problems than it solves.

Does anyone know if there are any plans to add an arbitrary precision decimal type like BigDecimal to Swift (and fix JSONDecoder and JSONEncoder if necessary)?

I haven’t heard of any plans for arbitrary precision types in the language. I do remember some talk about the possibility of a decimal literal and an ExpressibleByDecimalLiteral type at some point.

I use doubles for serializing my decimals, but they always have a scale of 6 decimals or less, so I ‘cap’ to that to avoid Double precision errors.

Not sure what you mean; there will always be errors when going via floating point:

let dec1 = Decimal(string: "3.133")!
let dbl: Double = NSDecimalNumber(decimal: dec1).doubleValue
let dec2 = Decimal(dbl)
print(dec1)
print(dec2)

Will print:

3.133
3.132999999999999488

I cap the value using my roundToScale. This basically multiplies by 10^scale, rounds using rounding mode and divides back down by 10^scale.

Again not an optimal solution, but it’s ok for my use case.

Edit: I do this in a type that wraps Decimal so that I can fix up the values during decoding.
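
For reference, a minimal sketch of that kind of scale-rounding helper (the name and defaults are just illustrative; Foundation’s NSDecimalRound collapses the multiply/round/divide steps into a single call):

import Foundation

extension Decimal {
    // Illustrative helper (name and defaults are assumptions, not the
    // poster's actual code): round to `scale` fractional digits.
    func roundedToScale(_ scale: Int,
                        mode: NSDecimalNumber.RoundingMode = .plain) -> Decimal {
        var input = self
        var output = Decimal()
        NSDecimalRound(&output, &input, scale, mode)
        return output
    }
}

let fromDouble = Decimal(3.133)      // 3.132999999999999488
print(fromDouble.roundedToScale(3))  // 3.133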

Decimal conforming to ExpressibleByFloatLiteral should be considered actively harmful.

8 Likes

I think Decimal is and should be expressible by float literal, but the problem is that float literals in Swift are always converted to Double on their way to eg a Decimal.

See
https://bugs.swift.org/browse/SR-7124
https://bugs.swift.org/browse/SR-920

I agree it could have made sense to not let Decimal conform to ExpressibleByFloatLiteral until these issues had been solved.

Users now have to remember not to use float literals together with Decimal values, which they will only do if they happen to know about this issue.

8 Likes

I do it like this. Can you break it?

import Foundation
import XCTest

let string = "-17.01"
XCTAssertNotEqual("\(-17.01 as Decimal)", string)
XCTAssertEqual("\(Decimal(integerAndFraction: -17.01))", string)

public extension Decimal {
  /// A `Decimal` version of a number, with extraneous floating point digits truncated.
  init<IntegerAndFraction: BinaryFloatingPoint>(
    integerAndFraction: IntegerAndFraction,
    fractionalDigitCount: UInt = 2
  ) {
    self.init(
      sign: integerAndFraction.sign,
      exponent: -Int(fractionalDigitCount),
      significand: Self(Int(
        (integerAndFraction
          * IntegerAndFraction(Self.radix.toThe(fractionalDigitCount))
        ).rounded()
      ))
    )
  }
}

public extension Numeric {
  /// Raise this base to a `power`.
  func toThe<Power: UnsignedInteger>(_ power: Power) -> Self
  where Power.Stride: SignedInteger {
    power == 0
    ? 1
    : (1..<power).reduce(self) { result, _ in result * self }
  }
}

What is the IntegerAndFraction type?

EDIT: Oh nvm, I missed <IntegerAndFraction: BinaryFloatingPoint> :man_facepalming:


Interesting partial workaround! But eg:
Decimal(integerAndFraction: 1234567890.0123456789, fractionalDigitCount: 10)
will result in a runtime crash:

            significand: Self(Int(       // <-- Thread 1: Fatal error: Double value cannot be converted to Int because it is outside the representable range

and a non-crashing example:

let a = Decimal(string: "1234567890.1234567")!
let b = Decimal(integerAndFraction: 1234567890.1234567, fractionalDigitCount: 7)
print(a) // 1234567890.1234567
print(b) // 1234567890.1234568
2 Likes

Perhaps you’re right. I remember this discussion from when the ‘ExpressibleByXxx’ protocols were still called ‘XxxConvertible’:

And this:

1 Like

I've been working on some financial accounting software, and I didn't want to think about rounding errors from floating point. I don't need super fast arithmetic; I just want it to stay exact. I used the BigInt Swift package here and made myself a BigDecimal class. I don't know if I did it the way you are "supposed to", but I did it and moved on. I store two integers (p, q) to make a rational number. A gcd function is useful in reducing it to a canonical form. I constrain the denominator to be a power of ten, so I don't get things like 1/3 with an infinite decimal expansion. Now I can have numbers like "100,000,000,000,000.00000123" without wondering about IEEE floating point details.
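
A rough sketch of that representation, assuming the attaswift BigInt package (the actual class may of course look different):

import BigInt // https://github.com/attaswift/BigInt

// Value is digits / 10^scale, so the denominator is always a power of ten
// and arithmetic stays exact. Sketch only; the poster's class may differ.
struct BigDecimal {
    private(set) var digits: BigInt
    private(set) var scale: Int

    init(digits: BigInt, scale: Int) {
        self.digits = digits
        self.scale = scale
        // Canonical form: drop factors of ten shared by numerator and denominator.
        while self.scale > 0 && self.digits % 10 == 0 {
            self.digits /= 10
            self.scale -= 1
        }
    }

    static func + (lhs: BigDecimal, rhs: BigDecimal) -> BigDecimal {
        // Bring both operands to the larger scale, then add the numerators exactly.
        let scale = max(lhs.scale, rhs.scale)
        let l = lhs.digits * BigInt(10).power(scale - lhs.scale)
        let r = rhs.digits * BigInt(10).power(scale - rhs.scale)
        return BigDecimal(digits: l + r, scale: scale)
    }
}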

2 Likes

Hi.

Currently, init from string is the only option. As you wrote, let dec1 = Decimal(string: "3.133")! gives the expected result: 3.133.

As for let s = "12å3.456äö" – you could write a SwiftLint rule for this. It is not an ideal solution, but it works. The rule is easy to implement with a simple regular expression that allows only the digits 0-9 and the '.' character.
Further, you can write a special initializer, like
init(validatedLiteral: String) { self.init(string: validatedLiteral)! }
and extend the SwiftLint rule so that it only allows creating a Decimal through this initializer and reports an error whenever init(string:) is used directly.
For example:

let decimal = Decimal(string: "3.133") // Error

func someFunction(aString: String) {
  let decimal = Decimal(validatedLiteral: aString) // Error
}

let decimal = Decimal(validatedLiteral: "3.133") // OK, SwiftLint checks the literal value with the regex
1 Like

I wrote this library to address this shortcoming.

1 Like

That’s a really neat trick, creating a String representation of the Double and using that to initialize the Decimal.
Do you know what mechanism makes the Double 3.133 (actually 3.132999999999999488, as you write in your documentation) render as the String “3.133”?

I could be misremembering, but I believe that when rendering to a String, Double will use the minimum number of digits required for the value to be recovered losslessly. E.g., since there's no valid Double value closer to 3.133 than 3.132999999999999488, we don't need more precision to recover the exact same value.
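
In code, that round trip looks something like this (a simplified sketch, not the library's actual implementation):

import Foundation

// Double's description is the shortest digit string that still round-trips to
// the same Double, so 3.133 survives the detour through Double.
let double = 3.133
let viaString = Decimal(string: "\(double)")!
print(viaString) // 3.133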

5 Likes

Thank you for the explanation! That always seemed magic to me. :grin:

Right. More details here: https://github.com/apple/swift/blob/main/include/swift/Runtime/SwiftDtoa.h

2 Likes

Couldn’t / shouldn’t the trick that @davdroman’s neat library uses (or some other implementation that exploits the same knowledge) be used in the actual Double initializer for Decimal? And for decoding too?

Or even better, have built-in language support for Decimal literals. The JSON coding issue is caused by the internal implementation of JSONSerialization (and JSONDecoder by proxy), which does Data -> Double -> Decimal conversion instead of direct Data -> Decimal conversion.
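
A small illustration of that Data -> Double -> Decimal round trip (the exact output depends on the Foundation version; newer implementations may decode Decimal directly):

import Foundation

struct Payload: Decodable {
    let amount: Decimal
}

let json = Data(#"{"amount": 3.133}"#.utf8)
let payload = try JSONDecoder().decode(Payload.self, from: json)
print(payload.amount) // 3.132999999999999488 on Foundation versions that parse
                      // JSON numbers as Double before converting to Decimal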

1 Like