# How to initialize Decimal?

I cap the value using my `roundToScale`. It basically multiplies by 10^scale, rounds using a rounding mode, and divides back down by 10^scale.

Again not an optimal solution, but it’s ok for my use case.

Edit: I do this in a type that wraps Decimal so that I can fix up the values during decoding.
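For reference, a minimal sketch of what such a cap can look like (`roundedToScale` is an illustrative name, not the poster's actual code; it leans on `NSDecimalNumber`'s rounding behavior):

```swift
import Foundation

extension Decimal {
    /// Illustrative reconstruction of the capping described above:
    /// multiply by 10^scale, round with a rounding mode, divide back down.
    func roundedToScale(_ scale: Int, mode: NSDecimalNumber.RoundingMode = .plain) -> Decimal {
        let power = pow(Decimal(10), scale) // 10^scale
        let handler = NSDecimalNumberHandler(
            roundingMode: mode, scale: 0,
            raiseOnExactness: false, raiseOnOverflow: false,
            raiseOnUnderflow: false, raiseOnDivideByZero: false
        )
        let scaledAndRounded = NSDecimalNumber(decimal: self * power)
            .rounding(accordingToBehavior: handler) // round to an integer
        return scaledAndRounded.decimalValue / power
    }
}

let capped = Decimal(string: "3.1329999")!.roundedToScale(2)
print(capped) // 3.13
```

(Foundation's `NSDecimalRound` can do the scale rounding in a single step; the multiply/round/divide form above just mirrors the description.)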

`Decimal` conforming to `ExpressibleByFloatLiteral` should be considered an actively harmful issue.

7 Likes

I think `Decimal` is and should be expressible by float literal, but the problem is that float literals in Swift are always converted to `Double` on their way to, e.g., a `Decimal`.

I agree it could have made sense not to let `Decimal` conform to `ExpressibleByFloatLiteral` until these issues had been solved.

Users must now remember not to use float literals with `Decimal` values unless they are aware of this pitfall.
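To make the pitfall concrete (a small demonstration; the exact digits printed depend on Foundation's `Decimal` description):

```swift
import Foundation

let viaLiteral: Decimal = 3.133            // literal goes through Double first
let viaString = Decimal(string: "3.133")!  // parsed digit-by-digit

print(viaLiteral)               // something like 3.132999999999999488
print(viaString)                // 3.133
print(viaLiteral == viaString)  // false
```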

8 Likes

I do it like this. Can you break it?

```swift
let string = "-17.01"
XCTAssertNotEqual("\(-17.01 as Decimal)", string)
XCTAssertEqual("\(Decimal(integerAndFraction: -17.01))", string)
```
```swift
public extension Decimal {
    /// A `Decimal` version of a number, with extraneous floating point digits truncated.
    init<IntegerAndFraction: BinaryFloatingPoint>(
        integerAndFraction: IntegerAndFraction,
        fractionalDigitCount: UInt = 2
    ) {
        self.init(
            sign: integerAndFraction.sign,
            exponent: -Int(fractionalDigitCount),
            significand: Self(Int(
                (integerAndFraction.magnitude
                    * IntegerAndFraction(10).toThe(fractionalDigitCount)
                ).rounded()
            ))
        )
    }
}
```
```swift
public extension Numeric {
    /// Raise this base to a `power`.
    func toThe<Power: UnsignedInteger>(_ power: Power) -> Self
    where Power.Stride: SignedInteger {
        power == 0
            ? 1
            : (1..<power).reduce(self) { result, _ in result * self }
    }
}
```

What is the `IntegerAndFraction` type?

EDIT: Oh nvm, I missed `<IntegerAndFraction: BinaryFloatingPoint>`. Interesting partial workaround! But e.g.:
`Decimal(integerAndFraction: 1234567890.0123456789, fractionalDigitCount: 10)`
will result in a runtime crash:

```swift
significand: Self(Int( // <-- Thread 1: Fatal error: Double value cannot be converted to Int because it is outside the representable range
```

and a non-crashing example:

```swift
let a = Decimal(string: "1234567890.1234567")!
let b = Decimal(integerAndFraction: 1234567890.1234567, fractionalDigitCount: 7)
print(a) // 1234567890.1234567
print(b) // 1234567890.1234568
```
2 Likes

Perhaps you’re right. I remember this discussion from when the ‘ExpressibleByXxx’ were called XxxConvertible:

And this:

1 Like

I've been working on some financial accounting software, and I didn't want to think about rounding errors from floating point. I don't need super fast arithmetic; I just want it to stay exact. I used the `BigInt` Swift package and made myself a `BigDecimal` class. I don't know if I did it the way you're "supposed to", but I did it and moved on. I store two integers (p, q) to make a rational number. A `gcd` function is useful for reducing it to a canonical form. I constrain the denominator to be a power of ten, so I don't get things like `1/3` with an infinite decimal expansion. Now I can have numbers like "100,000,000,000,000.00000123" without worrying about IEEE floating point details.
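A toy sketch of that representation, using `Int` instead of the `BigInt` package so it stays self-contained (the type and member names are assumptions, not the poster's actual code):

```swift
/// Rational number p/q where q is constrained to a power of ten,
/// so every value has a finite decimal expansion.
struct TenDecimal: Equatable {
    private(set) var p: Int  // numerator
    private(set) var q: Int  // denominator, always a power of ten

    init(_ p: Int, overTenToThe exponent: Int) {
        self.init(p: p, q: (0..<exponent).reduce(1) { n, _ in n * 10 })
    }

    private init(p: Int, q: Int) {
        self.p = p
        self.q = q
        // Canonical form: a gcd restricted to powers of ten, so the
        // denominator never picks up factors like 3.
        while self.q > 1 && self.p % 10 == 0 {
            self.p /= 10
            self.q /= 10
        }
    }

    static func + (lhs: Self, rhs: Self) -> Self {
        // Bring both operands to the common (larger) power-of-ten denominator.
        let q = max(lhs.q, rhs.q)
        return Self(p: lhs.p * (q / lhs.q) + rhs.p * (q / rhs.q), q: q)
    }
}

let a = TenDecimal(1001, overTenToThe: 2)  // 10.01
let b = TenDecimal(5, overTenToThe: 1)     // 0.5
let sum = a + b                            // 10.51 -> p = 1051, q = 100
```

With `BigInt` in place of `Int`, the same structure has no overflow ceiling.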

1 Like

Hi.

Currently, init from string is the only option. As you wrote, `let dec1 = Decimal(string: "3.133")!` gives the expected result: `3.133`.

As for `let s = "12å3.456äö"`: you can write a SwiftLint rule for this. It is not an ideal solution, but it works. The SwiftLint rule is rather easy to implement with a simple regular expression; only the characters 0-9 and '.' are allowed.
Further, you can write a special initializer, like
`init(validatedLiteral: String) { self.init(string: validatedLiteral)! }`
and extend the SwiftLint rule so that it allows creating a `Decimal` only through this initializer, reporting an error on any direct use of `init(string:)`.
For example:

```swift
let decimal = Decimal(string: "3.133") // Error: direct use of init(string:)

func someFunction(aString: String) {
    let decimal = Decimal(validatedLiteral: aString) // Error: argument is not a literal
}

let decimal = Decimal(validatedLiteral: "3.133") // OK: SwiftLint checks the literal value with the regex
```
1 Like

I wrote this library to address this shortcoming.

1 Like

That’s a really neat trick: creating a `String` representation of the `Double` and using that to initialize the `Decimal`.
Do you know what mechanism makes the `Double` 3.133 (actually 3.132999999999999488 as you write in your documentation) render as the `String` “3.133”?

I could be misremembering, but I believe that when rendering to a `String`, `Double` will use the minimum number of digits required for the value to be recovered losslessly. E.g., since there's no valid `Double` value closer to 3.133 than `3.132999999999999488`, we don't need more precision to recover the exact same value.
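That behavior is easy to observe (a small demonstration, assuming Swift's default `Double.description`):

```swift
import Foundation

let x = 3.133
// Double prints the fewest digits that still parse back to the same bits.
print(x)  // 3.133
assert(Double("\(x)") == x)  // lossless round trip

// This is what makes "\(double)" a workable bridge into Decimal:
let d = Decimal(string: "\(x)")!
print(d)  // 3.133
```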

5 Likes

Thank you for the explanation! That always seemed like magic to me. Right, more details here: swift/SwiftDtoa.h at main · apple/swift · GitHub

2 Likes

Couldn’t / shouldn’t the trick that @davdroman’s neat library uses (or some other implementation that exploits the same knowledge) be used in the actual `Double` initializer for `Decimal`? And for decoding too?

Or even better, have built-in language support for `Decimal` literals. The JSON coding issue is caused by the internal implementation of JSONSerialization (and JSONDecoder by proxy), which does a `Data` -> `Double` -> `Decimal` conversion instead of a direct `Data` -> `Decimal` conversion.

1 Like

Both the behavior of `Decimal`'s initializers from `Double` and JSON decoding fall under Foundation, and the choice between maintaining backwards compatibility with existing code and aligning with Swift's string representation is up to Apple-internal processes.

Swift language support for decimal floating-point literals is tracked by the bugs listed above by @Jens and is for sure a key improvement that will need to be made in this area.

7 Likes

It addresses some of the shortcomings, but it will still be limited by `Double` (or by the way Swift converts doubles to and from strings), e.g.:

```swift
let a = PreciseDecimal(  1234567890.0123456789 )
let b = Decimal(string: "1234567890.0123456789")!
print(a) // 1234567890.0123458
print(b) // 1234567890.0123456789
```
3 Likes

That’s fair. I should probably add a disclaimer about that. It covers lots of use cases but falls short on very high precision numbers that Double simply can’t represent. We won’t get true 1:1 literal precision until Apple addresses it themselves.

2 Likes

Currently, `FloatLiteralConvertible` has another problem for this kind of use: it supports hex floats, which also cause complications for decimal-based formats.

Because of this, I would prefer to see new protocols for "DecimalFloatLiteralConvertible" and "HexadecimalFloatLiteralConvertible" that store the literal in a lossless form. (Maybe "InfiniteFloatLiteralConvertible" and "NanFloatLiteralConvertible" to support all possible FP literals?) There are a fair number of details to work out to make this both performant for standard types and flexible enough for arbitrary-precision constants.

2 Likes

It's simpler than you think, actually. Basically, you just need a couple of extra bits of precision to identify the two "midpoints": the values that are exactly halfway between your initial value and the next higher/lower `Double`s. You then convert both of those midpoints to text at the same time, stopping at the first digit that differs.

In your example, the midpoint above your value starts with "3.133..." and the midpoint below it starts with "3.132...", so 3.133 is the shortest decimal that converts to exactly your value.

Most interestingly, this can be done very quickly. Because these "short, round-trip-correct" values have a known limit on their size, the arithmetic can be aggressively optimized, unlike a more general formatting routine that can produce any number of digits.
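To see the neighborhood the algorithm works in, you can print the value and its adjacent `Double`s at artificially high precision (the exact trailing digits are a formatting detail, not something to rely on):

```swift
import Foundation

let x = 3.133
// The midpoints compared by the algorithm lie halfway between x and
// these two neighbors; digits beyond where they diverge are unnecessary.
print(String(format: "%.20f", x.nextDown))
print(String(format: "%.20f", x))
print(String(format: "%.20f", x.nextUp))

// "3.133" is the shortest decimal inside that interval, so it is what
// Swift prints, and it parses back to exactly the same value.
print(x)  // 3.133
assert(Double("3.133") == x)
```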

1 Like