What is the point of the Decimal type?

There is a type in the Foundation module called Decimal. Its documentation says only:

A structure representing a base-10 number.

What can it do that Double can't?

It can represent a number of arbitrary precision (at the cost of lower performance in the arithmetic functions).

Decimal is not arbitrary precision.

However, it can represent 0.1 exactly, which Double cannot. This doesn't matter for most uses, but it does matter sometimes.
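
For example, a quick check (just an illustrative snippet; the exact digits printed for the Double depend on the format width you ask for):

import Foundation

let d: Double = 0.1
let dec = Decimal(string: "0.1")!

print(String(format: "%.20f", d))  // 0.10000000000000000555 - the nearest binary double
print(dec)                         // 0.1 - stored exactly in base 10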


If I evaluate Decimal(1) / 3, it prints as 0.333333 (six 3's). Is there a way to control the precision - how many decimal places it uses - or is that fixed? If it is fixed, what is it?

No, you can't. 1 / 3 limited to, say, 5 decimal digits of precision is no longer 1 / 3. Of course, you can format the values using NumberFormatter.
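
If what you actually want is the result rounded to a fixed number of fractional digits, one option is to round the stored value with NSDecimalRound (a minimal sketch; the scale of 5 and the .plain rounding mode are just illustrative):

import Foundation

// The division itself still uses Decimal's full (finite) precision;
// NSDecimalRound only rounds the stored result to 5 fractional digits.
var third = Decimal(1) / Decimal(3)
var rounded = Decimal()
NSDecimalRound(&rounded, &third, 5, .plain)
print(rounded)  // 0.33333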

Decimal has a couple of deficiencies that make it hard to use.

First and foremost, it does not conform to LosslessStringConvertible, unlike Double, meaning you don't get a precise conversion to and from Strings. This is especially annoying when you receive monetary values, either from user input or a backend, and what is supposed to be the more precise type can't accurately represent values that a Double can. It's actually more accurate to go String -> Double -> Decimal and back than it is to go String -> Decimal.

Second, Decimal is not part of the same NSNumber hierarchy as Int or Double, so it doesn't benefit from automatic conversions to and from NSNumber. This is because it converts to and from NSDecimal (which is barely used) instead of NSDecimalNumber (an NSNumber subclass), which makes it more awkward to use with Foundation. For instance, to format a Decimal you must first convert it to an NSDecimalNumber, which can then be formatted.
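
For example, something along these lines (a sketch; the currency style and the en_US locale are only there to make the output predictable):

import Foundation

let price = Decimal(string: "19.99")!

let formatter = NumberFormatter()
formatter.numberStyle = .currency
formatter.locale = Locale(identifier: "en_US")

// NumberFormatter takes an NSNumber, so wrap the Decimal in NSDecimalNumber first.
print(formatter.string(from: NSDecimalNumber(decimal: price)) ?? "nil")  // $19.99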


Are you sure about that second part? The docs for NSDecimalNumber say it bridges to Decimal.

NSDecimalNumber may bridge into Swift as Decimal, but Decimal bridges into Obj-C as NSDecimal, which is why it can't be used directly with APIs that take NSNumber. Last I looked at least.

This is not true (quite the opposite, actually) as far as I understand (and has been discussed here and here).
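
One quick way to check, assuming a recent toolchain where the Foundation overlay bridges Decimal to NSDecimalNumber:

import Foundation

let d = Decimal(string: "2.5")!
let n = d as NSDecimalNumber       // the bridging cast compiles if Decimal bridges to an NSNumber subclass
print(n.isKind(of: NSNumber.self)) // true - NSDecimalNumber is an NSNumber subclass
print(n.doubleValue)               // 2.5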

The only way to get to an exact Decimal is to use a String.
Going through Double is always the wrong approach.

$ swift
Welcome to Apple Swift version 5.3.1 (swiftlang-1200.0.41 clang-1200.0.32.8).
Type :help for assistance.
  1> import Foundation 
  2> let d: Decimal = 0.14159 
d: Decimal = 0.141590

  3> print("\(d)") 
0.14158999999999997952
  4> let d: Decimal = Decimal(string: "0.14159")! 
d: Decimal = 0.141590

  5> print("\(d)") 
0.14159

Thanks.

Playing around in the REPL, I see that init(string:) returns an optional, but oddly it doesn't return nil when you give it a string it can't represent...

16> let d5 = Decimal(string: "1.1234567891234567890000000012345678900000000123456789") 
d5: Decimal? = 1.123457
17> "\(d5!)"
$R9: String = "1.12345678912345678900000000123456789"

Of course, the documentation for that initializer is empty: Apple Developer Documentation

I think it returns nil if the parser finds a character that should not be there, like a letter (I'm not sure if it accepts an "e" to represent an exponent, like the IEEE-754 floating-point types do), or punctuation ("+" and "-" excepted when they are the first character).
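
A quick way to probe that, without assuming anything about the exact rules (behaviour may differ between Foundation versions and platforms):

import Foundation

// Try a few inputs and see which ones come back nil.
for s in ["0.14159", "abc", "1e3", "+2.5", "1.2.3"] {
    print(s, "->", Decimal(string: s) as Any)
}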


If you want better performance, you can use something like this Decimal64 struct: GitHub - dirkschreib/Decimal64

The rule of thumb: use Decimal for money.
