 # Why is Decimal.ulp greater than any decimal value itself?

Trying to find the number of decimals in a decimal number, I tried this in a REPL:

`````` 39> Decimal(0.01) > Decimal(1.0)
\$R28: Bool = false
40> Decimal(0.01).ulp
\$R29: Decimal = 0.010000
41> Decimal(0.01).ulp > Decimal(1.0)
\$R30: Bool = true
``````

This doesn't seem to work as expected. In fact, the `ulp` of even a small Decimal number, or a number with many decimal places, will still be larger than a very large Decimal number with no decimal places:

`````` 105> Decimal(string: "1.01")!.ulp > Decimal(string: "123456789123456789")!
\$R73: Bool = true
``````

According to the documentation, the `ulp` of `Decimal` is

> The unit in the last place of the decimal.

and the `ulp` of eg `Double` is

> The unit in the last place of this value.

Some more examples of `Decimal`'s `ulp`:

| value | ulp |
| --- | --- |
| 0.0001234567 | 0.0000000001 |
| 0.001234567 | 0.000000001 |
| 0.01234567 | 0.00000001 |
| 0.1234567 | 0.0000001 |
| 1.234567 | 0.000001 |
| 12.34567 | 0.00001 |
| 123.4567 | 0.0001 |
| 1234.567 | 0.001 |
| 12345.67 | 0.01 |
| 123456.7 | 0.1 |
| 1234567 | 1 |
| 12345670 | 10 |
| 123456700 | 100 |
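The pattern above suggests that `Decimal`'s `ulp` is one unit in the last stored digit of the significand, ie `10^exponent`. A small sketch of that relationship (assuming the values are stored canonically, with no trailing zeros in the significand):

```swift
import Foundation

// For a canonically stored Decimal, the ulp should be one unit in the
// last digit of the significand, ie 10 raised to the Decimal's exponent.
let values = ["0.1234567", "1.234567", "1234.567", "1234567"]
for s in values {
    let d = Decimal(string: s)!
    // Construct 10^exponent directly via the FloatingPoint initializer.
    let tenToExponent = Decimal(sign: .plus, exponent: d.exponent, significand: 1)
    print(s, d.exponent, tenToExponent)
}
```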

Note that eg:

``````  var x = Decimal(0.999)
print(x, x.ulp) // 0.999 0.001
x = x.nextUp
print(x, x.ulp) // 1 1
``````

while for eg `Double` (due to the difference in data format):

``````  var x = Double(0.999)
print(x, x.ulp) // 0.999 1.1102230246251565e-16
x = x.nextUp
print(x, x.ulp) // 0.9990000000000001 1.1102230246251565e-16
``````

Also note that for `Double` the printed decimal values above (eg 0.999) are not the exact `Double` values: Swift prints the shortest decimal representation that is still closer to the actual `Double` value than to any other `Double` value.
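For instance, printing with extra digits reveals the exact stored value (a sketch; the literal `0.999` is first rounded to the nearest binary64 `Double`):

```swift
import Foundation

let x = 0.999
// Swift prints the shortest decimal string that round-trips to the same Double:
print(x) // 0.999
// Formatting with more fractional digits shows it is not exactly 0.999:
print(String(format: "%.20f", x))
// But the shortest form round-trips exactly:
print(Double("0.999")! == x) // true
```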

I'm not sure what you mean by

> the number of decimals in a decimal number

but if it is the number of digits in the fractional part then beware of things like:

``````  let x = Decimal(1.234)
print(x) // 1.234
let y = x.nextUp
print(y) // 1.235
let z = Decimal(1.235)
print(z) // 1.2350000000000002048
print(y == z) // false (because the literal `1.235` is a Double in Swift)
let w = Decimal.init(string: "1.235", locale: Locale.init(identifier: "us"))!
print(w) // 1.235
print(y == w) // true
``````
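If "the number of decimals" means the number of digits in the fractional part, one sketch (my own helper, assuming the `Decimal` is stored canonically, ie its significand has no trailing zeros) is to read it off the exponent:

```swift
import Foundation

// For a canonical Decimal, a negative exponent is exactly the number of
// digits after the decimal point; a non-negative exponent means none.
func fractionDigitCount(_ d: Decimal) -> Int {
    max(0, -d.exponent)
}

print(fractionDigitCount(Decimal(string: "1.234")!))  // 3
print(fractionDigitCount(Decimal(string: "1234")!))   // 0
print(fractionDigitCount(Decimal(string: "0.001")!))  // 3
```

Values initialised from `Double` literals may carry extra digits (as in the `1.2350000000000002048` example above), so the string initialiser is the safer way to construct test values here.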

It does look like a bug that `Decimal(0.1).ulp`, which should be `0.1` (and prints `0.1`), ends up comparing greater than `Decimal(1.0)`. Moreover, things like `nextUp.nextDown`* fix that:

``````import Foundation
let a = Decimal(0.1).ulp, b = Decimal(1.0)

// ???
print(a > b) // true
// OK
print(a.nextUp.nextDown > b) // false
``````

* Ok, that may not recover the original value for some numbers, but most definitely would for `Decimal(0.1)`.

Isn’t this just a consequence of literals being interpreted as Double values in Swift? Ie, Decimal(0.1) is not 0.1, because the 0.1 literal will first be converted to a Double value, which is close to but not exactly 0.1.

I don't think so. I included a string literal-initialised example at the end of my original post as well:

Besides, the rounding errors would not be off by several magnitudes and would rather tend toward a smaller `ulp` than a larger one.


Even with rounding, give or take, it most definitely shouldn't be greater than 100 million:

``````Decimal(0.1).ulp > 100_000_000 // true
``````

Ah, yes, that's most definitely a bug, I missed the `>` strangeness (which shows up not only in the REPL, but also in eg Xcode 12.5.1).

Another example program whose behavior doesn't make sense, but might give a clue about what's going wrong (I have no idea what the `_length` property is, only that its value is different for the result of `ulp`, and that this seems to cause the unexpected behavior):

``````let a = Decimal.zero.nextUp
let b = a.ulp
print(a, a._length) // 1 1
print(b, b._length) // 1 8
print(a == b) // false
let x = a.nextDown.nextUp
let y = b.nextDown.nextUp
print(x, x._length) // 1 1
print(y, y._length) // 1 1
print(x == y) // true
``````
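A guess at a workaround, based on my assumption that `_length` counts the 16-bit words of the internal mantissa and that `NSDecimalCompact` canonicalizes that representation by dropping trailing zero words. I haven't verified that this repairs the comparison on every affected Foundation version:

```swift
import Foundation

var a = Decimal(1).ulp  // the kind of value that showed _length == 8 above
let b = Decimal(1)

// NSDecimalCompact canonicalizes the representation in place;
// the numeric value itself is unchanged.
NSDecimalCompact(&a)
print(a == b, a._length, b._length)
```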

It's more than one bug, but the immediate problem can be addressed fairly straightforwardly:


I would love to have a good equivalent of `Builtin.IntLiteral` that we could use for floating-point literals and that would persist the exact original value. I suspect that for efficiency it would always need to include a binary floating-point value. There are a lot of corner cases of expressibility, though, which is why we haven't taken it on yet.
