#decimal(_:) macro

Creating a Decimal value from a floating-point literal is imprecise because the literal is routed through Double before becoming a Decimal.

import Foundation

let imprecise: Decimal = 3.333333333
print(imprecise) // prints 3.333333333000000512

let precise = Decimal(string: "3.333333333")!
print(precise) // prints 3.333333333

For this reason, some developers use Decimal(string:) when creating a decimal to ensure precision; however, this is inefficient, since the string must be parsed at runtime.
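
A typical workaround today looks something like the following (a sketch; the helper name is made up), and it still pays the parsing cost on every call:

import Foundation

extension Decimal {
    // Hypothetical convenience initializer many codebases hand-roll.
    // The string is still parsed at runtime, every time this runs.
    init(precise string: String) {
        guard let value = Decimal(string: string) else {
            preconditionFailure("invalid decimal literal: \(string)")
        }
        self = value
    }
}

let rate = Decimal(precise: "3.333333333")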

I propose that we add a new #decimal macro which parses a string into a Decimal at compile time.

let precise = #decimal("3.333333333") // non-optional

/* expands to */
let precise = Decimal(_exponent: -9, _length: 2, _isNegative: 0, _isCompact: 1, _reserved: 0, _mantissa: (41301, 50862, 0, 0, 0, 0, 0, 0))

2 Likes

Better yet, you can make it take a floating-point literal, so input correctness is guaranteed at compile time:

let precise = #decimal(3.333333333)

/* expands to */
let precise = Decimal(string: "3.333333333")!

1 Like

There was already an accepted proposal (SE-0368) to add a StaticBigInt for a similar purpose. Seems to me like we'd just need a StaticBigFloat, or something like that.
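
For reference, SE-0368 works by letting a conforming type receive the literal's full digits with no intermediate narrowing. A toy adopter looks roughly like this (WideInt is made up for illustration; this needs a Swift 5.8+ toolchain, and on Apple platforms a deployment target that includes StaticBigInt):

// Toy type that takes the whole integer literal via StaticBigInt,
// so no digits are lost to an intermediate fixed-width conversion.
struct WideInt: ExpressibleByIntegerLiteral {
    var words: [UInt]
    init(integerLiteral value: StaticBigInt) {
        let wordCount = (value.bitWidth + UInt.bitWidth - 1) / UInt.bitWidth
        words = (0..<wordCount).map { value[$0] } // little-endian words
    }
}

let huge: WideInt = 0x1_0000_0000_0000_0000_0000 // wider than UInt64

A hypothetical StaticBigFloat would presumably hand over a float literal's digits the same way.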

Ideally, the syntax should just let you write:

let precise: Decimal = 3.333333333

without the literal being narrowed to a Double and losing precision.
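
The narrowing is visible today because Decimal's ExpressibleByFloatLiteral conformance is declared in terms of Double, so the literal form and an explicit Double round trip produce the same damaged value:

import Foundation

let viaLiteral: Decimal = 3.333333333          // literal narrowed to Double first
let viaDouble = Decimal(3.333333333 as Double) // explicit equivalent
print(viaLiteral) // 3.333333333000000512
print(viaDouble)  // 3.333333333000000512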

13 Likes

Why not have that macro parse the literal at compile time instead of expanding to the runtime string initialization?
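
Something along these lines seems feasible (a rough sketch assuming swift-syntax's ExpressionMacro API; member names vary a bit across swift-syntax versions, and the error case and final expansion here are placeholders):

import SwiftSyntax
import SwiftSyntaxBuilder
import SwiftSyntaxMacros

enum DecimalMacroError: Error { case notAFloatLiteral }

public struct DecimalMacro: ExpressionMacro {
    public static func expansion(
        of node: some FreestandingMacroExpansionSyntax,
        in context: some MacroExpansionContext
    ) throws -> ExprSyntax {
        // The argument arrives as source text, not as a Double, so the
        // exact digits the user wrote are still available here.
        guard let argument = node.argumentList.first?.expression,
              argument.is(FloatLiteralExprSyntax.self) else {
            throw DecimalMacroError.notAFloatLiteral // real code would emit a diagnostic
        }
        let digits = argument.trimmedDescription // e.g. "3.333333333"
        // ... parse `digits` into significand and exponent, then emit the
        // corresponding Decimal initializer built from those exact digits ...
        return "Decimal(string: \(literal: digits))!" // placeholder expansion
    }
}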

2 Likes

This reflects a limitation of float literals, which is tracked by SR-920. The Decimal type itself belongs to Foundation and has an uncertain future, so any solution specific to that type couldn't be sunk down into the standard library. Improvements to float literals are sorely needed and would benefit the existing standard library floating-point types as well; for those reasons, a macro wouldn't be the ideal solution.

10 Likes

Yeah sorry, that's what I meant :)

There is a great deal of discussion about this in the original pitch thread for SE-0368.

1 Like

The correct way is indeed to improve the existing literal protocols, but a macro sounds like a good idea for contexts where you need to back-deploy, because if the improvements ship the way StaticBigInt did, they won't be available when running on older Apple platforms.
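
Concretely (the availability numbers below are the ones in the shipping SDKs for SE-0368):

// StaticBigInt-backed literal conformances need a new-enough runtime:
if #available(macOS 13.3, iOS 16.4, *) {
    // wide integer literals usable here
}

// A macro, by contrast, expands at compile time into ordinary initializer
// calls, so the generated code back-deploys as far as Decimal itself does.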

5 Likes

Indeed, having a more generic way of specifying numeric literals would also support new decimal number types such as Decimal32, Decimal64, and Decimal128. @scanon has approval to work on a Decimal64 type.

I implemented a #decimal macro pretty much as described in this thread.

If you write:

#decimal(3.333333333)

It will expand to:

Decimal(sign: .plus, exponent: -9, significand: Decimal(3333333333 as UInt64))

If you supply a literal whose significant digits won't fit into a UInt64, such as:

#decimal(0.18446744073709551616)

It will expand to:

Decimal(_exponent: -20, _length: 5, _isNegative: 0, _isCompact: 1, _reserved: 0, _mantissa: (0, 0, 0, 0, 1, 0, 0, 0))
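
For completeness, the declaration side looks roughly like this (the module and type names are placeholders for whatever the package actually uses):

import Foundation

@freestanding(expression)
public macro decimal(_ value: Double) -> Decimal =
    #externalMacro(module: "DecimalMacros", type: "DecimalMacro")

Even though the parameter is typed as Double, the macro implementation receives the argument as syntax, so the exact digits are preserved.
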
12 Likes