Creating a Decimal value from a floating-point literal is imprecise because Decimal conforms to ExpressibleByFloatLiteral, so the literal is routed through Double before it becomes a Decimal:
```swift
import Foundation

let imprecise: Decimal = 3.333333333
print(imprecise) // prints 3.333333333000000512

let precise = Decimal(string: "3.333333333")!
print(precise) // prints 3.333333333
```
For this reason, some developers use Decimal(string:) when creating a decimal to ensure precision; however, this is inefficient because the string must be parsed at runtime.
I propose that we add a new #decimal macro that parses a string into a Decimal at compile time.
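To make the idea concrete, here is a sketch of what the macro could expand to. The macro itself doesn't exist; this just shows that Foundation already has an exact-component initializer, Decimal(mantissa:exponent:isNegative:), that a hypothetical expansion of #decimal("3.333333333") could target, avoiding both the Double round-trip and the runtime string parse:

```swift
import Foundation

// Hypothetical expansion of #decimal("3.333333333"): the macro would
// validate the string at compile time and emit the exact components.
let expanded = Decimal(mantissa: 3_333_333_333, exponent: -9, isNegative: false)

// The expansion matches the string-parsed value exactly,
// but without any parsing work at runtime.
let parsed = Decimal(string: "3.333333333")!
print(expanded == parsed)
```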
There was already an accepted proposal, SE-0368, that added StaticBigInt for a similar purpose. Seems to me like we'd just need a StaticBigFloat, or something like that.
Ideally, the syntax should just let you write:

```swift
let precise: Decimal = 3.333333333
```

without the literal being narrowed to a Double and losing precision.
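For reference, SE-0368's StaticBigInt already does this for integer literals: a conforming type receives the literal's full bits with no intermediate narrowing. A minimal sketch (the wrapper type is illustrative, not a real API) of the pattern a StaticBigFloat would extend to float literals:

```swift
// StaticBigInt (SE-0368) hands the conforming type the literal's
// exact value; no fixed-width integer type is involved.
struct BigLiteral: ExpressibleByIntegerLiteral {
    let signum: Int
    let words: [UInt]

    init(integerLiteral value: StaticBigInt) {
        signum = value.signum()
        // bitWidth is the minimal width including the sign bit;
        // the subscript yields the literal's words, least significant first.
        let wordCount = (value.bitWidth + UInt.bitWidth - 1) / UInt.bitWidth
        words = (0..<wordCount).map { value[$0] }
    }
}

// Receives all the digits exactly, even beyond UInt64's range.
let big: BigLiteral = 123_456_789_012_345_678_901_234_567_890
```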
This reflects a limitation of float literals, which is tracked by SR-920. The Decimal type itself belongs to Foundation and has an uncertain future, so any solution specific to that type couldn't be sunk down into the standard library. Improvements to float literals are sorely needed and would benefit the existing standard library floating-point types as well; for that reason, a macro wouldn't be the ideal solution.
The correct way is indeed to improve the existing literal protocols, but a macro sounds like a good idea for contexts where you need to back-deploy: if this works like StaticBigInt, those improvements won't be available when running on older Apple platforms.
Indeed, a more generic way of specifying numeric literals would also support new decimal floating-point types like Decimal32, Decimal64, and Decimal128. @scanon has approval for work on a Decimal64 type.