Changing How FloatingPointLiteral Is Handled by the Compiler

there's a lot more discussion about this topic here: StaticBigInt

one of the major unsolved problems is what to do when the decimal literal cannot be represented exactly by the destination type.

for binary floating point, the behavior of something like

let _: Double = 0.1

is straightforward: it should just encode the closest representable Double value.
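
to see that rounding in action, here is a minimal sketch (the variable name is made up); the literal becomes the nearest binary64 value, which is where the classic artifacts come from:

let tenth: Double = 0.1
// the stored bits are the closest representable Double to one tenth, not one tenth itself
print(tenth + 0.2 == 0.3)   // false: each literal rounds to its nearest Double
print(tenth.ulp)            // the gap between adjacent Doubles around 0.1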

but for decimal floating point, we absolutely do not want to allow something like:

let _: Decimal16 = 0.000_001_5

to “just use the closest value”; that is an efficient way to go bankrupt.
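
today's Foundation.Decimal is not the hypothetical Decimal16 above, but it already shows the failure mode: its ExpressibleByFloatLiteral conformance routes the literal through Double, so precision can be lost before the decimal type ever sees the digits. a rough sketch (the exact printed digits depend on the Foundation version):

import Foundation

let viaLiteral: Decimal = 0.000_001_5           // goes through Double first
let viaString = Decimal(string: "0.0000015")!   // parsed as decimal digits
print(viaLiteral == viaString)                  // likely false: the literal path is lossy
print(viaLiteral)                               // may show trailing binary rounding error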

it follows that allowing things like hex float literals is also tricky.
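
for binary formats a hex float literal spells out an exact bit pattern, which is exactly why there is no good answer for a decimal destination; a small sketch with made-up constant names:

let threeQuarters: Double = 0x1.8p-1             // exactly 0.75
let binaryTenth: Double = 0x1.999999999999Ap-4   // the Double closest to 0.1
print(threeQuarters, binaryTenth)
// a decimal type would have to either reject these literals or round them,
// which is the same problem as above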
