# Change of how FloatingPointLiteral is Handled by the Compiler

That looks like it's my library

My current method of going through a string conversion works for all floating-point literals with fewer than 15 significant digits of precision, because `Double` always rounds correctly for those values.
BTW, this is also the reason that @bbrk24's example fails: it requires 20 digits of precision.

Yes, it is unfortunate that some floating-point literals just don't work. A short-term mitigation could be for the compiler to check the number of significant digits of precision and reject (or at least warn about) any literal that has more than 15.
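As a quick sketch of where that ~15-digit boundary comes from (this just demonstrates `Double`'s 53-bit significand; it is not the library's actual conversion code):

``````swift
// Double has a 53-bit significand, which holds roughly 15–16 decimal
// digits. Two distinct 16-plus-digit strings can parse to the same value:
let a = Double("9999999999999999")!   // 16 nines — not representable, rounds up
let b = Double("10000000000000000")!  // 10^16 — exactly representable
print(a == b)  // true: the last digit of `a` was silently lost

// At 15 digits the value survives intact:
let c = Double("999999999999999")!    // 15 nines — exactly representable
print(c == 1e15)  // false: still distinct from 10^15
``````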

But it is also unfortunate that my literal implementation has to go through a string conversion, so I would also very much support a proper long-term solution that makes that unnecessary.

The ideal™ solution would probably be if we could get a type like the following to conform to the `_ExpressibleByBuiltinFloatLiteral` protocol (though I don't know whether that is possible in a source- and ABI-stable manner):

``````swift
struct FloatLiteralInfo {
    let significand: StaticBigInt
    let exponent: Int
}
``````

Then we could use this type in the literal initializer and everyone would be happy ;)
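For illustration, here is a minimal sketch of how a decimal library might consume such a type. The `init(floatLiteralInfo:)` requirement and all type names are assumptions for the sake of the sketch, not existing standard-library API, and `Int` stands in for `StaticBigInt` to keep it self-contained:

``````swift
// Hypothetical: what the compiler would hand to the literal initializer.
struct FloatLiteralInfo {
    let significand: Int  // `StaticBigInt` in the proposal; `Int` keeps the sketch runnable
    let exponent: Int
}

// Hypothetical decimal type that builds its value directly from the
// literal's components — no lossy string round-trip required.
struct MyDecimal {
    let significand: Int
    let exponent: Int

    init(floatLiteralInfo info: FloatLiteralInfo) {
        self.significand = info.significand
        self.exponent = info.exponent
    }
}

// The literal `1.23` would arrive as significand 123, exponent -2:
let d = MyDecimal(floatLiteralInfo: FloatLiteralInfo(significand: 123, exponent: -2))
``````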

Should we add `base: Int` too?


I would probably also add a decimal point field:

``````swift
struct FloatLiteralInfo {
    let significand: StaticBigInt
    let exponent: Int
    let decimalPoint: Int
    let radix: Int // perhaps optional if it is assumed to be base 10
}
``````

With decimal numbers, `1.20e10` and `1.2e10` are actually different numbers with the same exponent but different significands and decimal points. We could always truncate trailing zeros and get rid of the `decimalPoint` field.

I don't exactly know what you mean. AFAIK, every possible value `x` that can be expressed with a floating-point literal has a unique `FloatLiteralInfo` where `x == significand * 10^exponent` (or rather `significand * radix^exponent`). That means that floating-point literals that are written differently can produce the same `FloatLiteralInfo`, e.g. `1.23e2` and `12.3e1` both produce

``````FloatLiteralInfo(significand: 123, exponent: 0, radix: 10)
``````

I don't think there is any need to know where the original decimal point in the literal was.
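The truncate-trailing-zeros normalization described above can be sketched like this (an illustrative helper, not existing compiler or library code; `Int` stands in for `StaticBigInt`):

``````swift
// Canonicalize (significand, exponent) by stripping trailing zeros
// from the significand and bumping the exponent accordingly.
func normalized(significand: Int, exponent: Int) -> (significand: Int, exponent: Int) {
    var s = significand
    var e = exponent
    while s != 0 && s % 10 == 0 {
        s /= 10
        e += 1
    }
    return (s, e)
}

// "1.20e10" has digits 120 and exponent 8; "1.2e10" has digits 12 and exponent 9.
let a = normalized(significand: 120, exponent: 8)
let b = normalized(significand: 12, exponent: 9)
// Both normalize to (12, 9), so the two spellings yield the same FloatLiteralInfo.
``````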


Necroing the thread, because why not.

## Truncate trailing zeros

Depends on what you mean by "truncate trailing zeros". For example, `0.00` is equal to `0.0000`, but in some cases they behave differently. The same goes for `1.00` and `1.0000000`, etc.

Example in Swift with Oh-my-decimal:

``````import Decimal

let precision0 = Decimal128("0.00")!
let precision1 = Decimal128("0.000")!
let d = Decimal128("1234.5678")!

// Python has nice docs for 'quantize':
// https://docs.python.org/3/library/decimal.html#decimal.Decimal.quantize
print(d.quantized(to: precision0, rounding: .towardZero)) // 123456E-2 = 1234.56
print(d.quantized(to: precision1, rounding: .towardZero)) // 1234567E-3 = 1234.567
``````

This is a bit of a made-up case, because for quantization you usually use `0.01` (or `0.0001`, etc.), as it is more human-readable. But it still has to be supported.

## Precision compiler warnings

Second this.

I would really love to have a compiler warning when the decimal literal is not exactly representable in a given format.

The @taylorswift example is a bit unfortunate, but how about this:

- I wrote the IEEE decimal library (the Oh-my-decimal thingie) -> I know how it works
- I started writing the example for "Truncate trailing zeros" above
- I got `1234568E-3` instead of `1234567E-3` for the `0.000` case
- WTF?

The reason? I used `Decimal32`, which has only 7 digits of precision, so the parsed value was `1234568E-3`. This is something we as humans do not think about; we just assume that things work, because what we see is what we get (the WYSIWYG rule).

This could also be a problem for `Decimal128` (34 digits of precision), as most users do not know that `Decimal` is a floating-point type. They just think it is a big decimal-based number used for money. Then they paste `e` (Euler's number) calculated to 100 digits into their source code. I would really love to see a compiler warning in this case, because the compiler silently changed the value.
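To illustrate the kind of silent change such a warning could catch, here is a sketch using `Float` as a stand-in for a low-precision format (like `Decimal32`, `Float` carries only about 7 significant decimal digits):

``````swift
// The literal below needs 8 significant digits, so it cannot be
// stored in a Float exactly — the compiler rounds it silently.
let asFloat: Float = 1234.5678
let asDouble: Double = 1234.5678

// Widening the Float back to Double exposes the silent rounding:
print(Double(asFloat) != asDouble)  // true
``````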