# Change to How FloatingPointLiteral Is Handled by the Compiler

In another thread, DecimalFloatingPoint Protocol Review - #3 by benrimmington, @benrimmington mentions that the compiler treats `FloatingPointLiteral`s as binary floating-point numbers. Since I am working on a decimal floating-point library, it is vexing not to be able to conform to this protocol, since the binary representation would obviously introduce errors for decimal numbers.

Would it be possible for the compiler to treat `FloatingPointLiteral`s in a radix-agnostic manner, or even as base-10 floating-point numbers? That seems closer to the original intent of this literal.

1 Like

Yes, see the discussion here (and in the other linked issues):

1 Like

So there is still no solution accepted and implemented. Many good solutions were presented; why was nothing implemented?

My solution was to use an extended integer with decimal point and exponent fields (essentially a very basic decimal number). As was mentioned, precalculated Double and Float values could also be attached for efficient use.

Meanwhile Swift is getting macros and every feature under the sun while the compiler's basic math framework is still broken. I guess math is not sexy.

one of the major unsolved problems is what to do when the decimal literal cannot be represented exactly by the destination type.

for binary floating point, the behavior for something like

```swift
let _: Double = 0.1
```

is straightforward - it should just encode the closest representable `Double` value.
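printing the stored value with extra digits makes that rounding visible (a small illustration using Foundation's formatting):

```swift
import Foundation

// 0.1 has no exact binary representation; the compiler stores the
// nearest representable Double, which is slightly greater than 0.1.
let x: Double = 0.1
print(String(format: "%.20f", x)) // 0.10000000000000000555
```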

but for decimal floating point, we absolutely do not want to allow something like:

```swift
let _: Decimal16 = 0.000_001_5
```

to “just use the closest value”; that is an efficient way to go bankrupt.

it follows that allowing things like hex float literals is also tricky.

1 Like

More sophisticated decimal literals would be great, but there's a perfectly workable solution in the short-term in the form of strings, so it simply hasn't been a priority. There's no such short-term fix for "not having macros".
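For illustration, a minimal sketch of that string workaround using `Foundation.Decimal`:

```swift
import Foundation

// Decimal(string:) parses the digits directly, so no binary
// rounding step is ever involved.
let tenth = Decimal(string: "0.1")!
print(tenth + tenth + tenth == Decimal(string: "0.3")!) // true

// The same sum in binary floating point does not compare equal:
print(0.1 + 0.1 + 0.1 == 0.3) // false
```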

3 Likes

And you could use macros to make a better decimal initializer!

1 Like

Maybe I don't understand your definition of `Decimal16`, but if it is a 16-bit decimal floating-point number, wouldn't this literal translate to 1.5e-6 or 15e-7? I'm not sure why you would go "bankrupt".

1 Like

Tay is suggesting that a 16-bit decimal number wouldn't be able to represent this value exactly, so it would be rounded, and people who are trafficking in currency don't much like that.

There is no 16-bit decimal encoding defined by IEEE 754, FWIW; if there were it would have to have a ten-bit trailing significand field (because DPD encodes in multiples of ten bits), leaving a combination field of only five bits, and hence only three possible exponents, which is more like a lousy integer than a floating-point number.

2 Likes

So probably not the best example to illustrate a particular problem. No issues with any other Decimal format, AFAIK.

a decimal literal doesn't necessarily initialize a general purpose decimal type, it can also initialize custom types (e.g. interest rate, increments of a particular asset, etc) and these quantities might have restrictions on the number of decimal places allowed - some platforms conservatively reject orders if they are specified in a precision they don't support to avoid executing trades that are slightly different than what the client consented to.

rounding mismatches are hard to debug because we generally trust what is written in a literal, so it's motivating to be able to catch these kinds of problems as early as possible, preferably at compile time.
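as a rough sketch of that kind of guard (every name here is hypothetical, and the validation happens at runtime rather than compile time):

```swift
import Foundation

// A sketch of a quantity type that rejects values with more decimal
// places than the venue supports, parsing from a string so no binary
// rounding has already taken place. (Non-negative values only here.)
struct OrderPrice {
    static let maxPlaces = 4
    /// The value scaled by 10^maxPlaces, stored exactly as an integer.
    let scaledUnits: Int

    init?(_ text: String) {
        let parts = text.split(separator: ".", omittingEmptySubsequences: false)
        guard parts.count <= 2, let whole = Int(parts[0]) else { return nil }
        let fraction = parts.count == 2 ? String(parts[1]) : ""
        // Reject, rather than round, anything more precise than maxPlaces.
        guard fraction.count <= Self.maxPlaces else { return nil }
        let padded = fraction.padding(toLength: Self.maxPlaces, withPad: "0", startingAt: 0)
        guard let frac = Int(padded) else { return nil }
        scaledUnits = whole * 10_000 + frac
    }
}

print(OrderPrice("12.3456") != nil)  // true
print(OrderPrice("12.34567") != nil) // false: more places than supported
```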

4 Likes

Not trying to say macros aren't important (although I'm a language purist and believe macros will make Swift harder to understand), but could a small effort now be focused on getting numeric literals to work? It's almost a trivial change compared to adding macros, and it has been on a back burner for many years now.

Doing it in a way that preserves source and binary compatibility and doesn't regress performance for existing uses is somewhat subtle. It's totally feasible, and not a huge effort, but in some ways macros are easier, precisely because it's all new territory.

2 Likes

something that would be really interesting to me is if we were able to attach "diagnostic" macros to ExpressibleBy conformances to be able to statically assert things about literals.

this wouldn't require a full-blown compile time evaluation system (since literals are a syntactical construct), and could be added on to the existing initialization-through-protocol witness system without needing to change ABI.

At least we could have an implementation behind one of those compiler flags that mark it as an experimental feature. It could then be enabled by those who are developing decimal number types, as a means of testing otherwise untestable features.

BTW, I've been investigating other decimal number implementations in Swift, and they seem to be successful (I think) in converting from floating-point literals. I'm going to have a closer look at their implementations -- perhaps they just don't know that what they're doing is not possible. Here are some of the test cases that appear to be passing:

```swift
let floatValues: [(String, Float)] = [
    ("1.0", 1.0),
    ("0.5", 0.5),
    ("0.25", 0.25),
    ("50.", 50.0),
    ("50000", 50000.0),
    ("0.001", 0.001),
    ("12.34", 12.34),
    ("0.15625", 5.0 * 0.03125),
    ("3.1415925", Float.pi),
    ("31415.926", Float.pi * 10000.0),
    ("94247.77", Float.pi * 30000.0)
]

let doubleValues: [(String, Double)] = [
    ("1.0", 1.0),
    ("0.5", 0.5),
    ("50", 50.0),
    ("50000", 50000.0),
    ("1e-3", 0.001),
    ("0.25", 0.25),
    ("12.34", 12.34),
    ("0.15625", 5.0 * 0.03125),
    ("0.3333333333333333", 1.0 / 3.0),
    ("3.141592653589793", Double.pi),
    ("31415.926535897932", Double.pi * 10000.0),
    ("94247.7796076938", Double.pi * 30000.0)
]
```

Can someone tell me a test case that would fail? Here are more "impossible" test cases that pass:

```swift
let values: [BigDecimal] = [
    2.5,
    0.3,
    0.001,
]

let expectedSum = BigDecimal(2.801)
```

Using `Foundation.Decimal` since that’s easier for me to verify:

```swift
import Foundation

let xStr = Decimal(string: "1.333333333333333333")!
let yStr = Decimal(string: "1.333333333333333334")!
print(xStr == yStr) // false

let xDbl: Decimal = 1.333333333333333333
let yDbl: Decimal = 1.333333333333333334
print(xDbl == yDbl) // true
```
1 Like

BTW, it looks like this particular library cheats by first turning numeric literals into strings (which always seem to round perfectly) and then converting the strings into the Decimal number.

Thanks, I'll give that one a try.

Thanks @bbrk24, you win a prize. Your test case executed as you predicted:
`yDbl` was converted into a number with all threes, identical to `xDbl`.

The take-away here is that people are trying to make numeric literals work and don't seem to realize that they are not working correctly.
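A minimal sketch of why: by the time any library code runs, the literal has already been rounded to the nearest `Double`, so turning it back into a string cannot recover the lost digits:

```swift
// The two literals below differ in the 18th digit, but a Double only
// carries about 16 significant decimal digits, so the compiler rounds
// both to the same value before any initializer is ever called.
let a = 1.333333333333333333
let b = 1.333333333333333334
print(a == b)                   // true
print(String(a) == String(b))   // true: the strings carry only the surviving digits
```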

1 Like

exactly.