Currently, the only way for custom types to be initializable by non-integer numeric literals is to conform to ExpressibleByFloatLiteral. That's fine if your type uses floats underneath, but it means an unacceptable loss of precision if you're implementing fixed-point or fractional types for purposes like financial computations and lossless-precision math. I'd like to suggest a new compiler-recognized standard protocol like this:
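Something along these lines, perhaps. Every name here is invented for illustration, and actual compiler support would be needed for the literal syntax itself; the explicit initializer call at the bottom stands in for what the compiler would synthesize:

```swift
// Hypothetical protocol: the compiler would hand the literal over as an
// exact numerator/denominator pair instead of a lossy Double.
protocol ExpressibleByFractionLiteral {
    init(fractionLiteralNumerator numerator: Int, denominator: Int)
}

// A toy fraction type conforming to the hypothetical protocol.
struct Rational: ExpressibleByFractionLiteral {
    var numerator: Int
    var denominator: Int
    init(fractionLiteralNumerator numerator: Int, denominator: Int) {
        self.numerator = numerator
        self.denominator = denominator
    }
}

// With compiler support, `let r: Rational = 0.25` would be sugar for:
let r = Rational(fractionLiteralNumerator: 25, denominator: 100)
```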

Any floating-point literal would be eligible: while the literal is still in string form, the compiler would process it into a perfectly lossless pair of integer literals for the numerator and the denominator, instead of automatically converting it into a lossy floating-point value.

ExpressibleByFloatLiteral could then inherit from ExpressibleByFractionLiteral and provide a default implementation of the fraction literal initializer in terms of the float literal initializer (which would remain an open requirement, and which could easily be optimized away for the built-in float types). This would eliminate the ambiguity between the literal forms by giving ExpressibleByFractionLiteral exclusive access to the literals, while keeping ExpressibleByFloatLiteral perfectly usable as before.
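The layering could be sketched like this. The real change would live in the standard library with compiler support; here stand-in protocols (all names invented) keep the example compilable today:

```swift
// Stand-in for the proposed ExpressibleByFractionLiteral.
protocol FractionLiteralConvertible {
    init(fractionLiteralNumerator numerator: Int, denominator: Int)
}

// Stand-in for ExpressibleByFloatLiteral refining the fraction protocol.
protocol FloatLiteralConvertibleSketch: FractionLiteralConvertible {
    init(floatLiteralSketch value: Double)
}

extension FloatLiteralConvertibleSketch {
    // Default: forward the exact fraction to the (lossy) float initializer,
    // so existing float-backed types keep working unchanged.
    init(fractionLiteralNumerator numerator: Int, denominator: Int) {
        self.init(floatLiteralSketch: Double(numerator) / Double(denominator))
    }
}

struct MyDouble: FloatLiteralConvertibleSketch {
    var value: Double
    init(floatLiteralSketch value: Double) { self.value = value }
}

// The fraction initializer comes for free via the default implementation.
let d = MyDouble(fractionLiteralNumerator: 1, denominator: 4)
```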

Currently, I fake a vulgar fraction literal by applying the existing / operator to two integer literals and relying on return-type overload resolution to pick the appropriate type for me. But that doesn't change the fact that 0.25 is unavoidably a lossy float, so it doesn't really solve the problem at hand.
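The workaround looks roughly like this, modeled with a stand-in Fraction type: an extra `/` overload whose return type drives the resolution.

```swift
struct Fraction: Equatable {
    var numerator: Int
    var denominator: Int
}

// Overload `/` on integers; return-type inference decides which one applies.
func / (lhs: Int, rhs: Int) -> Fraction {
    Fraction(numerator: lhs, denominator: rhs)
}

let f: Fraction = 1 / 4  // contextual type picks the exact Fraction overload
let i: Int = 1 / 4       // ordinary truncating integer division, i == 0
```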

I’d like to refer you to a recent thread about this topic:

Floating-point types use sign, exponent, and significand because it's an efficient way to encode a fixed-precision value. To use a denominator instead would be very inefficient (you'd have to allocate 2048 bits for the denominator if your binary exponent is -2048), and therefore not an optimal design for initializing a floating-point value. As I wrote in the earlier thread, my two cents are as follows:
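For reference, Swift's FloatingPoint protocol already exposes exactly this encoding, so the decomposition is cheap to inspect:

```swift
// 0.25 is stored as significand 1.0 scaled by 2^(-2).
let x = 0.25
let sign = x.sign                // FloatingPointSign.plus
let exponent = x.exponent        // -2
let significand = x.significand  // 1.0
```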

The main alternative base in question here is 10. However, decimal storage formats and binary storage formats share so little in common that any initializer common to both will be extremely unwieldy for one or both formats. Personally, somewhere down the road, I'd rather see Decimal64/128 become standard library types (already working on it), DecimalFloatingPoint become a standard library protocol, and 0.1 become a "decimal literal" (with Float, Double, Float80, and Decimal64/128 all conforming) as distinct from a "float literal" that we could then restrict to hexadecimal (and possibly binary) floating-point literals (and maybe rename accordingly).

Literals aren't tied to any particular conforming type, but they are always tied to some group of built-in types. Until there's a clear additional use case (e.g., [an IEEE Decimal64/128] type [Foundation.Decimal does not even conform to FloatingPoint and is rather an oddity in terms of the features it supports]), it's pretty pointless to redesign literal protocols, because whether a given way of conveying the literal value to the initializer is ergonomic and efficient depends on the underlying implementation of the type.

I really like that idea: the literal in question is exactly a decimal literal, and converting it to a fraction is already an indirection. Treating decimal literals as actual base-10 floating-point values is perfectly lossless as long as the original literal is written as a base-10 string (with or without an exponent), so having native Decimal64 and Decimal128 types and using them as the literal type for initializing fixed-point or fractional types would solve the precision-loss issue. However, it would require creating a new protocol and shifting the responsibility for handling those literals to it, because if you operate on the existing ExpressibleByFloatLiteral in a generic way, you have no way of telling whether the representation is lossless, which invalidates the most useful use cases. I guess we'd need some sort of ExpressibleByDecimalLiteral, with ExpressibleByFloatLiteral refining it and adding the ability to handle hex float literals (which would not be permitted for ExpressibleByDecimalLiteral on its own).
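A rough sketch of how such a protocol might deliver the literal losslessly. Every name here is invented, and the (significand, base-10 exponent) shape is just one plausible encoding:

```swift
// Hypothetical: a decimal literal such as 19.95 would be delivered exactly,
// e.g. as an integer significand plus a base-10 exponent: (1995, -2).
protocol ExpressibleByDecimalLiteralSketch {
    init(decimalSignificand: Int, decimalExponent: Int)
}

// A toy fixed-point currency type storing four decimal places exactly.
struct FixedPoint: ExpressibleByDecimalLiteralSketch {
    var scaledValue: Int  // stores value * 10^4
    init(decimalSignificand: Int, decimalExponent: Int) {
        var value = decimalSignificand
        var exponent = decimalExponent + 4
        while exponent > 0 { value *= 10; exponent -= 1 }
        while exponent < 0 { value /= 10; exponent += 1 }
        scaledValue = value
    }
}

// What `let price: FixedPoint = 19.95` could lower to, with no rounding:
let price = FixedPoint(decimalSignificand: 1995, decimalExponent: -2)
```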

On the topic of the FloatingPoint protocol, there are some big issues with the built-in numeric protocols. They're definitely better than they were before the revamp, but they leave no room for anything that isn't a standard built-in integer or float type. The aforementioned fixed-point and fractional types are currently impossible to integrate into the standard protocols, because the only non-integer protocols are FloatingPoint and BinaryFloatingPoint, neither of which is what we'd need. I think that for the numeric protocols to really be complete, there would have to be much more numerous, fine-grained, and abstract protocols, which could carry information such as fundamental range and/or precision limits, as well as allow different return types for mathematical operations (such as a custom integer type that returns a fraction type when dividing). Ideally, we'd want them to fit anything from an 8-bit unsigned integer, to an IEEE float, to an arbitrary-precision fraction.
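The "different return type for division" idea can be sketched concretely with stand-in types (nothing here is proposed API):

```swift
// An integer-like type whose division yields an exact fraction
// instead of truncating.
struct ExactInt {
    var value: Int
}

struct ExactFraction: Equatable {
    var numerator: Int
    var denominator: Int
}

extension ExactInt {
    static func / (lhs: ExactInt, rhs: ExactInt) -> ExactFraction {
        ExactFraction(numerator: lhs.value, denominator: rhs.value)
    }
}

// 1 / 3 stays exact rather than truncating to 0 or rounding to a Double.
let q = ExactInt(value: 1) / ExactInt(value: 3)
```

The current Numeric protocol can't express this, because its `/`-style requirements fix the result type to `Self`.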

Sure there is. See, for example, my take on how to do so in NumericAnnex.

The idea of having even more numeric protocols was discussed at length, but it is very difficult to write generic algorithms that are correct over disparate types, let alone performant. And then: how would you even test their correctness without mocking up fairly complete implementations of every exotic kind of number?

As it stands, some of the generic numeric algorithms shipping in Swift 4 actually flat-out give you the wrong answer if you use them with a type that conforms to the protocol semantics but is unusual in some other way. These are incredibly hard-to-spot bugs, and it's very difficult even to demonstrate that you've fixed them, because there are no such types with which to test the result. This problem quickly becomes intractable as the number and complexity of such protocols grows.

Thanks! I didn't know about that. In that case, it seems like this issue is well on its way to being resolved, so I retract my pitch.

That's exactly what I was thinking about when I came up with what seems to be the most mathematically correct representation of numbers possible. Basically, if performance weren't an issue, all rational numbers would be represented as arbitrary-precision fractions, because that's the only format that guarantees perfect representation of every possible rational number (memory limitations aside). Irrational numbers would be stored as generalized continued fractions or infinite series, which are essentially interchangeable and are capable of accurately representing any irrational number.

Those irrational types can be thought of as Sequence types where the sequence is typically infinite (except in cases like square roots, which may or may not turn out to be irrational) and each successive element is a more accurate approximation of the irrational number. Arithmetic operations on irrationals would combine them into new infinite series with no loss of information, and would stay that way by promoting rational operands to irrationals and consuming them. At any point, they can be evaluated using a custom closure that determines the termination condition (which, for fixed-width floats, can simply compare successive elements for equality, since equality means we've run out of floating-point precision and all further refinement is wasted). I've already implemented computation of the constants pi and e, as well as the sine function, each represented as a structure with generalized-continued-fraction state, making it as close to a perfect representation of pi as possible.
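A minimal sketch of that evaluation model, using an infinite series for e rather than a continued fraction (all names are mine, not from any real implementation):

```swift
// An "irrational" modeled as an infinite Sequence of partial sums of
// e = 1/0! + 1/1! + 1/2! + ...; each element is a better approximation.
struct SeriesNumber: Sequence, IteratorProtocol {
    var sum = 0.0
    var term = 1.0   // current 1/n!
    var n = 0.0
    mutating func next() -> Double? {
        sum += term
        n += 1
        term /= n
        return sum
    }
}

// Evaluate the sequence until a caller-supplied termination closure,
// comparing the previous and current approximations, says to stop.
func evaluate<S: Sequence>(_ s: S, until done: (Double, Double) -> Bool) -> Double
    where S.Element == Double {
    var previous = Double.nan   // NaN never compares equal, so we never stop early
    for approximation in s {
        if done(previous, approximation) { return approximation }
        previous = approximation
    }
    fatalError("sequence ended without satisfying the termination condition")
}

// For fixed-width floats, "successive elements are equal" means we've
// exhausted the available precision.
let e = evaluate(SeriesNumber()) { $0 == $1 }
```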

In this light, the native numeric types can be thought of as special cases of arbitrary-precision fractions that sometimes give the wrong answer by going out of range, running out of precision, or hitting rounding errors. Some of these problems can easily be caught (e.g. integer overflow), which is already handled in the form of either optionals or fatal errors. In other cases, the answer will be straight-up wrong with no way of knowing it (e.g. floating-point rounding errors). There, the problem lies in the type itself and can never be solved by a protocol, unless the protocol has a static boolean property indicating whether the type is fundamentally imprecise. Basically, what I'm saying is that, in my opinion, all we can do is incorporate the logic of detectable computation failure (as in integer overflow) and fundamental imprecision (as in float rounding errors) into the protocol itself, as a heads-up to algorithm authors, all for the purpose of optimizing away painfully slow arbitrary-precision fractions.
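That suggestion might be sketched as follows (the protocol and its members are hypothetical; only `addingReportingOverflow` is real standard library API):

```swift
// Hypothetical protocol surfacing both kinds of failure the post describes:
// detectable failure (overflow -> nil) and silent imprecision (a static flag).
protocol ComputationalNumber {
    /// False if results may silently round (e.g. Double);
    /// true if the only failures are detectable ones (e.g. Int overflow).
    static var isExact: Bool { get }

    /// Addition that reports detectable failure via nil.
    static func checkedAdd(_ lhs: Self, _ rhs: Self) -> Self?
}

extension Int: ComputationalNumber {
    static var isExact: Bool { true }
    static func checkedAdd(_ lhs: Int, _ rhs: Int) -> Int? {
        let (result, overflow) = lhs.addingReportingOverflow(rhs)
        return overflow ? nil : result
    }
}

extension Double: ComputationalNumber {
    static var isExact: Bool { false }  // rounding leaves no runtime trace
    static func checkedAdd(_ lhs: Double, _ rhs: Double) -> Double? {
        lhs + rhs   // nothing to check: imprecision here is undetectable
    }
}

// A generic algorithm can branch on the guarantee: fall back to slow exact
// arithmetic only when the fast type admits silent imprecision.
let overflowed = Int.checkedAdd(Int.max, 1)   // nil: failure was detectable
```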