I'd like to refer you to a recent thread about this topic:
Floating-point types use sign, exponent, and significand because it's an efficient way to encode a fixed-precision value. Using a denominator instead would be very inefficient (you'd have to allocate 2048 bits for the denominator if your binary exponent is -2048), and therefore not an optimal design for initializing a floating-point value. As I wrote in the earlier thread, my two cents are as follows:
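That sign/exponent/significand encoding is directly visible in Swift's `FloatingPoint` protocol; a small sketch (the value `-6.25` is just an arbitrary example):

```swift
let x = -6.25

// Decompose into the three fields of the encoding:
// x == (-1)^sign * significand * 2^exponent
let (s, e, m) = (x.sign, x.exponent, x.significand)
// s == .minus, e == 2, m == 1.5625, since 6.25 == 1.5625 * 2^2

// The standard library initializer mirrors the encoding exactly.
let rebuilt = Double(sign: s, exponent: e, significand: m)
assert(rebuilt == x)
```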
The main alternative base in question here is 10. However, decimal storage formats and binary storage formats share so little in common that any initializer common to both will be extremely unwieldy for one or both formats. Personally, somewhere down the road, I'd rather see `Decimal64/128` become standard library types (already working on it), `DecimalFloatingPoint` become a standard library protocol, and `0.1` become a "decimal literal" (with `Float`, `Double`, `Float80`, and `Decimal64/128` all conforming) as distinct from a "float literal" that we could then restrict to hexadecimal (and binary?) floating-point literals (and maybe rename accordingly).
Literals aren't tied to any particular conforming type, but they are always tied to some group of built-in types. Until there's a clear additional use case (e.g., [an IEEE] `Decimal64/128` type [`Foundation.Decimal` does not even conform to `FloatingPoint` and is rather an oddity in terms of the features it supports]), it's pretty pointless to redesign literal protocols, because whether a particular way of conveying the literal's value to the initializer is ergonomic and efficient depends on the underlying implementation of the type.
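To make that concrete, here is a sketch of how the current literal protocol conveys the value: a conforming type receives the literal only after it has been converted to its `FloatLiteralType` (by default a binary type such as `Double`), so any decimal type adopting it inherits binary rounding before its initializer ever runs. The `Tenths` type below is purely illustrative, not a real API:

```swift
// Hypothetical fixed-point type storing a value as a count of tenths.
struct Tenths: ExpressibleByFloatLiteral {
    var count: Int

    // The literal 0.3 has already been rounded to the nearest Double
    // by the time this initializer sees it.
    init(floatLiteral value: Double) {
        count = Int((value * 10).rounded())
    }
}

let t: Tenths = 0.3
assert(t.count == 3)
```

This is the ergonomics-versus-implementation tension described above: for a binary type, receiving a `Double` is free; for a decimal type, it discards exactly the information a decimal literal was meant to carry.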