Typed literals for builtin numerical types?

Is there a reason Swift can't have typed literals like 1.0f, 1i32, etc.? Relying on type inference is not a perfect solution.
First of all, when translating mathematical formulas one often writes long expressions involving many numbers, which can slow down or even break compilation with the error "the compiler is unable to type-check this expression in reasonable time". Second, sometimes you need the inferred type to be different from the default type. For example, it is not unreasonable to want Float16 given the rise of machine learning, so one often has to write something like Float(1) just to be explicit (and no, just supplying the type annotation doesn't always help, for example if you have non-trivial expressions in your matrix entries). Allowing expressions like 1f or 1f16 could significantly improve the quality of life.
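For illustration, the kind of friction I mean (a rough sketch; the 0.1f16 spelling is hypothetical, and Float16 itself is only available on some platforms):

    let dt = 0.1                // inferred as Double, the default floating-point literal type
    // let dt = 0.1f16          // hypothetical typed literal; not valid Swift today
    // What has to be written instead:
    let dt2: Float16 = 0.1      // explicit annotation
    let dt3 = 0.1 as Float16    // literal coercion
    let dt4 = Float16(0.1)      // initializer call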
As far as I am aware, the current workarounds are:

  1. Declare something like a .f property in an extension and write 1.f instead of 1f. This is ergonomic enough; however, the conversion happens at runtime rather than at compile time, which can cause runtime errors.
  2. Introduce short type aliases like typealias F16 = Float16 and typealias F = Float. These are more ergonomic than Float(23.324) but still feel "heavy", and in some corner cases they can cause confusion when F is also used as a generic parameter name. (Both workarounds are sketched below.)
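A minimal sketch of both workarounds (the .f, F and F16 spellings are just the ones suggested above):

    // Workaround 1: conversion properties in extensions.
    extension Int    { var f: Float { Float(self) } }
    extension Double { var f: Float { Float(self) } }

    let a = 1.f                 // reads almost like a typed literal, but the
                                // conversion is an ordinary call made at runtime

    // Workaround 2: short type aliases.
    typealias F   = Float
    typealias F16 = Float16

    let b = F(23.324)           // shorter than Float(23.324), still an initializer call
    let c: F16 = 0.25           // still relies on the annotation for literal inference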

Is there any chance that typed literals will be introduced in the future?
If not, what are the reasons? They would improve quality of life, shouldn't break ABI, and could coexist with the untyped literals we have now (letting programmers choose whichever style they prefer).
Also, what are the suggested solutions to this problem that don't require introducing new features? (Other than breaking up the expression, which makes code longer and often less intuitive, or explicitly writing the types within the expression.)

Swift has the equivalent, just spelled differently:

let x = 1.0 as Float

If we were to make improvements in this area, I think what I would want is a way to locally specify things like “within this scope, all integer literals are UInt64”.

I don’t know exactly what that would look like, but I’m pretty sure it’s what I’d want.

You might be able to write an expression macro that recursively walks the expression and wraps all integer literals with ... as UInt64.
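Roughly, the rewriting step such a macro could perform might look like this, using SwiftSyntax's SyntaxRewriter (a sketch only: the macro declaration and plugin plumbing are omitted, UInt64 is hard-coded for illustration, and the API names assume a recent swift-syntax):

    import SwiftSyntax
    import SwiftSyntaxBuilder

    // Walks an expression tree and wraps every integer literal in an explicit
    // coercion, so that `3` becomes `(3 as UInt64)`.
    final class IntegerLiteralRewriter: SyntaxRewriter {
        override func visit(_ node: IntegerLiteralExprSyntax) -> ExprSyntax {
            let wrapped: ExprSyntax = "(\(node.trimmed) as UInt64)"
            // Keep the original spacing around the literal.
            return wrapped
                .with(\.leadingTrivia, node.leadingTrivia)
                .with(\.trailingTrivia, node.trailingTrivia)
        }
    }

    let input: ExprSyntax = "(1 << 40) | mask + 0xFF"
    print(IntegerLiteralRewriter().rewrite(input))
    // prints: ((1 as UInt64) << (40 as UInt64)) | mask + (0xFF as UInt64)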

Dare I suggest we change the lookup rules for the default IntegerLiteralType, etc. typealiases to respect the current scope(s) instead of only applying to the top level of the source file?

Slava's macro idea is probably better.
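For reference, the existing behavior this refers to: the default literal types can already be changed by shadowing the standard library's typealiases, just not for a narrower scope (a sketch):

    // At the top level, shadowing the standard library's default literal-type
    // typealiases changes what untyped literals infer to when nothing else
    // constrains them.
    typealias IntegerLiteralType = UInt64
    typealias FloatLiteralType = Float

    let mask = 1 << 40          // inferred as UInt64 instead of Int
    let scale = 0.5             // inferred as Float instead of Double

    // There is currently no way to do the same for a single function or scope,
    // which is what the suggestion above would allow.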

I am aware of that option. It's just that using something like <literal> as Float within a long expression or a matrix entry is not very convenient or readable. The typealias MT = MyType hack I suggested makes it a bit more ergonomic and readable, but it is still not perfect and less convenient than just having typed literals.