Is there a reason Swift can't have typed literals like `1.0f`, `1i32`, etc.? Relying on type inference is not a perfect solution.
First of all, when translating mathematical formulas one often ends up with long expressions involving many numeric literals, which can slow down or outright break compilation with the error: `the compiler is unable to type-check this expression in reasonable time`.
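A minimal sketch of the kind of expression I mean (the formula itself is made up, and whether it actually times out depends on the compiler version):

```swift
let x = 0.3, y = 0.7

// Literals have no fixed type, so the solver must consider many
// numeric types and operator overloads for each one; long mixed
// expressions like this are the classic trigger for the
// "unable to type-check this expression in reasonable time" error.
let p = 1 + 2 * x + 3 * x * y + 4 * y * y + 5 * x * x * y + 6 * x * y * y
```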
Second, there are cases where you need the inferred type to be different from the default one. For example, it is not unreasonable to want `Float16`, given the rise of machine learning. So one often has to write something like `Float(1)` to be explicit (and no, just annotating the type doesn't always help, for example when you have non-trivial expressions in your matrix entries). Allowing expressions like `1f` or `1f16` could significantly improve quality of life.
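To illustrate why the annotation alone can fall short: the result type can't fix a subexpression whose type is already pinned to `Double`. A small sketch (`exp` here is the `Double` overload from Foundation; `Float16` availability varies by platform):

```swift
import Foundation

let scale: Float16 = 0.5

// Annotating `result` doesn't help: exp(-1.0) is a Double,
// so the subexpression still needs an explicit conversion.
let result: Float16 = Float16(exp(-1.0)) * scale

// With the hypothetical suffix syntax, pure-literal entries would
// at least shrink from Float16(0.5) to something like:
// let half = 0.5f16
```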
As far as I am aware, the current workarounds are:
- Declare a computed property like `.f` in an extension and write `1.f` instead of `1f` (see the sketch after this list). This is ergonomic enough, but the conversion happens at runtime rather than at compile time, which can cause runtime errors.
- Introduce short type aliases like `typealias F16 = Float16` or `typealias F = Float`, which are more ergonomic than `Float(23.324)` but still feel "heavy", and in some corner cases can cause confusion when `F` is used as a generic parameter name.
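Both workarounds in one sketch (the names `.f`, `F`, and `F16` are the ones from the bullets above, not a standard API):

```swift
// Workaround 1: conversion properties on integer literals.
extension Int {
    var f: Float { Float(self) }
    var f16: Float16 { Float16(self) }
}

let a = 1.f          // Float(1), but converted at runtime
let b = 2.f16 * 3    // Float16 arithmetic; `3` is inferred as Float16

// Workaround 2: short type aliases.
typealias F = Float
typealias F16 = Float16

let c = F(23.324)    // shorter than Float(23.324), still "heavy"
let d = F16(1) / F16(3)
```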
Is there any chance that typed literals will be introduced in the future?
If not, what are the reasons? They would improve quality of life, shouldn't break ABI, and could coexist with the untyped literals we have now (letting each programmer choose the style they prefer).
Also, what are the suggested solutions to this problem that don't require new language features? (Other than breaking the expression into parts, which makes code longer and often less intuitive, or explicitly writing the types within the expression, as sketched below.)
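For context, by "breaking the expression" I mean splitting it into annotated subexpressions, which helps the type checker but hurts readability:

```swift
// Instead of one formula-shaped line...
// let p = 1 + 2 * x + 3 * x * y + 4 * y * y

// ...split it so each part type-checks quickly:
let x = 0.3, y = 0.7
let linear: Double = 1 + 2 * x
let mixed: Double = 3 * x * y
let quadratic: Double = 4 * y * y
let p = linear + mixed + quadratic
```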