To clarify what I mean, let's consider this example from The Swift Book:
extension Double {
    var km: Double { return self * 1_000.0 }
    var m: Double { return self }
    var cm: Double { return self / 100.0 }
    var mm: Double { return self / 1_000.0 }
    var ft: Double { return self / 3.28084 }
}
let threeFeet = 3.ft
print("Three feet is \(threeFeet) meters")
As we'd like this to work for all BinaryFloatingPoint types, we change it to:
extension BinaryFloatingPoint {
    var km: Self { return self * 1_000.0 }
    var m: Self { return self }
    var cm: Self { return self / 100.0 }
    var mm: Self { return self / 1_000.0 }
    var ft: Self { return self / 3.28084 }
}
let threeFeet = 3.ft
print("Three feet is \(threeFeet) meters")
This seems to work as expected.
But if we annotate threeFeet with something other than the "default" floating-point type Double, e.g. Float, we get an error:
let threeFeet: Float = 3.ft // ERROR: Value of type 'Int' has no member 'ft'
I guess this is because Swift type-checks the literal 3 in isolation and settles on its preferred interpretation of 3, i.e. Int (had the literal been 3.0, Swift would have settled on Double).
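Here's a minimal, self-contained sketch of the behaviour I'm assuming (the names width, height and span are just for illustration); the contextual type is honoured for a bare literal, but apparently not once the literal is the base of a member access:
// Assuming the BinaryFloatingPoint extension above is in scope.

// Directly annotated literals pick up the contextual type:
let width: Float = 3          // OK: integer literal becomes a Float
let height: Float = 3.0       // OK: float literal becomes a Float

// But as the base of a member access, the literal seems to be typed
// in isolation and falls back to the default floating-point type:
let span = 3.ft               // inferred as Double, not Float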
So we have to write, e.g., the following to make the type checker see what we might have expected it to see above:
let threeFeet = (3 as Float).ft
And this sort of defeats the point of having these extensions.
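For what it's worth, spelling it as an explicit conversion works too (a sketch; the name threeFeetAgain is just for illustration), but it's just as verbose and still forces you to name the type at the literal instead of letting the annotation on the variable do the work:
// Equivalent workaround: construct the Float explicitly first.
// Float(3) is a Float, so .ft resolves via the BinaryFloatingPoint extension.
let threeFeetAgain = Float(3).ft
print("Three feet is \(threeFeetAgain) meters")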
I've run into this limitation in a number of less contrived (and less pedagogical) real-world scenarios.
Question:
Would it be possible, and worth it, to improve the type inference of literals in cases like this, so that Swift uses the context of the literal rather than looking at it in isolation?