It seems to me that Swift isn’t doing ‘The Right Thing’ in a very simple mathematical test. In Python:
```python
a = 2; b = 0.5
c = a * b  # c = 1.0, mixed int/float arithmetic just works
```
while in Swift:

```swift
let a = 2, b = 0.5
let c = a * b  // error: binary operator '*' cannot be applied to operands of type 'Int' and 'Double'
```
Similarly, in Python:

```python
a = 1; b = 2
c = a / b  # c = 0.5
```
while in Swift:

```swift
let a = 1, b = 2
let c = a / b  // c = 0 (integer division)
```
This strikes me as a problem. I have heard many times that Swift should be safe and expressive, and that complexity should be discoverable. That principle should apply here as well: to most people, 1 divided by 2 is 0.5. Another ‘discoverable’ level of complexity is learning that if these values are typed as Ints, you get a different result.
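To make that second level concrete, here is a minimal sketch; the explicit `Double` annotations are my addition for illustration, and they recover the result most people expect:

```swift
let a: Double = 1, b: Double = 2  // explicit annotations override the Int default
let c = a / b                     // c = 0.5, floating-point division
```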
So, I would argue that the default behavior should be for Swift to adhere to normal/expected mathematics; in other words, the inferred type of a numeric literal should be Double. I also think this is pivotal if Swift is ever going to have any footprint in science and data analysis.
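Until something like that changes, the workaround is explicit conversion everywhere. A minimal sketch of what numeric code ends up looking like today (the `samples` array and names are just for illustration):

```swift
let samples = [1, 2, 3, 4]  // inferred as [Int]
let mean = Double(samples.reduce(0, +)) / Double(samples.count)
// mean = 2.5; without the Double(...) conversions this would be 2
```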