Actually, the answer is probably: "Because C".
Specifically, in C, the natural floating point type is `double`, and for some intermediate period in history there were CPUs that had `double` registers but no `float` registers. In other words, `float` was a storage type, not a calculation type — hence the weird C rules that promote `float` to `double` in some contexts.
With a blank slate, it would have made sense for Swift's IEEE 64-bit floating point type to be called `Float` and the 32-bit version `Float32`, analogously to `Int` vs. `Int32`. I guess that was thought to be too confusing for developers coming from C or Obj-C.
Still, there's some reason to prefer that (Swift) `Float` and `Double` should be freestanding basic types, rather than related to each other. 32-bit floats are too limited in precision to be a good general purpose calculation type. They are generally better suited to being a compact storage type.
I'd say that's the real answer to the OP's question. Unless you're prepared to very carefully manage error accumulation due to loss of precision, `Double` is the better choice, because you don't usually have to sweat the details of every floating point expression.
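To make that concrete, here's a small sketch of the kind of error accumulation I mean (the exact values are illustrative and will vary by platform): repeatedly adding 0.1, which isn't exactly representable in binary floating point, drifts visibly in `Float` but stays close to the true total in `Double`.

```swift
// Sum 0.1 ten million times; the mathematically exact total is 1,000,000.
// Float (~7 significant decimal digits) loses accuracy quickly once the
// running sum dwarfs the addend; Double (~15-16 digits) barely notices.

let count = 10_000_000

var floatSum: Float = 0
for _ in 0..<count {
    floatSum += 0.1
}

var doubleSum: Double = 0
for _ in 0..<count {
    doubleSum += 0.1
}

print("Float sum:  \(floatSum)")   // noticeably far from 1,000,000
print("Double sum: \(doubleSum)")  // very close to 1,000,000
```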
Of course, that's all too sweeping to be really true — `Float` has its uses as a calculation type too — but I think that's the flavor of why the floating point types don't follow the integer naming pattern, and why `Double` is preferred.