If we're going to go all the way and say this alternate spelling is common enough that it deserves a mention in the standard library at all, we might as well just add the typealias. I don't think our goal is to force developers to use any particular style -- if they really want to write Half, let them write Half.
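For concreteness, the alias being discussed is just a one-liner (hypothetical -- not in the standard library; this sketch assumes a platform where `Float16` is available, which excludes x86_64 macOS):

```swift
// Hypothetical alias for the IEEE-754 binary16 type:
typealias Half = Float16

let h: Half = 1.5
// Half and Float16 denote the same type, so values interchange freely.
let f: Float16 = h
assert(f == 1.5)
```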
The way I see this is parallel to Int8/16/32/64: the explicit bitwidths are the true fundamental types, similar to how pure mathematics is as close as we can get to universal truths. And then there are types like Int, whose meaning depends very much on your environment. As a curious aside - the C data model evolved from the opposite direction, without fixed-width integers, until they were added in C99. Since SE-0191, we have a nice definition for what Int means in our data model: it's the maximum size of a Collection. No matter what kind of fancy indexes you create, the distance between them (and hence the Collection's count) cannot exceed Int.max (not UInt64.max, even though the count cannot be negative).
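You can see this directly in the Collection protocol's requirements: no matter the Index type, distances and counts come back as Int:

```swift
// Collection measures index distances (and hence count) in Int,
// regardless of how exotic the Index type is.
let words = ["binary16", "binary32", "binary64"]

let d: Int = words.distance(from: words.startIndex, to: words.endIndex)
assert(d == words.count)
assert(d == 3)
```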
Now for Float. The official names in IEEE-754 are binary16/binary32/binary64. There's a brief reference to single and double precision in the opening line, but otherwise these names are used literally everywhere else. The names float/double are not universal, either: FORTRAN calls single-precision a REAL, Delphi and MATLAB call it a single, and Python/Ruby/PHP use float to refer to double-precision! We've decided to use the C names, but supposedly the C data model doesn't exactly define those sizes, either (kind of like int):
> The actual size and behavior of floating-point types also vary by implementation. The only guarantee is that `long double` is not smaller than `double`, which is not smaller than `float`.
So what I'm getting at is: the names Float/Double are not at all universal, and Float32/Float64 are the more fundamental types. Conceptually, they should be aliases. Lots of things are not conceptually perfect, though. It's like an itch - you can understand the compulsion to scratch it, but once you start, you'll see flaws everywhere and it'll never stop. Your only hope is to kind-of ignore it and hope it goes away.
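Worth noting: the fixed-width spellings already exist in the standard library today, just with the aliasing in the opposite direction from what I'd call conceptually ideal (Float32/Float64 are typealiases for Float/Double, rather than the other way around):

```swift
// Float32 and Float64 are existing standard-library typealiases,
// so the fixed-width and C-style names denote identical types.
let a: Float32 = 1.0
let b: Float = a        // fine: Float32 is a typealias for Float
let c: Float64 = 2.0
let d: Double = c       // fine: Float64 is a typealias for Double

assert(Float32.self == Float.self)
assert(Float64.self == Double.self)
assert(b == 1.0 && d == 2.0)
```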
EDIT: FWIW, I've actually seen Swift graphics code from people involved with Swift for TensorFlow that uses Float everywhere, when typically you'd use Double for graphics. I suspect they may be coming from Python and might not have been aware that Double exists.