I think we’re ignoring the elephant in the room: the unexamined question of whether the following two things are simultaneously possible:
1. A body of code that doesn’t have to consider representational issues, because it knows that `Double` and `CGFloat` are effectively interchangeable.
2. A body of code that functions correctly in terms of representation (precision), because it handles architectures where `CGFloat` is `Float` as well as architectures where `CGFloat` is `Double`.
They're not simultaneously possible. Case 2 requires explicit conversions, otherwise its behavior is unpredictable in subtle ways. Case 1 isn’t worth having unless the conversions are implicit. But … you can’t do both of these at the same time.
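A sketch of why the implicit route is subtly unpredictable: the same expression evaluated in the two representations gives different results, so code whose representation silently varies per architecture varies in behavior too (the values here are just illustrative):

```swift
// 1/3 is not exactly representable in binary floating point, and the
// rounding error depends on the width of the representation.
let third32 = Float(1) / 3
let third64 = Double(1) / 3

// Widening the 32-bit result does not recover the 64-bit result.
assert(Double(third32) != third64)
```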
Incidentally, the current conversion between CFType and NSType was mentioned up-thread. I think a better model is the current conversion between `NSUInteger` and `Int`. Swift imports Obj-C unsigned integers as signed integers, damn the consequences! This wouldn’t really work for `CGFloat`, though, because loss of precision happens to be a nastier problem than the loss of half of `NSUInteger`’s value range, but I'm using this idea as my starting point for the following.
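For illustration, the range that the signed import discards can be shown directly. This is a sketch assuming the 64-bit case, where `NSUInteger` corresponds to Swift's `UInt`:

```swift
// NSUInteger is 64 bits wide on modern platforms; importing it as Int
// keeps the width but halves the usable positive range.
let fullRange = UInt.max          // 18_446_744_073_709_551_615
let importedMax = UInt(Int.max)   //  9_223_372_036_854_775_807

// The imported maximum is exactly half of the unsigned range.
assert(fullRange / 2 == importedMax)
```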
However, I do agree with @xedin, @xwu and others that magically “fixing” this in the Obj-C importer is an attractive approach.
My suggestion is that we go a bit deeper into magical territory:
- Force the compiler to treat `CGFloat` as an uninstantiable type in Swift. That means no explicit invocations of a `CGFloat` initializer, and no properties or variables declared in Swift code using the type `CGFloat`.
- Allow the `CGFloat` declarations to remain in imported Obj-C declarations. When passing `CGFloat` parameters, auto-convert both `Float` and `Double` values to the architecture-dependent representation of `CGFloat`. When receiving `CGFloat` return values, auto-convert to either `Float` or `Double` via type inference.
- For edge cases such as arrays of `CGFloat`, require the Swift-side representation to match the architecture-dependent representation (which may require `#if` to get right, for multi-architecture code).
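As a sketch of that last point, matching the Swift-side representation for an array might look like this (assuming `arch()` conditions that cover the platforms in question; the array is hypothetical):

```swift
// Element type chosen to match CGFloat's architecture-dependent
// representation, so the array can cross the boundary without
// per-element conversion.
#if arch(x86_64) || arch(arm64)
let dashPattern: [Double] = [4.0, 2.0]   // 64-bit: CGFloat is Double
#else
let dashPattern: [Float] = [4.0, 2.0]    // 32-bit: CGFloat is Float
#endif
```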
This solution relies on the fact (well, my claim) that passing `Double` values into Obj-C APIs that use 32-bit `CGFloat` requires conversion anyway, and that the semantics of Obj-C APIs that use 32-bit `CGFloat` are typically tolerant of the loss of precision. (For example, view coordinates are basically small integers, or sometimes halves or thirds of integers, which just happen to be passed as floating point. Extreme precision is pointless for this.)
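To make the precision claim concrete, here is a small sketch (the specific values are illustrative): coordinate-scale values survive a round trip through 32-bit `Float` exactly, while extra decimal precision does not.

```swift
// View-coordinate-scale values (small integers, halves, quarters) are
// exactly representable in 32-bit Float, so narrowing loses nothing:
let coordinate: Double = 10.5
assert(Double(Float(coordinate)) == coordinate)

// A high-precision value does lose digits when narrowed — but at
// coordinate scale the error is far below a pixel:
let precise: Double = 1.0000000001
assert(Double(Float(precise)) != precise)
```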
Unless I’m missing something important, this enables the following 3 cases:
- A body of code written only for architectures where `Double` and `CGFloat` are interchangeable, that never mentions `CGFloat` and has no explicit conversions. Nothing can go wrong.
- A body of code that carefully manages values and expressions as `Float` or `Double` on a case-by-case basis, passing some of those results to Obj-C APIs with no explicit conversions and negligible loss of precision at the API boundary. Very little can go wrong.
- A body of code that introduces its own `typealias` to `Double` or `Float` according to architecture, with values and expressions that look the same but use a different representation per architecture. That body of code basically reintroduces its own `CGFloat` lookalike, when that approach is deemed best for it. This is more or less the status quo ante.
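That third case, a minimal `CGFloat` lookalike built from a conditional `typealias`, might look like the following sketch (the name `Coordinate` and the `arch()` conditions are hypothetical):

```swift
// Hypothetical CGFloat lookalike: the same source text, but a different
// representation depending on the architecture it is compiled for.
#if arch(x86_64) || arch(arm64)
typealias Coordinate = Double
#else
typealias Coordinate = Float
#endif

let origin: (x: Coordinate, y: Coordinate) = (0.0, 0.0)
let width: Coordinate = 320.0
```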