Pitch: Allow interchangeable use of `CGFloat` and `Double` types

I think we’re ignoring the elephant in the room: the unexamined question of whether the following two things are simultaneously possible:

  1. A body of code that doesn’t have to consider representational issues because it knows that Double and CGFloat are effectively interchangeable.

  2. A body of code that functions correctly in terms of representation (precision) because it handles architectures where CGFloat is Float as well as architectures where CGFloat is Double.

They’re not simultaneously possible. Case 2 requires explicit conversions; otherwise its behavior is unpredictable in subtle ways. Case 1 isn’t worth having unless the conversions are implicit. But … you can’t do both of these at the same time.
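
To make the tension concrete, here’s a minimal sketch of what case 2 looks like today (the function and parameter names are mine, purely for illustration); the explicit `CGFloat(amount)` conversion is exactly the thing case 1 never wants to write:

```swift
import CoreGraphics

// Case 2 style: explicit about representation at the API boundary.
func inset(_ rect: CGRect, by amount: Double) -> CGRect {
    // On architectures where CGFloat is Float, this conversion can lose
    // precision; making that visible at the call site is the point.
    let d = CGFloat(amount)
    return rect.insetBy(dx: d, dy: d)
}
```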

Incidentally, the current conversion between CFType and NSType was mentioned up-thread. I think a better model is the current handling of NSUInteger and Int: Swift imports Obj-C unsigned integers as signed integers, damn the consequences! This wouldn’t really work for CGFloat, though, because loss of precision happens to be a nastier problem than the loss of half of NSUInteger’s value range, but I’m using this idea as my starting point for the following.
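
For reference, the NSUInteger model looks roughly like this at the call site: the Obj-C header says NSUInteger, Swift sees Int, and no conversion ever appears in user code.

```swift
import Foundation

// Obj-C declares:    @property (readonly) NSUInteger count;
// Swift imports it:  var count: Int { get }
let array: NSArray = [1, 2, 3]
let n: Int = array.count   // NSUInteger arrives as Int, no explicit conversion
```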

However, I do agree with @xedin, @xwu and others that magically “fixing” this in the Obj-C importer is an attractive approach.

My suggestion is that we go a bit deeper into magical territory:

  • Force the compiler to treat CGFloat as an uninstantiable type in Swift. That means no explicit invocations of a CGFloat initializer, and no properties/variables declared in Swift code using type CGFloat.

  • Allow the CGFloat declarations to remain in imported Obj-C declarations. When passing CGFloat parameters, auto-convert both Float and Double values to the architecture-dependent representation of CGFloat. When receiving CGFloat return values, auto-convert to either Float or Double via type inference.

  • For edge cases such as arrays of CGFloat, require the Swift-side representation to match the architecture-dependent representation (which may require #if to get right, for multi-architecture code; see the sketch after this list).
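
As a hedged illustration of that last bullet, an array of CGFloat might have to be written something like this, with the element type chosen per architecture (the arch list is only indicative):

```swift
// Sketch of the array edge case under this pitch: the Swift-side element type
// must match CGFloat's native representation on each architecture.
#if arch(arm64) || arch(x86_64)
let dashPattern: [Double] = [4, 2, 1, 2]   // 64-bit: CGFloat is Double
#else
let dashPattern: [Float] = [4, 2, 1, 2]    // 32-bit: CGFloat is Float
#endif
```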

This solution relies on the fact (well, my claim) that passing Double values into Obj-C APIs that use 32-bit CGFloat requires conversion anyway, and that the semantics of Obj-C APIs that use 32-bit CGFloat are typically tolerant of the loss of precision. (For example, view coordinates are basically small integers, or sometimes halves or thirds of integers, which just happen to be passed as floating point. Extreme precision is pointless for this.)
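
A quick worked example of that precision claim: typical view-coordinate values survive a round trip through Float exactly, and only values far outside that range lose anything.

```swift
let coordinate = 1024.5       // a typical view coordinate
let huge = 123_456_789.123    // far outside typical coordinate ranges

print(Double(Float(coordinate)) == coordinate)   // true: exactly representable in Float
print(Double(Float(huge)) == huge)               // false: a 24-bit significand can't hold it
```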

Unless I’m missing something important, this enables the following three cases:

  1. A body of code only for architectures where Double and CGFloat are interchangeable, that doesn’t ever mention CGFloat and has no explicit conversions. Nothing can go wrong. :wink:

  2. A body of code that carefully manages values and expressions as Float or Double on a case-by-case basis, passing some of those results to Obj-C APIs with no explicit conversions and negligible loss of precision at the API boundary. Very little can go wrong. :exploding_head:

  3. A body of code that introduces its own typealias to Double or Float according to architecture, so that values and expressions look the same but use a different representation per architecture. That basically reintroduces its own CGFloat lookalike, when that approach is deemed best for that body of code (see the sketch after this list). This is more or less the status quo ante. :stuck_out_tongue_winking_eye:
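
A minimal sketch of that case-3 lookalike, assuming the 64-bit/32-bit split is the one that matters (AppFloat is a made-up name):

```swift
// Hand-rolled CGFloat lookalike: same source text, different representation
// per architecture.
#if arch(arm64) || arch(x86_64)
typealias AppFloat = Double
#else
typealias AppFloat = Float
#endif

let lineWidth: AppFloat = 2.0
let halfWidth: AppFloat = lineWidth / 2
```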
