About SE-0307: Is the implicit Double to CGFloat conversion a performance hit on 64-bit platforms?

Today a conversation came up in my development team about unifying our usage of CGFloat and Double. Since SE-0307, Swift does the conversion automatically.
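
For context, a minimal sketch of what SE-0307 enables (the CoreGraphics import and the doubling function are just illustrative):

    import CoreGraphics

    func takesCGFloat(_ x: CGFloat) -> CGFloat { x * 2 }

    let d: Double = 1.5
    let r = takesCGFloat(d) // Double is implicitly converted to CGFloat here
    let back: Double = r    // and the CGFloat result converts back to Double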

I have two questions about that:

  1. Is there any performance impact when I pass a Double to a CGFloat API on 64-bit platforms? Sure, narrowing to Float on 32-bit platforms will probably have some performance impact, but for me the 64-bit case is the most relevant one. I have not seen any documentation on the performance implications of this implicit conversion.

  2. What's the guidance on custom UIKit classes? Should we continue to use CGFloat here, or is Double the way to go? (I know, this is technically a question for the Apple Developer Forums, but the question fits pretty well with the original topic in my opinion.)

1 Like

Per the documentation:

> The size and precision of this type depend on the CPU architecture. When you build for a 64-bit CPU, the CGFloat type is a 64-bit, IEEE double-precision floating point type, equivalent to the Double type. When you build for a 32-bit CPU, the CGFloat type is a 32-bit, IEEE single-precision floating point type, equivalent to the Float type.

So no translation is required on 64-bit platforms.
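
A quick sanity check on the machine at hand (just a sketch; NativeType is the Swift overlay's typealias for CGFloat's underlying type):

    import CoreGraphics

    print(MemoryLayout<CGFloat>.size) // 8 on 64-bit (LP64) platforms, 4 on 32-bit ones
    print(CGFloat.NativeType.self)    // Double on 64-bit, Float on 32-bit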

2 Likes

Thank you both, so that's settled then!

Remember the weird duck that is arm64_32 (Apple Watch), which is an ILP32 64-bit architecture. Since CGFloat == double only on LP64 platforms, this will result in expansion to ~~Double~~ Float on arm64_32.

1 Like

You mean it expands to Float on arm64_32. (Which it does, as it does on armv7k, which arm64_32 is generally required to match in terms of type sizes and so on.)

I think the right way to understand this is that it doesn't really have anything to do with the architecture and is really just a platform choice. It happens to be true on all current platforms that it follows the pointer width, and that allows a convenient implementation in terms of __LP64__ in the headers, but if Apple ever introduces another 32-bit platform, they very well might decide to make CGFloat use Double on it, and that would be a valid choice and code would simply have to adapt.

10 Likes

The answer here isn’t particularly specific to UIKit so it’s fine to answer here: you should now always use Double. There’s no downside to using Double. We’ve long since moved past the point on currently supported devices where you actually needed to care about using CGFloat because of its actual width. The main reason modern APIs or code continued to use CGFloat was because CGFloat was a different type, and so sticking with it reduced the number of annoying conversions needed. This led to a vicious cycle – more use of CGFloat begat more use of CGFloat. The beauty of SE-0307 is that it heals this problem. APIs and app authors are now able to use Double without any concern for the ergonomic issues it causes.
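
To illustrate that advice, a hypothetical helper declared entirely in terms of Double still composes cleanly with CGFloat-based APIs, since SE-0307 supplies the conversions:

    import UIKit

    // Hypothetical app-level helper: its API surface uses only Double.
    func inset(_ rect: CGRect, by amount: Double) -> CGRect {
        // insetBy(dx:dy:) takes CGFloat; the Double arguments convert implicitly.
        rect.insetBy(dx: amount, dy: amount)
    }

    let padded = inset(CGRect(x: 0, y: 0, width: 100, height: 100), by: 8)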

12 Likes

Thank you so much, Ben, this is helping me a lot. It's awesome to have an authoritative answer on this topic :heart:.

2 Likes

CGFloat and Double are not fully interchangeable; sometimes you are (still) forced to use CGFloat explicitly:

    import UIKit

    let color = UIColor.red
    var white: Double = 0
    var alpha: Double = 0
    color.getWhite(&white, alpha: &alpha) // Error: Cannot convert value of type 'UnsafeMutablePointer<Double>' to expected argument type 'UnsafeMutablePointer<CGFloat>'
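
The implicit conversion doesn't reach through UnsafeMutablePointer, so the usual workaround (a sketch, reusing the import above) is to use CGFloat locals and convert at the boundary:

    let color = UIColor.red
    var w: CGFloat = 0
    var a: CGFloat = 0
    color.getWhite(&w, alpha: &a) // pointer element types must match exactly
    let white = Double(w)         // convert back to Double at the boundary
    let alpha = Double(a)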

BTW, just recently I found a case where the compiler crashed with an illegal-instruction diagnostic while compiling some valid Swift source that used Double and CGFloat interchangeably. The crash went away once I added an explicit type annotation:

`var x = 1.0`  →  `var x: CGFloat = 1.0`

This is well worth a bug report, if you haven't filed one already.

1 Like

This is worth quoting; this is cool. Many thanks for the clarification!

This might be worth filing a bug for.

I don't believe that this conversion is expected to work (it can't, because it would be invalid on platforms where CGFloat == Float).

2 Likes

I just wish this conversion had been in place before SwiftUI's APIs all ended up using CGFloat. I realize I can just use Double with those APIs; it's just a shame that a next-generation UI framework will perpetuate CGFloat in its API for the foreseeable future.

1 Like

But you can’t “just use Double” if you target watchOS or legacy 32-bit platforms. You can mostly use Double, but if your program is sensitive to rounding error you need to understand the consequences of whether your platform uses Float or Double.

It so happens that a large number of CGFloats are assigned integral multiples of 1/2 or 1/3. But 1/3 isn’t representable as a binary float, and the rounding differs between Double and Float. This can cause problems for cross-platform serialization code.
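
A minimal illustration of the mismatch (nothing platform-specific, just the two approximations of 1/3):

    let d: Double = 1.0 / 3.0
    let f: Float = 1.0 / 3.0
    // The Float approximation is coarser, so a value serialized at Float
    // precision on one platform won't compare equal after being read back
    // at Double precision on another.
    print(d == Double(f)) // false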

This is precisely why you should "just use Double". Double is the same on all supported platforms. If you confine your use of CGFloat to the places where it appears explicitly in UI framework APIs, you will not have these problems, because the program state that you want to serialize will be made up of Doubles instead.

I can tell you from experience there have been many places within the UI frameworks where comparing two CGFloats on 3x devices has gone awry due to one having been computed with different intermediate precision.

In retrospect, it would have been very nice to use some form of rational coordinates, but that would have come at great expense for interoperating with Quartz.

1 Like

... which is why any new code should always use Double to avoid this problem.

> If you are building on watchOS, the APIs will demand you pass a Float, not a Double.

No, they will demand that you pass a CGFloat, which is where the implicit conversions in question come in. They cannot handle every case, but they can handle quite a few.

CGFloat is Float on watchOS. If you have a Double representation of 1/3, it’s going to be converted to Float somehow when passed to any API that accepts a CGFloat. That approximation of 1/3 will not necessarily be the same as an approximation of 1/3 computed directly from the literal or from some computation that exclusively used Float for intermediate values.

This has been the source of real bugs that the UIKit team has needed to fix, including infinite loops due to insufficient comparison tolerances.