Equivalence between Double and CGFloat

For what it's worth, I don't think anyone is being intentionally passive-aggressive with you, nor is anyone nay-saying for the sake of nay-saying.

To elaborate on the logic here — one part of Swift's philosophical standpoint on safety was born from years of experience with bugs in other languages which do allow implicit conversions between types. There is a trade-off present between convenience and safety here, and indeed, in many cases, it's very frustrating to have to spell out what is plainly obvious in your head: "this Double value can just fit in a CGFloat just fine, so why can't I express that!?"

The issue arises when a Double value cannot fit in a CGFloat: namely, when CGFloat is not Double, but Float. There is an enormous range of floating-point values which are representable by Double, but not by Float (Float being a smaller type, with less storage for information). Languages like C, which allow implicit conversion, will happily let you stuff a Double value into a Float, and when it doesn't fit... well, parts of the data are simply discarded. A finite value too large to fit in a Float might simply round up to infinity, and a number too small to fit in the precision of a Float might round to 0. The same goes for trying to fit a UInt16 value into a UInt8, and any number of other combinations of trying to fit larger types into smaller types. (A Double can't even necessarily represent Int values larger than 2^53 exactly.)
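To make those failure modes concrete, here's a minimal sketch (hypothetical values; Swift's explicit conversions are used to show what a C-style implicit conversion would do silently):

let huge: Double = 1e300                   // fine as a Double
print(Float(huge))                         // inf — too large for Float, rounds to infinity

let tiny: Double = 1e-60                   // below Float's smallest subnormal value
print(Float(tiny))                         // 0.0 — the value silently underflows to zero

let wide: UInt16 = 300
print(UInt8(truncatingIfNeeded: wide))     // 44 — the high bits are simply discarded

let bigInt = 9_007_199_254_740_993         // 2^53 + 1
print(Int(Double(bigInt)) == bigInt)       // false — Double can't represent it exactly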

This implicit data loss can lead to incredibly subtle bugs that are very troublesome to resolve. Because the lossy conversions can be implicit in the design of an API (e.g. you might have a function taking a Float which must now be amended to take a Double), they can require a lot of work to change, or worse, might be impossible to fix if they live in the interface between your code and someone else's.

So, where does this lead us? In practice, the compiler cannot possibly know that your Int value holds a number that you assert is small enough to fit in a CGFloat (e.g. 1), so you must express that yourself by converting with a CGFloat initializer. This is more work, and it is indeed frustrating in many cases, but it also has the benefit of documenting your intentions to future readers of the code, including yourself.
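As a minimal sketch of what that explicit spelling looks like in practice (the draw function here is hypothetical):

import CoreGraphics

let scale: Double = 1.5

// The conversion has to be written out; the initializer documents the intent.
let lineWidth: CGFloat = CGFloat(scale)

// The same explicit step is needed when passing a Double to a CGFloat parameter:
func draw(width: CGFloat) { /* ... */ }
draw(width: CGFloat(scale))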

The philosophical decision that Swift made at its core is to always prefer safety over convenience when a trade-off must be made. This is one of those places.


I'll also note that, from the practical perspective of actually changing something here, CGFloat is not just a Float or a Double (e.g. a typealias of one or the other based on platform), but is instead its own struct type wrapping one of those values:

@_fixed_layout
public struct CGFloat {
#if arch(i386) || arch(arm)
  /// The native type used to store the CGFloat, which is Float on
  /// 32-bit architectures and Double on 64-bit architectures.
  public typealias NativeType = Float
#elseif arch(x86_64) || arch(arm64)
  /// The native type used to store the CGFloat, which is Float on
  /// 32-bit architectures and Double on 64-bit architectures.
  public typealias NativeType = Double
#endif
 
  <snip>

  /// The native value.
  public var native: NativeType
}

This means that:

  1. It is possible to extend CGFloat separately from Float or Double, and
  2. It is possible to overload functions by both parameter and return types on CGFloat/Float/Double (e.g. func myFoo(_:CGFloat), func myFoo(_:Double))

Changing CGFloat to be a typealias of either Float or Double for the purposes of direct assignment has the risk of being a source-breaking change due to broken extensions and overload ambiguity. There would be other ways to address this, but this specifically is likely not a productive direction.
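As a rough sketch of why the struct-based design matters, using the myFoo overloads from point 2 above:

import CoreGraphics

// Point 2: because CGFloat is a distinct struct, these are three separate overloads.
// If CGFloat were instead a typealias of Double on 64-bit platforms, the first two
// would collide as redeclarations — exactly the source-breaking risk described above.
func myFoo(_ value: CGFloat) -> String { "CGFloat overload" }
func myFoo(_ value: Double)  -> String { "Double overload" }
func myFoo(_ value: Float)   -> String { "Float overload" }

print(myFoo(CGFloat(1)))   // "CGFloat overload"
print(myFoo(Double(1)))    // "Double overload"

// Point 1: CGFloat can be extended independently of Float and Double.
extension CGFloat {
    var doubled: CGFloat { self * 2 }
}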
