Equivalence between Double and CGFloat

Hello everyone. As you probably remember, CGFloat is shorthand for Double or Float, depending on whether the platform is 64-bit or 32-bit. There isn't anything special about CGFloat; it is just a typedef. But right now it is not interchangeable with Double in Swift. (This was not an issue in Objective-C, which implicitly converted double or float values to CGFloat.)

If you write lots of graphics calls, you already know what a nuisance it is to constantly convert all of your Doubles and Floats to CGFloat. Controls like NSSlider provide a floatValue or doubleValue, but you have to convert these input values to CGFloat manually with the CGFloat() initializer, over and over. CGSize and CGRect take CGFloats, and there is a good chance you have lots of Doubles and Floats mixed into your code.
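To make the pain concrete, here is a minimal sketch (the value name is made up; imagine it came from a control's doubleValue):

```swift
import Foundation

// Hypothetical value read from a control's doubleValue.
let sliderValue: Double = 120.0

// Every CGFloat-taking API currently needs an explicit conversion:
let size = CGSize(width: CGFloat(sliderValue), height: CGFloat(sliderValue) / 2)
```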

Since there isn't any real reason for this, I would like to suggest in the next update of Swift we make it so that a CGFloat constant or variable can be set equal to a Float or Double and the rest is handled automatically.


I definitely have felt the annoyance of this, but I don’t think that this very platform-specific problem deserves a first-class language solution.

I usually solve it by just having a variety of convenience extensions on various types taking and returning CGFloats, including initializer overloads for any CGFloat-taking graphics structs that need them.

It’s a really easy and easily-encapsulated work-around.

Maybe this should be considered a bug fix instead of a full-blown proposal, since this was not an issue in Objective-C. It wouldn't exactly be a "feature," just a change to how a certain situation is handled.

But I think the topic might actually be a general language issue regarding how data is assigned between types. There is another situation like this: the conversion between Int and Bool, where the values of the two types cannot be assigned to each other and require conversion. We all know that a Bool set to true corresponds to an Int of 1, but Swift does not have this language capability yet. So perhaps the topic that needs to be addressed covers an array of similar situations.
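For reference, the explicit spellings Swift requires today look something like this:

```swift
// Swift has no implicit Bool <-> Int conversion, so each direction is spelled out.
let flag = true
let asInt = flag ? 1 : 0        // Bool -> Int
let backToBool = (asInt != 0)   // Int -> Bool
```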

There are indeed workarounds, like you described. But, I like to think of the beginners who are just getting into Swift; they aren't going to immediately set up categories (extensions) for Double and NSControl.

What you're effectively asking for is implicit type conversions between Doubles and Floats. Swift doesn't allow implicit type conversions in any circumstance; it's a philosophical difference with languages like C. For the same reason, type conversions are required between e.g. Int64 and Int on 64-bit platforms, even though they have identical underlying types.

I agree that this problem is best solved using extensions. For example, on NSControl:

extension NSControl {
    var cgFloatValue: CGFloat {
        return CGFloat(self.doubleValue)
    }
}

The other alternative would be to change the definition of CGFloat to be a typealias to a Float or Double depending on the platform. I think in most cases that wouldn't be source-breaking, but it would introduce issues for people trying to write multi-platform code; code that compiles fine on 64-bit would suddenly break when compiling for 32-bit. A simple example of this:

func someFunction(_ x: CGFloat) { /* ... */ }

let value = 5.0 // inferred as Double
someFunction(value) // would work on 64-bit but would error at compile time on 32-bit

I personally don't see that as an improvement.


Perhaps the proposal would be to change the philosophical position of Swift to allow implicit type conversions, since we have several examples, right here, in practice that the theory does not always work well.

I would like to hear the logic behind why a Float cannot be set to a value of 1.0 from an Integer of value 1.

It is absolutely absurd to be using the conversion of CGFloat(doubleVariable) when the CGFloat and Double are the exact same data type underneath. As someone who studied Political Science, I can say there is no philosophy going on here, just stubborn adherence to some concept, at the expense of rational behavior.

If we are all writing extensions to NSControl (I am) and Bool, then the so-called philosophy of Swift needs to be amended.

If you are interested in this information, you may search the forum with the phrase "integer promotion" to read several previous threads on the topic. You will see that there are both design and implementation reasons why Swift does not work like this today. As these forums are now very easy to search, it is not necessary (nor is it practical) to repeat all of those points here.


All of you people on this thread are very passive-aggressive.

It doesn't matter what threads have existed on this topic. It needs to be changed.

Look at how long this Stack Overflow page is on the simple question of how to convert a Boolean value to an Integer: https://stackoverflow.com/questions/40242702/converting-boolean-value-to-integer-value-in-swift-3

If you don't think there is something wrong with that, all of you are wasting time on here.

I have read through a few posts on this forum. There are tons of little nitpickers and naysayers anytime someone makes a reasonable suggestion. Who would bother pitching anything to a bunch of irrational trolls on here?

One example: someone said, "Swift is really hard to install on Windows." Then some troll came up and said, "no it isn't." And the person had to say, "um, well I find it hard to install." You people will dispute obvious points just so that you can naysay someone. There are personality problems on this forum and they are loud.

For what it's worth, I don't think anyone is being intentionally passive-aggressive with you, nor is anyone nay-saying for the sake of nay-saying.

To elaborate on the logic here — one part of Swift's philosophical standpoint on safety was born from years of experience with bugs in other languages which do allow implicit conversions between types. There is a trade-off present between convenience and safety here, and indeed, in many cases, it's very frustrating to have to spell out what is plainly obvious in your head: "this Double value fits in a CGFloat just fine, so why can't I express that!?"

The issue arises when a Double value cannot fit in a CGFloat: namely, when CGFloat is not Double, but Float. There is an enormous range of floating-point values which are representable by Double but not by Float (Float being a smaller type, with less storage for information). Languages like C, which allow implicit conversion, will happily let you stuff a Double value into a Float, and when it doesn't fit... well, parts of the data are simply discarded. A finite value too large to fit in a Float might simply round up to infinity, and a number too small to fit in the precision of a Float might round to 0. The same goes for trying to fit a UInt16 value in a UInt8, and any number of other combinations of larger types into smaller types. (A Double can't even necessarily represent, with full precision, Int values larger than 2^53.)
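Pure Swift makes this loss easy to demonstrate; the explicit Float() initializer is doing real, lossy work here:

```swift
// Narrowing Double -> Float discards information at both ends of the range.
let huge: Double = 1e39          // fine as a Double
let overflowed = Float(huge)     // exceeds Float.greatestFiniteMagnitude (~3.4e38)
// overflowed is now +infinity

let tiny: Double = 1e-46         // fine as a Double
let underflowed = Float(tiny)    // below Float's smallest subnormal (~1.4e-45)
// underflowed is now 0.0
```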

This implicit data loss can lead to incredibly subtle bugs that are very troublesome to resolve. Because they can be implicit in the design of an API (e.g. you might have a function taking a Float which must now be amended to take a Double), they can require a lot of work to change, or worse, might be impossible if they are found in the interface between your code and someone else's.

So, where does this lead us? In practice, the compiler cannot possibly know that your Int value holds a number that you assert to be small enough to fit in a CGFloat (e.g. 1), and so you must express that yourself by converting using a CGFloat initializer. This leads to more work, and this is indeed frustrating in many cases, but also has the benefit of documenting your intentions to future readers of the code, including yourself.

The philosophical decision that Swift made at its core is to always prefer safety over convenience when a trade-off must be made. This is one of those places.

I'll note also that from a practical perspective of actually changing something here, CGFloat is not just a Float or a Double (e.g. a typealias of either based on platform) but is instead its own struct type wrapping one of those values:

public struct CGFloat {
#if arch(i386) || arch(arm)
  /// The native type used to store the CGFloat, which is Float on
  /// 32-bit architectures and Double on 64-bit architectures.
  public typealias NativeType = Float
#elseif arch(x86_64) || arch(arm64)
  /// The native type used to store the CGFloat, which is Float on
  /// 32-bit architectures and Double on 64-bit architectures.
  public typealias NativeType = Double
#endif

  /// The native value.
  public var native: NativeType
}
This means that:

  1. It is possible to extend CGFloat separately from Float or Double, and
  2. It is possible to overload functions by both parameter and return types on CGFloat/Float/Double (e.g. func myFoo(_:CGFloat), func myFoo(_:Double))
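As a sketch of point 2, this kind of overloading compiles today precisely because CGFloat is a distinct type (the function names here are invented):

```swift
import Foundation

// Two distinct overloads; the compiler picks one based on the argument type.
func half(_ x: CGFloat) -> CGFloat { return x / 2 }
func half(_ x: Double) -> Double { return x / 2 }

let a = half(CGFloat(4))  // resolves to the CGFloat overload
let b = half(4.0)         // a bare literal defaults to Double, so the Double overload
```

If CGFloat were instead a typealias of Double, these two declarations would collide as an invalid redeclaration on 64-bit platforms.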

Changing CGFloat to be a typealias of either Float or Double for the purposes of direct assignment has the risk of being a source-breaking change due to broken extensions and overload ambiguity. There would be other ways to address this, but this specifically is likely not a productive direction.


You are clearly a nice guy, taking the time to explain all of this.

I would like to take you aside, for a moment, because they have hidden this thread, and no one is going to see this post. People who contribute to Swift, like you, are unable to see the forest for the trees. I know that double is called double because it has double the precision of float. But the kind of feature you are talking about ("you will not assign a Double to a CGFloat in an otherwise 99% foolproof situation") has to be opt-in because it gets in the way of prototyping apps, just like optionals get in the way. I force-unwrap a lot of optionals while prototyping because Swift is a huge time-waster like this.

The trade-off you all want is for computer science pedants or people who are overly concerned about values being nil-- situations that might concern mission-critical applications like the military and so on. That is, not everyone cares that much about writing code in an obsessively "safe" way. What you gain by preventing objects from being nil through the optionals scheme you lose in terms of the fluidity of writing code to get things done. These issues-- whether the object is nil or whether the float has been truncated-- are tangential to the larger issues. You could design the language so that these features are opt-in.

Instead, you all are tyrannical, imposing your computer science ideology on everyone and it is oppressive.

I’m afraid you’re not going to make much headway with an ideological argument. Swift can’t be everything, and it’s not one of its goals to make prototyping as fast and as flexible as possible. At the risk of misquoting, I think Chris Lattner said something like the goal of Swift not being that it takes a short amount of time to write code, but that it takes a short amount of time from starting to a final, working, hopefully bug-free product. That unfortunately means that if you’re mainly interested in the first part Swift isn’t going to be the best choice.

If you want a less-strict language for Mac or iOS development, Objective-C or a scripting language may be a better fit for you. But Swift has been designed around a particular set of principles, and those principles aren’t likely to change.


I don't believe this thread is hidden in any way, but regardless... Languages have to make tradeoffs in their design, and these tradeoffs lie on a spectrum. You can take a look at even languages like C with their very loose type systems which allow implicit conversions (among other things) and consider even them "pedantic" and "tyrannical" when compared to extremely dynamic languages like Ruby or Python. A Ruby developer might look at Swift and ask "why should the compiler care if I assign a String to an Int variable? Just let me do it!"

One man's "tyranny" is another man's feature, and one man's "fluidity of writing" is another man's bug. One notorious problem in languages like Ruby and Python is simply not being able to guarantee that the type of an object you rely on is what you expect. Where you want an integer, someone can pass you a string, and depending on what you're doing, things may or may not work, in very subtle ways. Things might work today, but not tomorrow; this is both a feature of these languages, and a drawback if you're hit with one of these bugs.

Some programming languages have stronger type systems: yes, you have to take the trouble to express the fact that your function accepts an Int, but the benefit is that someone won't be able to accidentally pass in a String, because the compiler can help.

Swift lies on the side of the spectrum that favors strong static guarantees like this, in order to avoid the types of bugs present in extremely dynamic type systems. It defines away whole classes of bugs by making them impossible to represent. Indeed, this comes at a cost: additional typing.

Optionals come at a similar cost, but if you spend much time in the Java world, you might become familiar with the pain that is NullPointerException and why it is so difficult to get object nullability correct. Swift tried to learn from this and brought nullability strongly into the type system (which is something that even Java 8's Optional does not enforce at the type level).

Luckily, we live in an age where many, many languages are available to you and can live in your toolbox. Even within the context of Apple platforms, you have Objective-C, which, with its C roots, gives you the option to ignore nullability, and implicitly convert between types, and any number of things which you might consider more natural (while someone else might consider unsafe). Depending on what you are trying to write, perhaps even less restrictive languages like Ruby and Python may be a breath of fresh air to you.

That being said, Swift is not Ruby or Python, and it is highly unlikely to change to move away from the current self-consistency here.


Nobody has hidden this thread. The moderators (myself included) don't lightly hide or delete posts that haven't been made in bad faith (e.g. intentional trolling or spamming), and we certainly don't punish posters just for disagreeing with Swift's language design philosophy. You are welcome to argue that we should radically change that philosophy, although frankly that is quite unlikely to happen — Swift may be new to you, but it's a fairly established language at this point and major changes would have major costs for the hundreds of thousands of programmers who've already written Swift code.

However, I am going to insist that you make your points without calling people irrational trolls or oppressive tyrants. Take a break for a day.

Edit: Apparently Discourse automatically hid the thread when I applied the probation, so I immediately made myself a liar. I've unhidden the posts.

Edit 2: Discourse seems to keep hiding the thread for some reason.


To return to the original question from a different direction… why is CGFloat not a typedef? With 32-bit becoming largely irrelevant on Apple platforms, it seems rather unfortunate. The documentation doesn’t seem to mention this, let alone explain it.

You have a rationale for CGFloat not being a typealias just a few posts above.

Actually, I don’t. The second half of that post describes the situation and some of the consequences of it, but not the reasoning behind it. The first half is about implicit casts between numeric types, which is a separate issue.

I believe the reasoning relates to the fact that your code loses portability if it's a typealias like that. Code that assumes one size or the other becomes invalid when compiled for the other architecture and will fail to build: e.g., if you pass a CGFloat to a method requiring a Double and then compile for 32-bit, they're different types, so the typealias assumption no longer holds.


Right. If CGFloat were just a typedef, then people would write code targeting 64-bit platforms and then drown in compile errors when they tried to compile it for watchOS. As it stands, the struct creates some minor inconvenience up front, but makes the problem relatively easy to deal with incrementally.

The history of why CGFloat is the way it is is long and mostly irrelevant now, and most code that does non-trivial computations should convert to Double at API boundaries; having that conversion be a clear, deliberate choice is not a bad thing.
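A sketch of that "convert at the API boundary" pattern (all names here are invented for illustration): do the arithmetic in Double, and produce a CGFloat only at the point where a graphics API actually needs one.

```swift
import Foundation

// Computation stays in Double; CGFloat appears only at the boundary.
func layoutWidth(items: Int, spacing: Double) -> CGFloat {
    let width = Double(items) * 44.0 + Double(items - 1) * spacing  // computed in Double
    return CGFloat(width)  // one clear, deliberate conversion
}
```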

If I had a time machine, I might go back to Swift 1 and make CGFloat implicitly convert to and from Double to make this a bit less verbose, but that would have the downside of adding ambiguity to many source expressions, and it's unlikely that we would make that change today.
