Equivalence between Double and CGFloat

Today, I write all Double and Float values as CGFloat. I try not to create Floats or Doubles at all, and I have extensions on NSControl that convert to CGFloat. For code that makes lots of graphics calls, this is the best strategy I have found, and it works well.

I have to use Swift on the Mac. I don't have a choice, because that is where things are headed at Apple. So I would kindly suggest that people here think about what happens in the intervening years between now and when Swift catches on on other operating systems.

I also use the ternary operator when converting from Bool to Int and vice versa. All of this is relatively minor, but it looks like a set of features that was never completed; it does not look like part of a safety-oriented programming philosophy. Until posting here, I simply thought that implicit type conversion had not been implemented yet.
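The Bool/Int round trip mentioned above looks roughly like this, since Swift has no implicit conversion between the two types:

```swift
// Swift will not implicitly convert Bool <-> Int, so the ternary
// operator (or an explicit comparison) does the work.
let isEnabled = true
let flagValue = isEnabled ? 1 : 0   // Bool -> Int
let restored = flagValue != 0       // Int  -> Bool
```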

Unless your workplace forces the use of Swift, I’m not sure this is accurate. Just because Apple is evangelising Swift doesn’t mean they’re ditching Objective C, which has a lot of the features you’re looking for.

I think a lot of what you perceived as hostility was just us replying to something we see often here: requests for convenience functions that trade off safety or correctness. We've discussed this stuff at length, but ultimately, although our input is valued, this is not a democracy. The core team decides the priorities for Swift, and proposals are vetted against them. We simply recognize a proposal that won't succeed when we see it; it's not that we necessarily disagree with you. I'm sorry if you felt attacked, but for every idea that is accepted, many are pitched that we know won't pass the language's core goals of safety and correctness. Oftentimes we don't have time for an extended response giving this backstory.

As for whether future Swift frameworks will force you to use Swift: Objective-C on the Mac never forced you to use Objective-C. There are other programming languages and tools, albeit perhaps not recommended ones.

Please remember also that your priorities are not necessarily everyone else’s priorities, and things that irk you, like the lack of implicit conversion, may not irk everyone else, or may take a back seat to frustrating, hard-to-find bugs. For people who make their livelihoods working on these platforms and build software, e.g. banking software, as I do, it may as well be military software, as you said (I used to do that too, in Swift).

1 Like

Implicit conversion is one of the major sources of errors in C-language-family programs, so I'm not sure where this impression comes from; their absence is absolutely motivated by safety concerns.

Those other operating systems do not have API with the legacy baggage of CGFloat, and should simply continue to expose their API as either Float or Double, dodging the issue entirely.

3 Likes

Thank you. It’s not a very satisfying answer, but it’s good to see it written down. (It would be even better to see it mentioned in documentation, of course.)

It seems to me that people who aren't John Pratt are still finding this thread useful, so I'll leave it open.

4 Likes

When building an iOS app focused exclusively on 64-bit architectures, is there any quick hack in Swift 5 to pass Double into CGFloat parameters without explicit casting? I saw mentions of typealias CGFloat = Double, but this results in:

Cannot convert value of type 'CGFloat' (aka 'Double')
to expected argument type 'CGFloat'

This topic is obviously quite polarized with no immediate resolution, but in the meantime our codebase would benefit phenomenally by avoiding casts (nearly dozens of times) in each UI component that deals with UIKit's APIs that take CGFloat all over the place.

This is totally dependent on the data flow in your application, but if you know that your calculations are destined for UI components, is there a way that you can write them such that they use CGFloat throughout to begin with?

Agreed.

Generally, it should be somewhat rare that you actually want Double and CGFloat to mix a lot in your code. If you're handling UI-related work, like layout calculations and other associated math, CGFloat is actually the correct type to use in the first place. Generally, the only time you want to mix the two is at the border between model code and view code, e.g. denoting progress. My recommendation is always "if it's UI, it's a CGFloat; if it's model, it's Double." That way, you only have a couple of casts to deal with at the border crossing, where the conversion should be explicit anyway, because you are switching between model state and UI state.
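The border-crossing pattern described above might look like this; `DownloadModel` and `ProgressBar` are hypothetical names used purely for illustration:

```swift
import Foundation  // provides CGFloat on non-Apple platforms

// Model layer: plain Double, no UI types.
struct DownloadModel {
    var progress: Double  // 0.0 ... 1.0
}

// View layer: CGFloat throughout.
struct ProgressBar {
    var trackWidth: CGFloat

    // The single, explicit cast lives at the model/view border.
    func filledWidth(for model: DownloadModel) -> CGFloat {
        return trackWidth * CGFloat(model.progress)
    }
}
```

With this split, `CGFloat(model.progress)` is the only conversion, and it marks exactly where model state becomes UI state.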

6 Likes

We're building a camera app with a sizable amount of model code, so yes there's frequent boundary crossing into the UI, calculations+computations, etc.

I'm not sure I agree with the statement:

CGFloat is actually the correct type to use [for UI]

If we only care exclusively about 64 bit architecture, then CGFloat is simply the equivalent of Double. So therefore if I write something like let border = 5.0 (which is implicitly a Double), it would be extremely convenient (and intuitive) in Swift to pass this value where CGFloat is required, because CGFloat will always be a Double in our app.

A more onerous case is the mixture of Double, Float and CGFloat when trying to do calculations. It's just really messy code.

In the case where we cannot practically convert everything to CGFloat, is there any hack that indeed allows Double values to be used where CGFloat is the type?

Underlying issue aside, this is a poor diagnostic and we should do better. When we fail to coerce one type to another type with an identical name, we should print the fully-qualified type names. So this would become:

Cannot convert value of type 'MyApp.CGFloat' (aka 'Double') to expected argument type 'CoreGraphics.CGFloat'

Comparing the two error messages, I think the second one makes it much more obvious what is going on (or going wrong).

6 Likes

CGFloat is CoreGraphicsFloat. It’s for the Core Graphics system, and it’s adopted by UIKit as the unit of measure for all user interface measurement. I think that alone makes it the correct unit for UI calculations on iOS.

When it comes to photos, yes, that makes things a little more interesting. That is often because you're using photos with the Core Graphics system. So in a sense, this is actually unrelated to the UI discussion; ironically, it's the original core reason CGFloat exists, not the adopted one. I'd try to pick one or the other for the component you're working with. If you're dealing with Core Graphics, use CGFloat outright. If your math module is separate, work out what works for you and bridge at the right spots. That may also help with using Accelerate for faster calculations.

That said, you did also mention in your OP “nearly dozens of times”. Considering you’re working on a photo-based app, I’m surprised it’s that small!

Keep in mind the equivalency issue Swift is trying to solve for you by not letting this silently get through. You may be treating them as equivalent, but you never know what’s coming, and by keeping these separate, or making you cast 20 times, it makes it a lot easier for you to track down regressions if/when you want to port to a new architecture.

Also, even if this may always be a safe assumption for you, if Swift allowed it silently for everyone who compiles only for 64-bit, what happens when they move some code to watchOS and it suddenly starts breaking? Is that more or less confusing than just saying "they're not necessarily equivalent" and making you say "I'm aware of that" with a cast?

I definitely see the frustration of declaring variables with the type at the call site eg:

let value1: CGFloat = 5
let value2: CGFloat = 7
let value3: CGFloat = 9
let value4: CGFloat = 11

I know it’s not a great solution, but you could always declare upfront under the same type:

let value1, value2, value3, value4: CGFloat
value1 = 5
value2 = 7
value3 = 9
value4 = 11
1 Like

From my perspective, CGFloat is a legacy artifact to support 32 bit architectures. For our purposes, CGFloat is exactly of type Double because we only target 64 bit architectures. Forcing a cast is counterintuitive and tiresome when the type is literally the same. And I think you misunderstood, our codebase doesn’t have dozens total, but rather each class has dozens of conversions — adding up to hundreds of conversions littering our code and hampering readability. It’s a large wart in an otherwise beautiful language.

Furthermore, in regard to CGFloat helping us avoid breaking changes when 128-bit architectures arrive a decade from now: I can tell you this is not even remotely close to being on our radar (or any other startup's).

So from what I’m gathering in this discussion, there is no possible hack/trick to get CGFloat to be implicitly interpreted as Double? Am I correct that these are the only two options to avoid frequent conversions?

  1. Use CGFloat everywhere, and be explicit with initializations so that Swift doesn’t implicitly give us type Double
  2. Create extensions all over UIKit that take Double, convert it to CGFloat, and pass it through to the original API (yuck)
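Option 2 could look something like the following. A real version would extend UIKit types such as UIView; the `Box` struct here is a hypothetical stand-in so the sketch is self-contained:

```swift
import Foundation  // provides CGFloat on non-Apple platforms

// Stand-in for a UIKit type whose API takes CGFloat.
struct Box {
    var cornerRadius: CGFloat = 0
}

// The "option 2" wrapper: accept Double, cast once, forward on.
extension Box {
    mutating func setCornerRadius(_ radius: Double) {
        cornerRadius = CGFloat(radius)
    }
}
```

The cast still happens, but it is written once in the wrapper instead of at every call site.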
1 Like

Use CGFloat as much as possible and eg:

extension CGFloat {
    var double: Double { return Double(self) }
}
extension Double {
    var cgFloat: CGFloat { return CGFloat(self) }
}

You can change the default inference for float literals — that may erode some of the pain points.

typealias FloatLiteralType = CGFloat

let number = 5.0 // is inferred as CGFloat
15 Likes

Forgive my bluntness, but then your issue is your preference to ignore the difference.

Your argument seems to be “there is no difference in my scenario” which may be the case. But we can’t write Swift designed for your scenario. We have to write it for everyone’s scenario.

For everyone else who has to deal with reality of CGFloat, I think it’s clear it should remain as is.

Also, I said nothing about 128 bit. I was implying another platform that was 32 bit. I’m sure the people who dealt with iOS only before the watch came said “we won’t ever have to support 32 bit again”... until they did.

5 Likes

@Rod_Brown the above quote captures my sentiment exactly. You seem to have a more purist perspective on the matter, which is peculiar since Swift already offers plenty of options to improve developer ergonomics. It would be really great to have something similar for Double+CGFloat when only 64-bit matters.

And whoever said Swift has to cater only to my situation? I'd venture to speculate that the majority of iOS developers/teams care only about 64-bit (perhaps even the vast majority, since watchOS is now 64-bit). As with many other language features, this doesn't have to be a binary judgement where "it's clear it should remain as is". Giving an option which greatly reduces clutter in the codebases of an [arguable] majority would be a middle ground that works for everyone.

Thanks @bzamayo for offering such a helpful solution! This is exactly the type of advice I was looking for, which helps tip us in a better direction. I will change all our Double vars/func parameters to CGFloat, and then change FloatLiteralType typealias, which will significantly improve the readability of our codebase.

1 Like

Whether or not, in hindsight, CGFloat should have implicitly converted to Double inside Swift code is irrelevant: it doesn't. Which means we need to handle it the way it is.

If I proposed another API we could have built a certain way, would you insist on using it that way now? This makes no sense.

Additionally, I’m personally glad they didn’t. It would be hell if you made a number in Swift code that was just above the range representable in 32 bits, and then sent it back to an Obj-C API on a 32-bit processor like the one in watchOS: as it crossed the bridge back to Obj-C, it would have had a conversion failure and overflowed or been capped. This, unfortunately, is a limitation that fixing things on the Swift side won’t help you with on the other side.

The same principle applies nonetheless. We cater to everyone, not just the majority who want to ignore the difference. And again, to be honest, that is ignorance no one should give in to.

You seem to think Swift caters to the majority. It doesn’t. It caters to correctness, because correctness caters to everyone’s needs, just not necessarily everyone’s convenience.

That’s great. I’m glad there’s a solution for this, I didn’t know this particular one.

1 Like

You seem to be full of opinions that are absolute in nature. "Correctness" is just one dimension of a programming language, and a subjective one at that. Swift has taken great measures in almost every major version to become more convenient, more clean, and more readable. A solution here would be no different.

Anyway, I did not come here to get into a debate over whether or not my opinions are pure enough to warrant consideration. I came to ask the experts for a solution -- thanks again @bzamayo.

1 Like

The pain point of doing mixed mode arithmetic can be mitigated by operator overloading (I'm surprised no one has mentioned this.) If you overload the basic math operators you can do mixed mode math without typecasting.

extension CGFloat {
    // Allow Double + CGFloat without an explicit cast at the call site.
    static func +(lhs: Double, rhs: CGFloat) -> CGFloat {
        return CGFloat(lhs) + rhs
    }
}

let double: Double = 5
let cgfloat: CGFloat = 6
let result = double + cgfloat // CGFloat, 11

You just need to write the overloads for the various combinations that you need. I've never done this in an app but it's legal although contrary to the language design.
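For completeness, the mirrored operand order follows the same shape; `-`, `*`, and `/` overloads would be analogous:

```swift
import Foundation  // provides CGFloat on non-Apple platforms

extension CGFloat {
    // Mirrored operand order: CGFloat on the left, Double on the right.
    // The body casts once and falls through to the standard CGFloat `+`.
    static func +(lhs: CGFloat, rhs: Double) -> CGFloat {
        return lhs + CGFloat(rhs)
    }
}
```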

1 Like

While you are right that this is true for "a" programming language, Swift is opinionated about this.

2 Likes