SwiftUI, which is Swift-only and does not have a translated API, uses CGFloat extensively. Because of this, I think CGFloat will remain a heavily used type for the foreseeable future.
Existing code already has to do explicit conversions to CGFloat, so adding implicit conversions should not break any existing code.
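As a minimal sketch of the status quo (`render(width:)` is a hypothetical CGFloat-taking function, standing in for any real framework API):

```swift
import CoreGraphics

// Hypothetical API that takes CGFloat, standing in for any
// framework call such as a SwiftUI or Core Graphics method.
func render(width: CGFloat) { /* ... */ }

let scale = 1.5        // inferred as Double
let base = 320.0       // inferred as Double

// Today: an explicit conversion is required at the call site.
render(width: CGFloat(base * scale))

// Under the proposal, the conversion would be implicit:
// render(width: base * scale)
```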
With explicit or implicit conversions, if you have calculations that demand precision, you should use Double on a 32-bit platform to get the most precise results, and pass them into API requiring CGFloat only at the end.
Since Swift best practice is to use Double as the floating point type even on 32-bit platforms, using Double for intermediate calculations should generally be preferred.
That guidance should be documented, and following that rule of thumb should be fairly straightforward.
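Here is a minimal sketch of that rule of thumb, again using a hypothetical CGFloat-taking function as the API boundary:

```swift
import CoreGraphics

// Hypothetical CGFloat-taking API, used only for illustration.
func layout(height: CGFloat) { /* ... */ }

let points = 34564.43434         // Double
let conversionFactor = 0.0033304443203

// Do all intermediate math in Double, even on a platform where
// CGFloat is 32-bit, so no precision is lost mid-calculation...
let scaled = points * conversionFactor
let padded = scaled + 8.0

// ...and narrow to CGFloat only once, at the API boundary.
layout(height: CGFloat(padded))
```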
The following comments are about platforms where CGFloat is 32-bit:
In general, if a developer takes the simplest route and never annotates the type of a variable, any new floating-point variables or constants introduced will always be Double.
CGFloat properties accessed on types like CGPoint and CGSize will always get widened to Double when used in a calculation with a Double.
So, if a developer never explicitly creates an intermediate CGFloat variable, I believe all intermediate calculations end up as Double or widened CGFloat values.
(Please correct me if I'm wrong @xedin.)
```swift
let widthMultiplier = 1.25 // Double

// CGSize has CGFloat width and height
let size = CGSize(width: 230.0, height: 450.0)

// width is widened to Double
let adjustedWidth = size.width * widthMultiplier

let heightMultiplier: CGFloat = 0.2545878583

// height * heightMultiplier is calculated as CGFloats,
// then the result is widened to Double (is that correct?)
// Introducing the CGFloat variable heightMultiplier
// makes the calculation less precise.
let adjustedHeight = size.height * heightMultiplier
```
I believe the main way intermediate calculations lose precision is when variables are explicitly declared as CGFloat:
Variables declared as CGFloat are relatively easy to scan for, which makes it straightforward to spot accidental narrowing.
```swift
let input1 = 34564.43434
let input2 = 0.0033304443203

// Possible loss of precision from assigning the Double result
// to a CGFloat and then using that value in further calculation
let intermediateResult: CGFloat = input1 * input2

// Another potential loss of precision
let input3 = 0.33333
let intermediateResult2: CGFloat = intermediateResult / input3
```
I believe that if a developer never uses type annotations, the default Double type ends up being the calculation type, and with the proposed change narrowing never happens until necessary, usually at API boundaries.
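For contrast, here is the same calculation as above written without annotations; every intermediate stays Double and nothing narrows:

```swift
let input1 = 34564.43434
let input2 = 0.0033304443203
let input3 = 0.33333

// With no CGFloat annotations, both intermediates are inferred
// as Double, so the full-precision values flow through.
let intermediateResult = input1 * input2
let intermediateResult2 = intermediateResult / input3
```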
So, with these proposed changes, seeing a variable declared as CGFloat becomes a red flag that unintentional narrowing may be happening.
I believe that, with the proposal as-is, a developer has to go out of their way to unintentionally lose precision by introducing CGFloat intermediate variables.
I'm open to counter-examples, of course.
Is there a case where you can lose precision in this model on 32-bit without introducing a CGFloat intermediate variable?
I think that with this proposal, writing the 'natural' code without type annotations yields calculations at the higher precision automatically, and any CGFloat values that are introduced (which are already at the lower precision) automatically get promoted.
The loss of precision happens only when a Double must be converted to a CGFloat, which is typically at API boundaries.