Narrowing is no doubt a problem, but the question is whether it's worse, and by how much, to convert early (at the arguments) vs. later (at the result) when it comes to these types...
Yes, it is worse to narrow early as opposed to later. This is essentially analogous to how students are taught in grade school to round their answers only at the end, to the required number of significant figures. See also my contrived example above.
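In the same spirit, a minimal sketch (values contrived for illustration; assuming a 32-bit platform where CGFloat is Float):

```swift
let x = 1.0e20
let y = 1.0e20

// Narrow early: the intermediate product overflows Float's range.
let early = (Float(x) * Float(y)) / Float(1.0e35)   // +inf

// Narrow late: do the arithmetic in Double, convert once at the end.
let late = Float((x * y) / 1.0e35)                  // 100000.0
```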
That said, I agree with @scanon that implementing the rule "any number of widenings is better than one narrowing, and narrowing should happen as late as possible" should be done only if it doesn't require unreasonable heroics.
I would say that I am somewhat more concerned than @scanon about the numerical implications (but only to a degree), because while existing code is necessarily OK with Float precision and users can always take special care to use Double in new code, the nature of an implicit narrowing is that users writing new code that takes advantage of it may be unaware of narrowing they do not intend to make use of. Put another way, implicit narrowing requires folks who need to avoid its pitfalls to be careful, which by construction they may not be, because it requires no work to opt into an implicit feature but care to opt out.
It would be superior--again, only if reasonably feasible from an implementation perspective--if users were given instead an implicit conversion rule that's numerically more accurate with later narrowing (even if requiring more widening conversions) and the option of opting out by manually narrowing early to get the fewest conversions possible.
I don't have a problem with the fix-it: a big part of the rationale for requiring explicit conversions is that, if you want it, you're going to have to put parens around the thing you want to convert. If you choose to adopt the fix-it, it's plain as day what you're getting. My concern (again, bounded) isn't with narrowing when a user chooses it, it's with implicit narrowing when a user might not think about it.
What I'm trying to point out is that we should be suggesting widening conversions + narrowing the result instead, because that's better in general...
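For a mixed expression like `p * scale` (hypothetical names, with `p: CGFloat` and `scale: Double`), the two fix-it shapes would be:

```swift
// Narrow early: one conversion, but the arithmetic runs at
// Float precision on 32-bit platforms.
let a = p * CGFloat(scale)

// Widen + narrow the result: two conversions, but the arithmetic
// runs at Double precision and rounds only once.
let b = CGFloat(Double(p) * scale)
```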
I understand all the points regarding rounding and information loss if it starts early, but what I'm still wondering is whether it matters for these particular types used in graphics-related computations...
The question is, is it ever materially worse, in the context of graphics calculations on low-pixel (by modern standards) devices. Yes, of course there are many examples at the extreme outer bounds of double arithmetic that can be shown to be impacted by narrowing to float at different points in the calculation. But is there any risk of this actually occurring in practice?
And this is the heart of it: does the current regime of forcing people to insert conversions in their code ever catch such problems, even if they exist? Even without the fix-it, what is suggested is what everyone will do. The requirement to appease the typechecker adds zero value in practice because it is needed so constantly. It is akin to making array element access return an optional: it would be so commonplace to need to `!` it that the one time it matters, it is never going to be helpful.
And since (my theory is that) everyone just narrows constantly anyway, you would think there would be some anecdotal accounts of this constant arbitrary narrowing mattering. But I have yet to ever hear of any real-world cases.
We could suggest it, but it's not better: it's more precise, at the cost of performance. My claim is that more precise but slower is a better choice for implicit narrowing conversions, because they are implicit, not merely because they are narrowing conversions. The choice made by the compiler takes on significance because the implicitness makes it a default that requires no opt-in (that is, it requires no user awareness that any lossy conversion is even happening).
[Edit] And for the same reason, I don't think we can generalize from the experience with explicit narrowing that Ben describes below:
Ultimately, I would guess that the precision matters in some of the same ways that making CGFloat 64-bit mattered in the first place (for affine transforms, someone mentioned above?), but bounded by the fact that 32-bit platforms are limited, and that the effects would generally only be observed when data crosses API boundaries.
"Does it matter" is more of a values question, so I can't answer that. Obviously, we can't rule out that there may be observable effects on user code. How often and to what degree is not clear, but let's suppose it's rare and slight:
One might argue that we should do what we can to add the rule that's most likely to give optimal user results when we're going to the trouble of building it into the compiler. One might also say that an ergonomics improvement should really try to avoid "you're-holding-it-wrong" pitfalls where it can since, after all, it's about ergonomics.
On the other hand, one could argue that it's a special-case rule anyway, and further that it'll all be a thing of the past with enough time as CGFloat is phased out in the distant future. But since you're pitching to the community at large--that is, folks who don't have to do the work of implementation but may have to use the feature--of course I'd advocate for the former approach.
I think that's a fair point @xwu, implicitness does play a major role here. But even so, if users did/do use fix-its today, and all evidence I have seen points to that, then some project would have code like CGFloat(x) / y that behaves as +inf today. If we were to implement a migrator fix-it to remove CGFloat and Double initializers (since they would no longer be needed), in combination with narrowing the conversion last, then we'd end up with a change of behavior at runtime, since a is going to be 20 (see the sketch below). That may or may not be desirable, but it's a change nevertheless.
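Concretely, something like this (illustrative values; assuming a 32-bit platform where CGFloat is Float):

```swift
let x: Double = 1.0e39    // exceeds Float.greatestFiniteMagnitude
let y: CGFloat = 5.0e37

// Today, with the explicit fix-it form, x overflows when narrowed:
let a = CGFloat(x) / y    // +inf

// If the initializer were removed and narrowing happened last,
// the division would run in Double: 1e39 / 5e37 == 20.
// let a = x / y          // would be 20.0
```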
My perspective is that everything should behave as if it was still required to write an initializer, so it's consistent with fix-its we currently suggest and code written with explicit conversions. That's why I'm really interested to understand whether it really matters with these types and APIs...
As @scanon points out, the only way to opt out of any implicit narrowing would be to use these explicit initializers. I would be very alarmed at the prospect of rolling out a migrator that deletes existing (correctly compiling) explicit conversions, and making the implicit behavior match the fix-it should be a non-goal, because anything but a perfect migration tool may offer to delete both explicit early narrowing and explicit late narrowing.
The migrator pass is just hyperbole here, which I used to point out that if the same project has both CGFloat(x) / y and x / y, IMHO it would make sense for them to behave consistently.
I think @xedin's rule should work in practice for our case, since we already use Double everywhere, and this wouldn't really change anything, but @xwu's rule certainly sounds safer in general.
It's also worth noting that this implicit conversion wouldn't improve things much for types composed of Double/CGFloat. We'd be able to clean up some CGFloat(double) conversions, but we'd still be stuck with a lot of CGPoint(point) and CGSize(size) conversions.
No. This is an ABI-breaking change, as methods using CGFloat are not mangled the same way as methods using Double. And it's also a source-breaking change, as you can use overloading on CGFloat today.
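For instance, a minimal sketch (on Apple platforms):

```swift
import CoreGraphics

// Distinct overloads today, because CGFloat is a distinct type. If
// CGFloat were a typealias for Double, these declarations would
// collide (source break) and their symbols would mangle identically
// (ABI break).
func draw(_ value: CGFloat) { print("CGFloat overload") }
func draw(_ value: Double)  { print("Double overload") }
```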
I can't say that I understand why you argue that users would think that it would "make sense" to behave the same way rather than differently. Currently, and even with the adoption of what you propose here, we don't allow implicit narrowing conversions in the general case precisely because it would be problematic to assume that x / y implies narrowingConversion(x) / y. Doesn't that strongly suggest to users that it wouldn't make sense to assume x / y would imply CGFloat(x) / y, but rather that in allowing interchangeability we took some special care to help limit the pitfalls?
I have to agree with uliwitness. While I dislike needing to use CGFloat for UI items and originally thought this a good idea, this is not a Swift language issue. It's an Apple API problem.
Since CGFloat is also defined in the open-source Foundation project that is one of the Swift Core Libraries, it's part of the Swift ecosystem on all supported platforms.
I agree that for the general feature it wouldn't make sense, but we are not talking about generalizing anything here, just about allowing a special case for the Double/CGFloat types. My question is whether it would be surprising behavior for users if CGFloat(x) / y == x / y returned false? We are trying to position this as a typealias that lifts the requirement to write explicit initializers, and in practice they would still be written for arguments, so why not make that behave consistently?
We're specifically talking about the scenario where CGFloat can't be modeled as a type alias.
I'm not sure why CGFloat(x) / y == x / y returning false would be considered surprising behavior, but not x / Double(y) == x / y returning false. So, no, I do not think it would be surprising: one of these is necessarily false if there's rounding error from narrowing; better that it be the one that yields the poorer numerical result, even if only by a hair.
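A concrete sketch (illustrative values; assuming a 32-bit platform where CGFloat is Float):

```swift
let x: Double = 33554431   // 2^25 - 1: not exactly representable in Float
let y: CGFloat = 3

let early = CGFloat(x) / y          // x rounds to 33554432 first: 11184811
let late  = CGFloat(x / Double(y))  // divides in Double first:    11184810

// Whichever rule the implicit conversion adopts, the *other* explicit
// spelling necessarily compares unequal whenever rounding occurs.
```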
Moreover, heterogeneous comparison for floating-point types has long been planned (it hasn't been implemented because of various other reasons, and would currently cause problems in generic contexts because of a bug that is too involved to detail here), but once introduced, CGFloat(x) / y == x / y would not always hold true regardless of what you choose to do here.
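You can already see the underlying issue today by converting back for comparison:

```swift
let x = 0.1          // Double
let f = Float(x)     // narrowing rounds to 0.100000001490116...

// The narrowed value genuinely differs from the original, so a future
// heterogeneous == comparing exact mathematical values would report
// them unequal, just as this round-trip check does.
print(Double(f) == x)   // false
```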
Of course, we want everything to "behave consistently" as much as possible, but I think we should be careful with asking what we're prioritizing being consistent with. Obviously, an implicit narrowing cannot be consistent with no implicit narrowing: there is loss of precision somewhere. I think it's helpful to hear what @hisekaldma has been taking care to be consistent with:
Delaying implicit narrowing conversions as late as possible is most consistent with a document looking the same on different platforms if a user hasn't been careful to do the conversions manually. I would argue that this is the priority in terms of consistency here: ideally, with this feature, we help users write the best code with the least effort and not just to take the quickest path to making their code typecheck. We do not have to pessimize the proposed behavior here just because a previous fix-it didn't prioritize the same goals that we have.