So, I have looked at the actual PR, and we need to be careful about how we extrapolate type checker performance from the experiments on hand. What you've established with your testing is that the presence of this mechanism, when not actually in use, isn't a drag on performance. That's good, but we can't extrapolate from that to address the concerns about exponential behavior expressed, e.g., here and here. Those are more about worst-case behavior, e.g., when overloading and implicit conversions interact to blow up the potential solution space.
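To make that concern concrete, here is a rough sketch (not taken from the PR or the linked threads) of the shape of expression people worry about: every operand admits both its own type and a converted type, and the operator is overloaded, so the candidate space the solver has to explore can grow multiplicatively with expression length.

```swift
import CoreGraphics

// Illustrative only: with the CGFloat <-> Double implicit conversion,
// each operand of each `+` can be solved as either CGFloat or Double,
// and `+` itself has multiple overloads. Every subexpression is a choice
// point, so the number of candidate solutions the constraint solver may
// have to explore grows with the length of the expression.
func totalWidth(_ a: CGFloat, _ b: Double, _ c: CGFloat, _ d: Double) -> CGFloat {
    return a + b + c + d
}
```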
If I wanted to implement user-defined implicit conversions, I would start by subsuming the existing implicit conversions into a new implementation. While the `CGFloat <-> Double` conversion cannot be subsumed in both directions, one could pick a direction to move onto the new implementation (say, `CGFloat -> Double`, which is at least never lossy) and leave the other direction with the old implementation. Then, subsume other implicit conversions as appropriate... can `T -> T?` be handled by the model? What about the `inout`-to-pointer conversions? Doing this lets one validate both the design (does it subsume the bespoke conversions we have?) and the implementation (because now lots of code will go through the new path). Moreover, this is the kind of refactoring that could be merged into the Swift repository as soon as it's ready, rather than hanging out in an all-or-nothing feature branch.
Doug