I want to emphasize this. Personally, I think the additional axis of exponential search space is an unacceptable consequence of user-defined implicit conversions. It's important to realize that arbitrary implicit conversions would undermine many of the pruning heuristics that are in place in the constraint solver today to make simple cases fast.
For example, if there's no possibility that implicit conversions can add conformances to an argument (which would be possible with an implicit conversion to a value type with different conformances, and is very rare today), conformance constraints on a parameter type can be transferred to the argument type directly, which often allows the constraint solver to prune search paths early. This heuristic lets the solver fail, before any generic argument type is attempted, for generic overloads that "obviously" aren't satisfied by the given argument types. Generic arguments are only bound once the solver has attempted all disjunctions, so in a case with several generic operators chained together, e.g. `a + b + c + d`, the solver has to first bind each `+` to an overload before any generic arguments are attempted, because only then does it have the complete set of possible bindings for those generic arguments. In the worst case, the solver attempts all permutations (with repetition) of the overloads. Without this pruning heuristic, expressions that chain generic operators together (among other kinds of expressions) are subject to worst-case exponential behavior.
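To make the heuristic concrete, here's a minimal sketch with a made-up operator `<+>` and made-up overloads (nothing here is the actual standard library):

```swift
infix operator <+>: AdditionPrecedence

// A generic overload that requires its type parameter to conform to Numeric.
func <+> <T: Numeric>(lhs: T, rhs: T) -> T { lhs + rhs }

// A concrete overload on String.
func <+> (lhs: String, rhs: String) -> String { lhs + rhs }

let s = "a" <+> "b" <+> "c"
// With no implicit conversions in the language, a String argument can never
// gain a Numeric conformance, so the solver can transfer the 'T: Numeric'
// requirement onto the argument type and discard the generic overload for
// each '<+>' immediately, instead of carrying it through every permutation
// of overload choices.
```

If an implicit conversion could turn `String` into some `Numeric`-conforming value type, that early rejection would no longer be sound.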
This is just one example; there are others. Another big issue is that implicit conversions would introduce more ambiguities, and to resolve an ambiguity, you need to write explicit type information anyway.
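For a sense of what that looks like in practice today, here's a small illustration using return-type-only overloads (the `parse` functions are invented for the example); user-defined implicit conversions would multiply the number of situations where the only fix is an explicit annotation:

```swift
func parse(_ text: String) -> Int? { Int(text) }
func parse(_ text: String) -> Double? { Double(text) }

// let n = parse("42")       // error: ambiguous use of 'parse'
let n: Int? = parse("42")    // explicit type information resolves the ambiguity
```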
Perhaps there are ways to make the constraint solver's performance more resilient to implicit conversions, but there is a lot of engineering work to be done before we get there. I don't think implicit conversions are feasible to implement with tolerable, let alone good, type checker performance today.