Automatic Mutable Pointer Conversion

I want to approach my strong opposition to the idea of generalizing support for user-defined conversions (especially chains of user-defined conversions) from three angles:

  1. Algorithmic Complexity

The constraint solver embedded at the heart of the expression checker is already bound to a worst-case exponential-time decision procedure precisely because of disjunctive forms like the kind introduced here. At each call site without a direct match, we would now have to consider the complete set of applicable user-defined conversions, split any existing disjuncts, and solve. For chains of user-defined conversions (rather than a C++-style "one conversion" rule), the problem becomes literally exponentially harder: the disjuncts themselves become subject to further disjunctions as chains of conversions are traversed. Disqualifying cyclic chains of conversions is not going to improve any of this; we must still detect those cycles anyway.

A language feature that serves to complicate the typing rules in this manner must justify this leap in algorithmic complexity with a similar leap in functionality and quality of life improvements. I simply do not see a path forward where that standard applies to this feature.

  2. Ergonomics over Function

What do we actually save with a user-defined conversion? At the point of definition, you are still writing as much code as you would for a (convenience) init. At the point of use, you save spelling out a (set of) constructor forms. A commonly cited example is numeric types, where value-preserving conversions to higher bit widths are written over and over again to keep the type checker happy. Such conversions have a place in the language: they’re safe, they’re pure, they’re common, they’re noisy. But user-defined conversions can be none of these in practice. I’ve seen conversion operators that allow accidental mixed comparisons of typed data, operators that execute effects as part of a DSL, and unsafe operators added purely for convenience. These arbitrary effects are silently inserted at the point of use.

We have to remember, too, that the cost of convenience is steep:

  3. Impact on Readability over Writability

Implicit conversions of all kinds absolutely destroy readability. C++, with all its restrictions and formalisms, is an exemplar here: I cannot open C++ code outside of an IDE and have any idea about how the flow of data is derived from the flow of types. Without a full picture of all user-defined conversions in scope, I have no hope of being able to try. I write type checkers as a hobby, and I cannot remember the rules backing C++’s conversions and their cross-cutting interactions with the rest of the subsystems in their type system. I do not wish for Swift users to have to become human type checkers either.

What’s worse, a number of extremely surprising edge cases in the semantics of the language, with respect to its interactions with conversions, become not just extremely common but sources of actual peril. Consider my favorite example: what does this program do?

#include <string>
#include <vector>

int main() {
    const char *x = "Hello world!";
    std::vector<std::string> v{{x, x}};
}

x is treated as an iterator, not as a candidate for conversion to std::string: the begin/end iterator-pair constructor is selected, and the vector is constructed from a range that points into garbage.
