Think hard: are you sure there aren't any other such tiny details you'd like to ignore? Although a mental model that fails to account for all those things is not of much use to me, you do mention a few things that I think I can usefully engage with:
> When a conformance `MyType: MyProtocol` is declared, the compiler goes through each requirement and asks what function would be called, given what is known at that point in the program… if there are multiple overloads of `doSomething(with:)`, the compiler picks the best one visible,
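Concretely, that model looks something like this (a minimal sketch; the type and overloads here are hypothetical, not from the quoted post):

```swift
protocol MyProtocol {
    func doSomething(with value: Int)
}

struct MyType {
    // Two overloads are visible when the conformance is checked.
    func doSomething(with value: Int) { print("Int overload") }
    func doSomething(with value: Any) { print("Any overload") }
}

// At this point the compiler walks each requirement and picks the
// best visible overload; the exact Int match becomes the witness.
extension MyType: MyProtocol {}

let instance: any MyProtocol = MyType()
instance.doSomething(with: 42) // "Int overload", via the witness chosen above
```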
The mindset that normal overload resolution can be used to decide protocol requirement satisfaction leads inevitably to many of the problems we have today. It leaves us in a terrible bind:
- We don't want to do it at runtime based on full dynamic type information, because:
  - It would mean encoding way too much information into binaries that is currently known only during compilation.
  - It would consider way too many candidates for each requirement, some of which might not have been intended to satisfy requirements.
  - It would therefore produce hard-to-control, hard-to-reason-about nonuniformity in how a given piece of generic code works.
  - It would admit ambiguity at runtime into the system (though I think that door is somewhat open already for other reasons¹) that, using overloading as our model, leaves us in the unenviable position of trying to express a compilation error at runtime.
- But if we do it at compile time, we end up with:
  - Inconsistency between what a protocol requirement does when the static type is known and when it is not, which at least appears to be different from the inconsistency you'd expect due to simple static overload resolution (see the sketch after this list).
  - The inability to express basic ideas from generic programming.
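The first compile-time point is easiest to see with the classic example: a member declared only in a protocol extension (not a requirement) and shadowed by a concrete method:

```swift
protocol P {}

extension P {
    // Not a protocol requirement, so calls to it dispatch statically.
    func describe() -> String { "protocol extension" }
}

struct S: P {
    func describe() -> String { "concrete type" }
}

func generic<T: P>(_ x: T) -> String { x.describe() }

let s = S()
print(s.describe()) // "concrete type": the static type is fully known
print(generic(s))   // "protocol extension": only T: P is known, and since
                    // describe() is not a requirement, there is no witness
```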
IMO we need rules for requirement satisfaction that:
- Don't depend on knowing everything about how a particular syntactic construct would statically dispatch based on the fully-elaborated static types.
- Can be efficiently encoded into binaries (unless we find some way to ensure most generic code is monomorphizable).
- May have a lot in common with overload resolution rules, but stand on their own.
- Come with a strategy for runtime resolution/prevention of ambiguity, because AFAICS it is otherwise inevitable.
It would probably help a lot to have a way to be explicit about which definitions can satisfy which protocol requirements (what I think you mean by “link[ing]… operations with their names”), so we don't have to think in terms of open-ended overload resolution.
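For example, that explicitness might be spelled as an attribute on the satisfying declaration. The `@satisfies` attribute below is purely hypothetical; nothing like it exists in Swift today:

```swift
protocol Drawable {
    func draw()
}

struct Circle: Drawable {
    // Hypothetical: explicitly declare that this definition, and only
    // this one, satisfies the Drawable.draw requirement.
    @satisfies(Drawable.draw)
    func draw() { /* ... */ }

    // Just an overload; under this scheme it could never accidentally
    // become the witness for Drawable.draw.
    func draw(scale: Double) { /* ... */ }
}
```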
But your original point stands.

> Why is it hard for users to think about generic dispatching? I'd guess that it's the same reason many users have trouble with this code:
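That code isn't reproduced above, but a sketch in the same spirit, pure overloading with no protocol requirements anywhere, would be something like:

```swift
func show(_ x: Int) { print("Int") }
func show<T>(_ x: T) { print("generic") }

func callShow<T>(_ x: T) { show(x) }

show(5)      // "Int": the concrete overload is the better match
callShow(5)  // "generic": inside callShow the type of x is an opaque T,
             // so the generic overload is the only viable candidate
```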
Saying it's the same as the reason for something else does little to actually answer the question, and I think your example gives those of us struggling with how things work too little credit for understanding the basic mechanics of protocols. In that case, there are no protocol requirements involved whatsoever; it's pure overloading, so it's not hard to see why the code acts as it does.
> For better or worse, most people's model of what generic code ought to do seems to match up with what C++ actually does: it should pick the best option based on the value you pass. It's only us Swift implementers who can look at that and say "that's overload resolution at run time", and shrink away because we've designed a language where overload resolution is non-trivial.
Well, that's not my mental model. C++ does generic dispatch via unrestricted static overload resolution in the presence of all the knowledge the compiler has, including the fully elaborated static types. Overload resolution in Swift is still less complicated than in C++ (at least I hope so; it's hard to tell for sure because Swift's rules aren't documented!), so I don't think that really explains why we shrink away. The reasons we don't do generic dispatch via overload resolution in Swift are:
- From an implementation POV, unlike for C++, the compilation process has ended when Swift would need to do it, and we can't reasonably encode all that data in the binary.
- From a usability POV, as you say, it's too ad-hoc, and thus hard to reason about. This is part of what makes C++ hard to use.