Default Implementation in Protocols

One of the things I don't like about the current system is that protocol requirements with unconstrained default implementations (RWDI) aren't really "requirements" at all - you don't need to implement them. It's just an awkward way of saying that you may provide your own implementation, and that it will be dynamically dispatched.

At the same time, almost every time I browse these forums I see people having lots of difficulty understanding why some protocol methods are dynamically-dispatched and others not, and lots of people have difficulty reading protocol definitions because it isn't clear what you actually need to implement for conformance.

So I wonder if RWDIs really belong in the protocol body at all - perhaps they should just be standard extension methods with some kind of @overridable attribute (or maybe @dynamic?). This could open the door to constrained extension methods also being dynamically-dispatched, where currently you would need to introduce a new, refined protocol to do that.
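A rough sketch of what that pitched shape might look like (the @overridable attribute is hypothetical - nothing like it exists in Swift today):

```swift
// Hypothetical syntax sketch only - @overridable is the pitched attribute, not real Swift.
protocol Greeter {
    func name() -> String            // a genuine requirement: conformers must implement it
}

extension Greeter {
    // Today this would be statically dispatched and impossible to override.
    // Under the pitch, the attribute would turn it into a dynamically-dispatched
    // customization point without it ever appearing in the protocol body.
    @overridable func greeting() -> String {
        return "Hello, \(name())!"
    }
}
```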

There's another argument which is that everything in the protocol body is dynamically-dispatched, whether or not it has a default implementation. Still, we have the issue that it's difficult to determine which declarations you actually need to implement.

3 Likes

I agree that the dispatch part can be very confusing to people, and that it can be hard to know which parts of the protocol you need to implement. But this proposal seems unrelated to the former issue (I'm not sure your idea really helps here because it only further intertwines the methods with different dispatch), and the latter issue isn't really unique to protocols (e.g. it can be similarly confusing and complex to work out which parts of a superclass you should override in a subclass, taking into account how the various methods interact, whether overriding one method means you should also override another, etc). I think both of these issues would be better addressed directly by improvements in tooling and documentation.

1 Like

One day it would be a fun experiment to enable forced dynamic dispatch or message passing for everything, allowing Swift developers to see whether the trade-off between performance and convenience is worth it in their code... whether they are really benefiting from the extra performance enough to offset the productivity loss.

1 Like

Static dispatch is not solely an optimisation. In a similar way to final methods in classes, it also allows protocol authors to ensure that some of their code isn't overridden, guaranteeing that some invariants are preserved. And, also similarly, some people will reasonably argue that library authors shouldn't be allowed to remove that flexibility. I think it's fairly clear which side of that tradeoff Swift comes down on, at least historically.

1 Like

Fair enough, but it does not come for free. The biggest customer, realistically, is still apps rather than backend services or servers, and that is an area that has benefited from the flexibility that message passing or dynamic dispatch provides (and from the easier mental model for developers).

Under that scenario, an "inverse C++"-like choice - easily, explicitly, and clearly marking what code gets statically dispatched (the opposite of adding virtual here and there) - would allow library authors to single out the specific cases where they really, really need to take flexibility away.

1 Like

Default implementations of individual methods are actually quite a negative thing: as @jrose points out, they make it hard to understand which methods you should implement, etc. And as @Ben_Cohen points out, it's actually much nicer to put code somewhere other than the public interface (.h files, anyone? haha).

A complete default implementation could be useful, but better would be multiple complete implementations of a protocol, that you can choose from when implementing it, and one of those designated as default that will be chosen if you don't explicitly choose one. This is called composition, I'm sure most of you know it and use it already, but you have to be disciplined, and write boilerplate forwarding methods a lot.
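A rough sketch of the composition-plus-forwarding pattern being described, with made-up types (Engine, DefaultEngine, and Car are illustrative names only):

```swift
protocol Engine {
    func start() -> String
}

// One complete implementation of the protocol, usable as a component.
struct DefaultEngine: Engine {
    func start() -> String { return "vroom" }
}

struct Car: Engine {
    // Choose an implementation component...
    private let engine: Engine = DefaultEngine()

    // ...and hand-write the forwarding boilerplate that the disciplined
    // version of this pattern currently requires.
    func start() -> String { return engine.start() }
}
```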

A much better solution would be to declare a component that implements a protocol completely. Then compose your objects from these components by declaring which implementation to use along with the protocol. If protocols are broken up nicely, you'd be able to write an entirely original class/struct just by deciding what implementations it's made up of and perhaps setting some property-values for those.

This would also solve other problems for which other solutions are being proposed, which overall is cluttering up the language. For example Un-requiring required initializers would just use the default component implementation instead of a clumsy (sorry @Joe_Groff) work-around that breaks fundamental OO principles.

There are benefits to the explicit mixin/component approach (and based on experience with Swift's overload-resolution-through extensions approach, I'd be tempted to go in that direction given the chance to do it all over again), but you would still have the same problems with class variance interacting with protocol conformance, so you'd still run into the same issues with initializer requirements that are required when you don't really want them to be.

1 Like

Isn't that a design problem with the object hierarchy though, rather than the language?

There is an elephant lurking in a corner of the room, and the actual topic of discussion can't really be brought to a conclusion without dealing with the elephant first.

The "dispatch part" isn't actually very confusing at all. Instead, it's almost completely unknown. I wrote Swift code for about 3 years without having any idea that static or dynamic dispatch of protocol methods was something I had to take into account. It was only when someone in these forums happened to spell it out for me (as part of some other explanation) that I realized that dispatch was different between requirements and extension methods.

The linkage between the requirement (non-extension) section of a protocol declaration and dynamic dispatch is documented nowhere. It's certainly not in the official Swift documentation.

At the same time, there's no useful syntactic marker that might make someone think about the problem.

I'd be happy to be proved wrong about this, but basically, I think, the vast majority of Swift developers outside this forum community have no reason to think about dispatch of protocol methods.

How about we fix that problem first?
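For anyone encountering this for the first time, here is a self-contained sketch of the behaviour in question (all names are illustrative):

```swift
protocol Animal {
    func sound() -> String               // requirement: dynamically dispatched
}

extension Animal {
    func sound() -> String { return "..." }          // default for the requirement
    func describe() -> String { return "an animal" } // extension-only: statically dispatched
}

struct Dog: Animal {
    func sound() -> String { return "woof" }     // overrides the default
    func describe() -> String { return "a dog" } // only shadows the extension method
}

let dog: Animal = Dog()
dog.sound()      // "woof" - dispatched dynamically through the conformance
dog.describe()   // "an animal" - resolved statically from the Animal extension
Dog().describe() // "a dog" - resolved statically on the concrete type
```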

14 Likes

I get that it is a source of confusion, but it has never seemed 'wrong' to me. It doesn't seem correct to think you can 'extend' a protocol's requirements except by creating a subtype or making a breaking change, in the same way you can't 'extend' a contract between people except by agreeing on an addendum or drafting a new contract.

The protocol describes the relationship both in documentation around behavior and in code by specifying methods that will be used. The extensions give recommendations around how to implement the protocol/contract via default methods, and add robustness around the use of a protocol. This is both incredibly valuable and distinct from the protocol contract.

2 Likes

Sorry if I wasn't clear. What isn't documented is that protocol requirement methods are dynamically dispatched, and extension methods are statically dispatched.

Only for the customization points. Methods declared only in the extension aren't customization points, and can't be overridden.

1 Like

They aren't part of the protocol contract at all. They are recommendations on how to implement the protocol/contract.

Sequence for example has several requirements, but several shipped extensions go and say "I can fulfill this requirement for you based on a correct implementation of makeIterator()". This does not in any way alleviate the requirements of the protocol.

Protocols and extensions are entirely different things, way more so than, say, extensions to a struct/class/enum. It is almost a shame that both concrete-type and protocol extensions are given the same name.

In a scenario like this:

protocol P {
  func a() -> Int
  func b() -> Int
}
extension P {
  func b() -> Int {
    return c()
  }
  func c() -> Int {
    return 2
  }
}

The function c() is declared only in the extension, and it's not a requirement or a customization point. Nor is it any kind of "recommendation", since its implementation can't be replaced or overridden. It's the implementation.

(It can however be statically shadowed, which is why a developer needs to know that it's statically dispatched.)
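To make the shadowing pitfall concrete, here is a conformance to the P above (T is an illustrative name):

```swift
struct T: P {
    func a() -> Int { return 1 }
    // b() is not implemented here, so the extension's default is the witness.
    func c() -> Int { return 99 }   // shadows the extension's c(); does not override it
}

let t = T()
t.c()          // 99 - resolved statically on the concrete type T
(t as P).c()   // 2  - through the protocol type, the extension's c() wins
t.b()          // 2  - the default b() calls the extension's c(), never T's
```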

I don't think this problem can be fixed without breaking source compatibility. If I had a completely free hand to change Swift, I'd seriously consider requiring final on (non-default) protocol extension methods, but that ship has sailed.

I want to go on record saying that this is not my own view. Default implementations were a huge boost in Swift 2: they made it possible to write protocols that both required minimal effort to conform to and also had the flexibility necessary for good performance in specialized cases. I do think we have some presentation issues right now and I'm not sure how to solve them, but I will definitely take those issues over not having default implementations at all. (It hurts every time I run into it in Objective-C these days.)

I don't have a good answer for this because, as has been noted, sometimes there's exactly one default implementation that always works, and sometimes there isn't; in the case where there isn't, there might be one default implementation but it's constrained, or there might be more than one with different constraints.

"Components" are an interesting idea, but I'd want to see how this shakes out in practice. I'd think it'd be roughly one component per constrained extension, and that sounds like it could be a lot. I don't think this solves anything about required initializers, though.

9 Likes

The way I (and I think most of us) read protocol definitions is like this: the protocol body lists a set of requirements - that is to say, the maximal set of things a conformer must implement. Extensions may reduce that set based on additional knowledge about the conformer (e.g. you don't need to implement Collection.subscript(Range<Index>) if you choose Slice<T> as your SubSequence, because the standard library ships an implementation which knows how to construct a Slice for any Collection).
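For instance, a minimal Collection conformance never has to write the range subscript itself (Evens is a made-up type for illustration):

```swift
// Evens models the even numbers 0, 2, 4, ... 18.
struct Evens: Collection {
    var startIndex: Int { return 0 }
    var endIndex: Int { return 10 }
    func index(after i: Int) -> Int { return i + 1 }
    subscript(position: Int) -> Int { return position * 2 }
    // SubSequence defaults to Slice<Evens>, so the standard library's
    // subscript(Range<Index>) implementation applies automatically.
}

let evens = Evens()
Array(evens[2..<5])   // [4, 6, 8], via the stdlib's default slicing subscript
```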

If the default implementation exists without any constraints at all, it isn't part of that maximal set. For example, regardless of your conforming type, and whichever associatedtypes you choose, the standard library always knows how to implement Sequence.map().
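That's because map can be composed purely from the protocol's own requirements. A sketch (myMap is a hypothetical name to avoid colliding with the real map):

```swift
extension Sequence {
    // An unconstrained default like this works for every conforming type,
    // because it only relies on Sequence's own requirements.
    func myMap<T>(_ transform: (Element) throws -> T) rethrows -> [T] {
        var result: [T] = []
        result.reserveCapacity(underestimatedCount)
        for element in self {
            result.append(try transform(element))
        }
        return result
    }
}
```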

In fact, the way I learned about the protocol dispatching behaviour is that I once asked why map was listed as a requirement and was told that it was for performance, not because conforming types were advised to roll their own.

So bringing this back on-topic: rather than bringing unconstrained default implementations in to the protocol body, I'd rather support the opposite: some way to push these methods out to extensions while preserving their performance and overridability. If, at the same time, we can find a way to make protocol dispatching behaviour more obvious, then I think the language overall will be better for it.

3 Likes

Okay, that's one interpretation. It's definitely never been my concept of what the initial protocol definition is, at least since protocols gained the ability to have default implementations in Swift 2. Protocols and extensions have essentially the same syntax as concrete types, and therefore I think of them as defining the API for working with the protocol as a type (just like concrete types and their extensions). If you just want to know the minimum set of things you have to implement in order to conform to the protocol, then that seems easily addressed in the documentation and tooling (e.g. the existing feature that offers to add stubs for protocol methods you need to implement). Your proposal would also be source breaking if mandatory (and probably not very useful if optional), while this pitch is additive and would make the language more uniform.

As it happens, there really is no good performance reason for map to be a customization point, and I'm in the process of writing a post about how it shouldn't be in Swift 5 :slight_smile:

(as well as first, which, even worse, shouldn't need to be one and will cause problems for collections of move-only types in the future if it is)

6 Likes

Just for clarification - how do you mean work with the protocol as a type? Do you mean work with implementations of that protocol?

It is a common pattern in languages like Java to have interfaces which have static factory methods and state for getting an implementation of the interface. This is not possible with Swift protocols because a protocol only declares the contract for implementing types, and is not usable outside the context of implementing types and instances.

Not the minimum set - the protocol body tells you the maximum set, and constrained extensions provide defaults which narrow it down. Also, it's not limited to reading - typically when defining a new protocol, you do start with the minimum set of requirements and write additional functionality by composing those features (e.g. map is possible because you can iterate the Sequence's elements).

Better tooling is also definitely needed. I don't think reading the text definition and fixing compiler errors really cuts it - it would be cool if some future version of Xcode included a kind of live view of the protocol requirements which added/removed unimplemented requirements as you write a conformance.

That's cool. IIRC, the exact reasoning was that a call to map should result in a single dynamically-dispatched method invocation. Otherwise the internal calls to underestimatedCount and makeIterator would each result in a dynamic dispatch.

But yeah, as you say - you're not really supposed to override it in your conformances. It was basically just a performance/implementation artefact - reducing two dynamic dispatches to one.
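That reasoning can be sketched like this (mapSketch is a hypothetical stand-in; in unspecialized generic code, calls to requirements go through the witness table):

```swift
// If map lived only in an extension, an unspecialized generic call
// would hit the witness table twice for its internal requirement calls...
func mapSketch<S: Sequence>(_ s: S, _ f: (S.Element) -> Int) -> [Int] {
    var result: [Int] = []
    result.reserveCapacity(s.underestimatedCount)  // witness-table call #1
    var iterator = s.makeIterator()                // witness-table call #2
    while let element = iterator.next() {
        result.append(f(element))
    }
    return result
}
// ...whereas making map itself a requirement lets the whole operation go
// through a single dynamically-dispatched entry to a specialized implementation.
```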