Protocol extensions inheriting protocols

Hi S/E,

I’ve prepared a small patch to the Swift compiler which allows you to specify a protocol an extension intends to conform to when creating a protocol extension. This means all types that adopt the protocol being extended also conform to the protocol being inherited from (in fact it works by adding the inherited protocol to the list of protocols the extended protocol inherits from during validation of the extension). As an optional feature, the code checks that the protocol being extended conforms to the protocol being adopted, rather than giving an error about a type adopting the protocol not conforming.

The use case where this came up was the following:

extension FixedWidthInteger: ExpressibleByUnicodeScalarLiteral {
  public init(unicodeScalarLiteral value: Unicode.Scalar) {
    self = Self(value.value)
  }
}

After this, all FixedWidthInteger types can be "expressed by" string literals that are single Unicode scalars.
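For contrast, here is a minimal sketch of what can be done on a stock toolchain today, conforming one concrete type at a time by hand (UInt8 is just an example type chosen for illustration):

```swift
// Today's workaround: conform each concrete integer type
// individually, rather than the FixedWidthInteger protocol itself.
extension UInt8: ExpressibleByUnicodeScalarLiteral {
    public init(unicodeScalarLiteral value: Unicode.Scalar) {
        // value.value is the scalar's UInt32 code point; this traps
        // if it doesn't fit in the target type.
        self = UInt8(value.value)
    }
}

let newline: UInt8 = "\n"
print(newline) // 10
```

The proposed patch would make this single-type boilerplate unnecessary by attaching the conformance to the protocol extension itself.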

Does anybody see any pitfalls with this suggestion? It feels more like a bug fix for a combination of two existing features. Will it need to go through a full evolution review?


The notation ProtocolA : ProtocolB means that one refines the other, which this does not do.

I think the feature is fine, but it would be a close cousin of, if not just a specific use of, parameterized extensions:

extension<T: ProtocolA> T: ProtocolB

Is an extension with inheritance really a “narrowing of” more than an “add conformance to”, as it is for a type? What are the advantages of the syntax

extension<T: ProtocolA> T: ProtocolB

over

extension ProtocolA: ProtocolB

? Why involve generic syntax?

@johnno1962 does this basically make your patch equivalent to the following idea?

If we had to use any, some and potentially meta explicitly, then it feels like you're proposing to allow writing extension some P : Q { ... }. Am I correct?

I have long wished for the ability to inject a new protocol underneath an existing one, so +1 if you can actually get it to work.

I cannot find it anymore, but I thought there was discussion about this a long time ago and this was found to be way more complicated than it looks on the surface. For example:


public struct A : FixedWidthInteger { /* ... */ }


extension FixedWidthInteger: ExpressibleByUnicodeScalarLiteral {
    public init(unicodeScalarLiteral value: Unicode.Scalar) { /* ... */ }
}

public func use<T>(_ value: T) -> T
    where T : FixedWidthInteger {
        return value + "1" // ← Uses ExpressibleByUnicodeScalarLiteral
}


import ModuleA
import ModuleB

let a = A(/* ... */)
print(use(a)) // ← Indirectly uses ExpressibleByUnicodeScalarLiteral,
// but where does the conformance come from?
// ModuleA didn’t have it. ModuleB didn’t have it.
// Does it live in ModuleC, synthesized by the compiler?
// Where does the compiler get the implementation from?
// And what about the fact that this would be conforming
// a type we don’t own to a protocol we don’t own?

How does your design handle these sorts of things?

That appears to be the semantics implied by the description, but the syntax used isn't able to distinguish conforming types from the existential. The syntax extension FixedWidthInteger: ExpressibleByUnicodeScalarLiteral implies that all conforming types and the existential are given a conformance. One of the nice things about some / any / meta is that it becomes possible to express the intended semantics (regardless of what the intent is).
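A sketch of the two spellings this distinction would permit, assuming the explicit some/any notation under discussion (neither line compiles in any Swift today; this is hypothetical syntax shown only for contrast):

```swift
// Hypothetical syntax only:
// extension some P: Q { ... } // every concrete type conforming to P gains Q
// extension any P: Q { ... }  // the existential box itself gains Q
```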

I can’t comment on the subtleties being discussed here. I’m well behind on opaque types. All I’m proposing is that a protocol extension can be combined with a conformance, and this can be achieved using a bit of a compiler hack: internally adding the conformance to the protocol being extended. If this is never going to work or isn’t even desirable, let me know.

After a seven month hiatus exploring things like SwiftUI and multi-threaded development, I’ve had some time to take a second run at this concept over the vacation break and have filed a new PR. A 5.2 toolchain with a fairly complete implementation is available if you want to explore the possibilities.

As a potential new feature for Swift it's both powerful and easy to understand, though the full ramifications take a while to appreciate. It’s the proverbial double-edged sword: not only is behaviour from all inherited protocols adopted by the protocol being extended, but all protocols and nominal types (classes, structs) that adopt the protocol are also extended by inference. I’m sure in the wrong hands that will lead to some pretty exasperating code, but rest assured you can still alt-click in Xcode to get to the actual implementation.

One limitation of the toolchain is that while it works across modules, it cannot currently access-control the adoption of inherited protocols, so the extension must be public to indicate this. Right now I’m looking for more test data and crash reports with code you would expect to work, so I can round out the implementation as a proof of concept with a view to eventually putting it up for review.

I’ve made a small SPM app available with some examples if you want to kick the tires with the toolchain.

$ curl | tar xfvz - -C ~
$ git clone
$ cd ExtApp
$ ~/Library/Developer/Toolchains/swift-LOCAL-2020-01-17-a.xctoolchain/usr/bin/swift build
$ .build/debug/ExtApp

I’ve been able to move this idea forward to reach its theoretical conclusion. I’ve raised a new PR and, this time, an evolution proposal.

This final version has a limitation (which I’ll discuss below) but with the prototype toolchain it is possible to do pretty much anything in the way of retroactively refining protocol conformances as mentioned in the generics manifesto.

To understand the limitation it is necessary to recap how protocols are represented at run time and what these “witness tables” are.

An existential container is a five-word (64-bit words) struct, in the C sense, that represents a reference to a nominal type (class or struct) conforming to a protocol; i.e. if you define a function that takes a protocol as an argument, this container is what is actually passed to the function. A "witness table” (a pointer to which is contained in this structure) is a minimal representation of the information needed at runtime to dispatch calls to protocol members onto the nominal type’s (or protocol extension’s) implementation. In the prototype toolchain, witness tables have the following slightly modified structure:

	* associated type entries
	* pointers to the witness tables of directly inherited protocols
	* pointer to “FTW” member function thunks of the original protocol for the nominal
	* witness tables of "extended conformances" in order of the module that added the conformance.
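As a cross-check of the container layout described above, the five-word size of an existential is observable on a stock toolchain (the 40-byte figure assumes a 64-bit platform):

```swift
// A protocol existential is five 64-bit words: a three-word inline
// value buffer, a type metadata pointer, and one witness-table
// pointer per declared conformance.
protocol Empty {}
print(MemoryLayout<any Empty>.size) // 40 on a 64-bit platform
```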

The last entries are where the witness table has been extended; they are at the end so the table stays compatible with functions declared in modules that are unaware of any extended conformances. In concrete terms, say you have the following code in “ModuleA":

public struct A {
    public init() {}
    let string = "Hello Swift"
}

public protocol P {
    func foo()
}

extension A: P {
    public func foo() { /* ... */ }
}

With the PR, it would now be possible to put the following in another ModuleQ:

import ModuleA

public protocol Q {
    func qoo()
}

public extension P: Q {
    func qoo() { /* ... */ }
}

Meanwhile back in the project's main module you can now write:

import ModuleA
import ModuleQ

func something() {
    A().qoo() // A picks up qoo() via the extension in ModuleQ
}

This is the essence of the idea. New extended witness tables are generated for all nominal types referred to in the source file for protocol P; all well and good.

Without wanting to over-emphasise a shortcoming of this model, the worst case is where there is a function such as this in ModuleA:

public func ap() -> P {
    return A()
}

Back in the main module the following will compile no problem:

A().qoo()
ap().qoo()
The first call is fine, but the second will crash, as the existential container returned by the function ap() in ModuleA contains a version of the witness table that does not contain the extended conformance to protocol Q. This is a reliable crash, like a force unwrap, and not exactly "undefined behaviour", but it doesn't give any useful diagnostic. Whether in practice this amounts to a common case is up for discussion, but that’s the situation. Provided the function ap() is defined in a module that imported ModuleQ, there isn't a problem.

Meanwhile, as these ad-hoc extended witness tables also emit protocol conformance descriptors, dynamic casting works without having to change the existing runtime. For example, if the following is called with an instance of A():

func anything(a: Any) {
    (a as? Q)?.qoo()
}

This is about as far as I can take the idea, and for me it is worth a punt given its power, and given that the changes to the compiler are relatively minor and strictly additive. There are no ABI issues I’m aware of.
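For reference, the cast pattern above already works on a stock toolchain when the conformance is an ordinary extension-declared one; the names below mirror the thread's example but the snippet is self-contained:

```swift
// Stock-Swift analog: 'as?' consults the conformance descriptor
// emitted for the extension-declared conformance of A to Q.
protocol Q { func qoo() -> String }
struct A {}
extension A: Q { func qoo() -> String { "qoo" } }

func anything(_ a: Any) -> String? {
    (a as? Q)?.qoo()
}

print(anything(A()) ?? "no conformance") // qoo
```

The prototype's contribution is that the same machinery keeps working when the conformance was added by a protocol extension in a third module.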


This is interesting work, but I think it's important to clarify why this is happening here:

There is a difference between retroactively conforming a protocol P to another protocol Q, and conforming all types which conform to protocol P to another protocol Q. This is the point I raised above many months ago.

What you have done here is implement the second feature, which is both possible to implement and adds very interesting possibilities, but as though you were implementing the first feature, which (as the Generics Manifesto documents) is impossible to implement both completely and efficiently, and you used its syntax extension P: Q.

The second feature is a specific case of parameterized extensions (and called out in the draft proposal specifically as a future direction) that would be spelled extension<T> T: Q where T: P (or more succinctly, extension<T: P> T: Q). When we bring over the simplified spelling of opaque types to generics, then this could also be spelled extension some P: Q.

When you clarify conceptually the distinction between the two features, it will become obvious why this crashes: recall that the existential type P doesn't actually conform to P, so after implementing the second feature, it likewise does not conform to Q and doesn't have its default implementations. If we ever get around to clarifying the distinction between protocols and existential types by spelling the latter any P as some have suggested, then this would become immediately obvious: extension some P: Q would not be the same thing as extension any P: Q.
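A small stock-Swift illustration of the lack of self-conformance (the names P, S and fresh are chosen for this example): a protocol with an initializer requirement has no self-conformance, so the existential cannot stand in where a conforming type is required.

```swift
// 'any P' does not conform to P: there is no witness for init()
// on the existential box itself, only on concrete conforming types.
protocol P { init() }
struct S: P { init() {} }

func fresh<T: P>(_ type: T.Type) -> T { type.init() }

let s = fresh(S.self)             // fine: S conforms to P
// let e = fresh((any P).self)    // error: 'any P' cannot conform to 'P'
print(type(of: s)) // S
```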

It would be wonderful to have the feature you've actually implemented in the language. Since it is a specific case of parameterized extensions, and @Alejandro has already written a draft implementation and draft proposal of that feature but not extended it to include the feature you've worked on, combining forces would produce a consistent and usable result without the problem you recount here.

Bravo on sticking to the effort!


I have indeed sought to implement the second feature, as my naive intuition is that if P conforms to Q (retroactively or otherwise) and type X conforms to P, that would imply X should conform to Q as a result. I’m trying to demonstrate that it is possible and efficient to implement this, but with an innate gotcha. @Alejandro seems to have the bit between his teeth and is running with it, and if it is possible to reuse some of the PR to complete his proposal that would be great, but I’m not sure how that would solve the crashing problem when conformances span modules.
