Excessive protocols in my App

In my app I'm using protocols so that I can mock dependencies in unit tests. Something like this:

protocol Router {
   func showSuccess()
}

final class RouterImpl: Router {
   func showSuccess() {}
}

And in my tests

final class RouterMock: Router {
   func showSuccess() {}
}

I have tonnes of protocol & implementation pairs that exist just for the sake of unit tests.

I was wondering whether we could do something like this:
In release mode, all these protocols that have a single implementation get trimmed out of the binary.

And in the case of unit tests (and possibly debug builds), they stay in the code.

We have something similar in Objective-C: objc_non_runtime_protocol

https://clang.llvm.org/docs/AttributeReference.html#objc-non-runtime-protocol


Hi @Inder,

Is it performance and/or binary size you are concerned about, as you specifically mention release builds?

Performance
I'm not sure there is an impact here: AFAIK protocols don't involve dynamic dispatch unless you use them as an existential (like in an array [any MyProtocol]). In the example you show, this would mean the concrete type is used directly; the protocol only matters at type-checking time.
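
A toy example to illustrate what I mean (the type names here are made up):

protocol Greeter {
    func greet()
}

struct EnglishGreeter: Greeter {
    func greet() { print("Hello") }
}

// Concrete type: the compiler sees EnglishGreeter directly, so the call is
// statically dispatched (and can be inlined).
let concrete = EnglishGreeter()
concrete.greet()

// Existential: the value is boxed as `any Greeter`, and the call goes
// through the protocol witness table at runtime.
let boxed: any Greeter = EnglishGreeter()
boxed.greet()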

File size
I actually tried to test this. And indeed it looks like a release build without the protocol (only the concrete type) is slightly smaller than the one with the protocol:

Without protocol: 61752 bytes
With protocol: 62200 bytes
I.e. a difference of 448 bytes --> less than 1%

(note: I also tried to compare the generated ASM code, but there seems to be so much in there regardless of whether you use a protocol or not, it's difficult for me to compare them)

This might add up if you have quite a lot of protocols.

Perhaps somebody with more intimate knowledge about the compiler can chime in here?

KR Maarten

// Example code --> swift package init --type executable 
// main.swift:
protocol HelloWorldPrinter {
    func printHelloWorld()
}

final class ConsolePrinter: HelloWorldPrinter {
    func printHelloWorld() {
        print("Hello, World!")
    }
}

ConsolePrinter().printHelloWorld()

// to get a version without the protocol, just remove it and its annotation in ConsolePrinter.

I have also thought about this in the past (though ultimately not done anything about it as binary size was never an issue in my projects).

Theoretically you could use a build configuration or a user-defined build setting with a compiler flag to do something like this:

#if DEBUG
protocol HelloWorldPrinter {
    func printHelloWorld()
}
extension ConsolePrinter: HelloWorldPrinter { }
#endif

final class ConsolePrinter {
    func printHelloWorld() {
        print("Hello, World!")
    }
}

ConsolePrinter().printHelloWorld()

However, that quickly becomes a lot of boilerplate and might not cover all the cases you need, since usually you rely on the protocols for injection, i.e. they are used as existentials or in generic constraints. Especially the latter can become quite complex, and then they all require #if shenanigans around them...
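
For example, a consumer that is generic over the protocol would have to be duplicated wholesale (just a sketch building on the snippet above; HelloWorldRunner is a made-up type):

#if DEBUG
// Reusing HelloWorldPrinter and the ConsolePrinter conformance from the snippet above.
final class HelloWorldRunner<Printer: HelloWorldPrinter> {
    let printer: Printer
    init(printer: Printer) { self.printer = printer }
    func run() { printer.printHelloWorld() }
}
#else
// In release there is no protocol, so the generic parameter has to go as well --
// the whole type ends up duplicated.
final class HelloWorldRunner {
    let printer: ConsolePrinter
    init(printer: ConsolePrinter) { self.printer = printer }
    func run() { printer.printHelloWorld() }
}
#endif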


@maartene My main concern is performance, as these protocols are used as existentials in my other classes. I could use generic constraints to make it better, but then those generic constraints make the code less readable if you have multiple dependencies.
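
For example (AnalyticsTracking, PaymentProcessing, Checkout and GenericCheckout are made-up names, just to show the comparison):

protocol AnalyticsTracking { func track(_ event: String) }
protocol PaymentProcessing { func pay() }

// Existentials: signatures stay short no matter how many dependencies there are.
final class Checkout {
    let router: any Router
    let analytics: any AnalyticsTracking
    let payments: any PaymentProcessing

    init(router: any Router, analytics: any AnalyticsTracking, payments: any PaymentProcessing) {
        self.router = router
        self.analytics = analytics
        self.payments = payments
    }
}

// Generic constraints: no existentials, but every dependency adds a type parameter
// that every user of the class has to spell out or infer.
final class GenericCheckout<R: Router, A: AnalyticsTracking, P: PaymentProcessing> {
    let router: R
    let analytics: A
    let payments: P

    init(router: R, analytics: A, payments: P) {
        self.router = router
        self.analytics = analytics
        self.payments = payments
    }
}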

I assume you mean something like this in most cases:

protocol SideEffecting {
    func mySideEffect()
}

final class SideEffect: SideEffecting {
    func mySideEffect() { /* ... */ }
}

final class Trigger {
    var sideEffect: SideEffecting = SideEffect() // will be replaced with a mock during tests

    func trigger() {
        sideEffect.mySideEffect() // mock/spy in tests to ensure the call happens
    }
}

That's a pattern that I use quite frequently, too. I think there's not really a performance loss if the involved types (especially the one that's later mocked) are reference types (classes) as in this example. I believe there is no additional indirection that doesn't already happen due to the mocked type being a reference itself.
Personally I try to use structs more often (a quick sketch follows below), so in these cases there is a theoretical performance dip due to the additional indirection that comes from the property becoming an existential. Making the entire class generic over the affected properties' types might help, but it might also introduce other indirections, depending on how spread out its use is across modules and how well the optimizer can improve it... I'm not an expert there, but that smells a little like "premature optimization" to me.
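
The sketch (StructSideEffect and StructTrigger are made-up names, reusing SideEffecting from above):

struct StructSideEffect: SideEffecting {
    func mySideEffect() { /* ... */ }
}

final class StructTrigger {
    // The struct value gets boxed into an existential container here;
    // that box plus the witness-table call is the extra indirection I mean.
    var sideEffect: any SideEffecting = StructSideEffect()

    func trigger() {
        sideEffect.mySideEffect()
    }
}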

I think a good rule of thumb would be to ask yourself "how often is this actually called?" and "how 'clean' is the code by using this delegation pattern?"
I really doubt the performance is that big of a deal in many cases, but if it is, you might want to employ a completely different pattern to de-couple important code. The second question is about readability, which is also an important factor: Defining behavior that can be theoretically switched (whether that only happens in unit tests or not) via protocols is a good thing to keep your code understandable.


They take a little more effort, but I don't think it's a problem. Do you have a counterexample?

Swift 5.7 made this possible without two overloads.

final class Trigger<SideEffect: SideEffecting> {
  var sideEffect: SideEffect

  // Swift 5.7 can infer `SideEffect` from the default expression,
  // so `Trigger()` works without a second overload.
  init(sideEffect: SideEffect = .default) {
    self.sideEffect = sideEffect
  }
}
extension SideEffecting where Self == SideEffect {
  static var `default`: Self { .init() }
}

// MockSideEffect would be the test double conforming to SideEffecting.
private extension SideEffecting where Self == MockSideEffect {
  static var mock: Self { .init() }
}

Trigger(sideEffect: .mock)

This idea that existentials are significantly slower is a myth that seems to have taken hold in the community, alas. As an alternative you might want to watch this collection of videos about dependency injection and alternatives for testability. The first video is free, but be warned: this content may result in dependency.


I'd second that! Keep in mind that it basically boils down to dynamic dispatch, and we have been using that for literally decades in Objective-C (it's a little more than that, I think, what with witness tables and all, but basically that's it).
From what I see in lots of code, a bigger issue in practice could be that people (who come from other languages, like Java) might use classes where they could use structs instead, but that still works nicely in many (if not most) cases.

Yup, I know, and your snippets showing some convenience extensions are a nice way to express that concisely. However, the Trigger class is then generic, which is totally fine here, but that could become quite a hassle if you have lots of properties that you want to use protocols and mocks for. Just using an existential (i.e. any SideEffecting) for the property is often sufficient, imo (whether you inject that via the init method or in-line doesn't matter).
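
I.e. something like this is usually enough for me (just a sketch reusing the SideEffecting types from above; SimpleTrigger is a made-up name):

final class SimpleTrigger {
    // An existential property: no generic parameter leaks into users of SimpleTrigger.
    private let sideEffect: any SideEffecting

    // Real implementation by default, a mock/spy in tests.
    init(sideEffect: any SideEffecting = SideEffect()) {
        self.sideEffect = sideEffect
    }

    func trigger() {
        sideEffect.mySideEffect()
    }
}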
