Dependency injection using `typealias` declarations

Hi folks, has anyone else used typealias declarations as their dependency injection mechanism? I haven't found anything written about it, so I wrote it up myself (Dependency injection · Swift DI with type alias declarations), but it feels like either it should be in common use, or there's a really good reason why it's even worse than all the other options.

Here's an example of what I mean:

Injection point declaration

#if TEST
typealias AnimalForUseInSomeSpecficContext = Cow
#else
typealias AnimalForUseInSomeSpecficContext = Sheep
#endif

Real type declaration

struct Cow { let name: String }
struct Sheep { let name: String }

Usage

let animal = AnimalForUseInSomeSpecficContext(name: "Daisy")

print(animal) // Cow(name: "Daisy") when compiled with TEST

Note that I don't need protocols, or any other magic, to make this work; the compile will fail if something isn't correct.

Any ideas as to what's wrong with it?


One downside with this approach is that the type system can't make any guarantees about what animal is when your compilation context changes. If, for example, Cow were to change its name property to nickname, you wouldn't necessarily find out that your code calling animal.name broke until you tried compiling with TEST enabled.
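A hypothetical sketch of that failure mode: suppose the TEST-side type renames `name` to `nickname`. The call site still compiles in the non-TEST context, and the breakage only surfaces when someone builds with `-D TEST`.

```swift
#if TEST
// Suppose the TEST variant has renamed `name` to `nickname`:
struct Cow { let nickname: String }
typealias AnimalForUseInSomeSpecficContext = Cow
#else
struct Sheep { let name: String }
typealias AnimalForUseInSomeSpecficContext = Sheep
#endif

// Compiles fine without TEST, but fails to compile with -D TEST,
// because Cow no longer has an `init(name:)` or a `name` property:
let animal = AnimalForUseInSomeSpecficContext(name: "Daisy")
print(animal.name)
```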

A better approach might be to use a combination of protocols, type erasure, and conditional compilation like so:

protocol Animal {
    var name: String { get }
}

struct Cow: Animal { let name: String }
struct Sheep: Animal { let name: String }

enum DI {
    static var daisy: Animal {
        #if TEST
        return Sheep(name: "Daisy")
        #else
        return Cow(name: "Daisy")
        #endif
    }
}

let animal = DI.daisy

print(animal) // Cow(name: "Daisy")

I've toyed around with using protocols to keep the two types in line, but I seem to always end up with miles of boilerplate code. I also found that - at least for mock objects - I might actually want the two to diverge slightly so that I wasn't writing stubs merely to keep the compiler happy.

With code like the example above, I've had to move the initialisation to a different location, away from its point of use, which I then find difficult to track :slightly_frowning_face:.

If I did want type erasure, I played around with:

#if TEST
let AnimalForUseInSomeSpecficContext: Animal.Type = Cow.self
#else
let AnimalForUseInSomeSpecficContext: Animal.Type = Sheep.self
#endif

It did have the (small) downside of requiring `init` to be called explicitly, producing this:

let animal = AnimalForUseInSomeSpecficContext.init(name: "Daisy")
print(animal) //  Cow(name: "Daisy")
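Worth noting: for the metatype version to compile, `Animal` also needs an initialiser requirement, so that `.init` can be called on the erased `Animal.Type` value. A self-contained sketch of what I assume the full setup looks like:

```swift
protocol Animal {
    var name: String { get }
    // Required so `.init(name:)` can be called on an `Animal.Type` metatype.
    init(name: String)
}

// For structs, the synthesised memberwise initialiser satisfies the requirement.
struct Cow: Animal { let name: String }
struct Sheep: Animal { let name: String }

#if TEST
let AnimalForUseInSomeSpecficContext: Animal.Type = Cow.self
#else
let AnimalForUseInSomeSpecficContext: Animal.Type = Sheep.self
#endif

// `.init` must be spelled out explicitly on a metatype value.
let animal = AnimalForUseInSomeSpecficContext.init(name: "Daisy")
print(animal.name)
```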

Do you find things normally get moved out of context when using DI?

I'm not sure how I feel about the whole setup in the first place, but you can use generic typealiases to enforce constraints:

protocol Animal {
  var name: String { get }
}

typealias DIAnimal<T: Animal> = T

struct Truck {}
typealias AnimalForUseInSomeSpecficContext = DIAnimal<Truck> // error

Alternatively, you could have Mattt's DI enum conform to a protocol with constrained associated types, and then your typealiases get validated.
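A minimal sketch of that idea (names like AnimalProvider and Chosen are my own invention): the associated type's constraint is what does the validation, so a non-conforming choice is rejected at compile time.

```swift
protocol Animal {
    var name: String { get }
}

struct Cow: Animal { let name: String }
struct Truck {}

// Any conformance must pick a Chosen type that itself conforms to Animal.
protocol AnimalProvider {
    associatedtype Chosen: Animal
}

enum DI: AnimalProvider {
    typealias Chosen = Cow       // compiles
    // typealias Chosen = Truck  // error: Truck does not conform to Animal
}

let animal = DI.Chosen(name: "Daisy")
print(animal.name)
```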


That's really cool, thank you. I (obviously) like the idea of keeping the call-site unchanged.

Is there a way to genericize that further to allow an arbitrary protocol? Something that looks like this (but successfully compiles):

typealias Constrain<T: P, P> = T 

Yes, this has been pitched before by @anandabits as "Generalized supertype constraints".

You beat me to it @DevAndArtist. This is still one of my most desired generics enhancements. @Douglas_Gregor has said it shouldn’t be too hard to support but the idea hasn’t gotten much attention. I think it is mainly waiting for someone with the time and ability to work on implementation. I would be happy to drive the SE proposal if somebody was willing to do that. :slight_smile:

Yeah, I'm in the same boat; I really love the idea of that pitch because it would make generics so much more flexible.

I too would love that kind of thing; I think I've hit that issue a few times, as it seems so natural to expect it to work already. Fortunately for me in this context, it's merely a nice-to-have.

Although the code is welcome, I am probably more interested in your concerns that prompted this:

I don't want to be using something that disrespects Swift, Swiftiness, or might break somewhere down the road.

Eh, I don't think I can put it into useful feedback. I don't like "dependency injection" as a pattern in general, and using a static configuration as described in your blog post means you're more likely to end up testing something that's different from what you ship. But being for or against dependency injection is different from thinking something is Swifty or not.

(I do think that having a protocol here is a good thing, even with the extra maintenance.)


I'm generally with you regarding dependency injection. I didn't see the need or benefit for it until UI testing became a bigger thing in iOS. If I have to use it, though, I want it to be approximately transparent in the code where it's used, and also in the stack trace (which seems to be ignored far too much).

Being able to enforce protocol conformance (if/when I want it) in a way that is transparent to the call-site will really help.