Inner types that don't need the type parameters of generic outer types

In Swift, the type parameters of an outer type are in scope for an inner type. For example:

struct Outer<Param> {
    struct Inner {
        // Param is available here
        // but is not necessarily used.
        …
    }

    let inner: Inner
    …
}

let inner: Outer<Arg>.Inner

The inner type can't be directly used from an external scope without supplying the outer type's parameters: let inner: Outer<Arg>.Inner must include <Arg>. That makes sense…

Suppose Inner is private; I only ever need to use it within Outer. And suppose it never uses Param. I imagine the inner type's generated code will be specialized needlessly, but is that so?

For this reason, when the outer type is generic and the inner type need not be, I've been avoiding the use of inner types and instead creating new top-level types with prefixed names (OuterInner). Is that a good habit?
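
For concreteness, a minimal sketch of that habit, with placeholder names:

struct OuterInner {
    // Genuinely non-generic: Param is not in scope here at all.
    var count = 0
}

struct Outer<Param> {
    // The would-be nested type is referenced by its prefixed top-level name.
    private let inner = OuterInner()
}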

This habit has created some codebase churn when I've introduced a type parameter to an outer type and felt I had to move the inner types out.

An interesting question. I too am now curious about the answer. Until your post, I had assumed the opposite: that a non-generic inner type would be fully non-generic in the resulting binary. It would be surprising to me (and concerning) if it were any other way (because of the performance problems that would create).

And consider not just nested structs (and actors and classes) but nested enums, which is a particularly common occurrence.

They are definitely generic, because you're allowed to use the outer parameters in the Inner type…and even if you don't today, you might in the next release of the library. For binary-compatible library evolution, that means it can't be decided just by checking whether the Inner type currently uses the outer parameters; inner types really do have to be tied to the outer type.

As a reminder, generic specialization is an optimization and isn't guaranteed for function bodies [EDIT: which in this case is a good thing]. If you're not using the outer parameters in your inner types, I don't think it will have a very big impact on code size or run time. But the outer parameters will be extra hidden parameters that get passed along with those methods by default (though that can get optimized away too).

A compromise I've seen is to define OuterInner like you describe, but then put a typealias in the outer type. Then it really is the same non-generic type, interchangeable between Outer<Int> and Outer<String>, but the real name of the type will peek through sometimes.
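
For illustration, a minimal sketch of that compromise (the names are placeholders):

struct OuterInner {
    var value = 0
}

struct Outer<Param> {
    // The nested name is only an alias; the real type stays non-generic.
    typealias Inner = OuterInner
    let inner: Inner
}

// The alias names the same type no matter what Param is,
// so values are freely interchangeable…
let a: Outer<Int>.Inner = OuterInner()
let b: Outer<String>.Inner = a
// …but diagnostics and reflection will show the real name, OuterInner.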

"OuterInner" is one option. A constraint is another.

struct Outer<Param> {
  private struct _Inner where Param == Never { } // Could be anything. `Never` is a choice.
  private typealias Inner = Outer<Never>._Inner

  private let inner: Inner
}

And if the primary usage is external, instead:

struct Outer<Param> {
  struct Inner where Param == Never { }
  let inner: Outer<Never>.Inner
}

let inner = Outer.Inner()

Extensions:

struct G<T> {
  struct Inner {}
}

extension G.Inner {
  func f() { print(T.self) }
}

Thanks for the constraint idea; it would help me avoid a source-breaking change next time I have to introduce an outer type parameter.

I had tried to leave a typealias behind but found I couldn't declare one in a constrained extension, or something like that. I didn't think to leave the inner type in place and constrain its declaration!

For routine use it's probably more ceremony than I want to ask my teammates to adopt going forward, but compared to a breaking change it's a way out.

Good point about specialization happening only some of the time. I don't have intuition for when it does and doesn't. Maybe "when a type parameter is used" is a good minimum criterion. :)

I suppose for an empirical answer I ought to set a breakpoint in Inner, hit it from a couple of different concrete types, and compare the instruction pointer in each case.
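
Roughly, a minimal sketch of that experiment (names are placeholders):

struct Outer<Param> {
    struct Inner {
        // Param is in scope here but deliberately unused.
        func poke() {
            print("hello") // set a breakpoint on this line
        }
    }
    let inner = Inner()
}

// Hit the breakpoint from two different concrete instantiations and compare
// the instruction pointer at each stop to see whether the code was shared.
Outer<Int>().inner.poke()
Outer<String>().inner.poke()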

I run into this a lot with generic model types and their CodingKey types. It's not possible to use some of the other suggested workarounds, because the CodingKey type needs to be able to witness the serialization requirements for all T, not just for some unreachable T like Never. I am reluctant to define an external OuterCodingKey type, because then the coding key type takes on a “life of its own” and is not physically tied to its model type.

Never is Codable now, so it's good for this kind of thing, but it's not necessary. Automatic synthesis is broken for this technique, but what else are you saying doesn't work?

struct Landmark<Name: Codable>: Codable {
    var name: Name

    enum CodingKey: String, Swift.CodingKey where Name == Bool {
        case name = "title"
    }

    init(from decoder: Decoder) throws {
        let values = try decoder.container(keyedBy: Landmark<Bool>.CodingKey.self)
        name = try values.decode(Name.self, forKey: .name)
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: Landmark<Bool>.CodingKey.self)
        try container.encode(name, forKey: .name)
    }
}

No, what I mean is that if I have Landmark<ID> and I'm using it with both Landmark<A> and Landmark<B>, I can't constrain Landmark.CodingKey to just one of A or B.

This is interesting! I am trying to figure out what you're saying* but cannot. Do you have code to share which demonstrates it?

*Just forcing the associated type to be there is not the problem…

protocol Codabley: Codable {
  associatedtype CodingKey
}

extension Landmark: Codabley {
  enum _CodingKey: String, Swift.CodingKey where ID == A {
    case name = "title"
  }

  typealias CodingKey = Landmark<A>._CodingKey
}