Retroactive Conformances vs. Swift-in-the-OS

Let's start with the following definition...

Retroactive conformance: extending a class, struct, or enum defined in another module to conform to a protocol defined in another module. The protocol and the conforming type may come from the same module or from different modules; the important point is that neither lives in the same module as the extension.

Hopefully uncontroversial. Retroactive conformances are a useful feature, mainly because (as Dave A has said in the past) they allow you to take types from one library and protocols from another library and make them work nicely together.

However, retroactive conformances suffer from the "What if two people did this?" problem:

// Framework A
extension SomeStruct: CustomStringConvertible {
  public var description: String {
    return "SomeStruct, via A"
  }
}
// Framework B
extension SomeStruct: CustomStringConvertible {
  public var description: String {
    return "SomeStruct, via B"
  }
}
// main.swift
import A
import B
print(SomeStruct()) // ???

If framework A imported framework B or vice versa, the compiler would be able to complain that there might be a conflict, but as things stand there's no way to catch this in advance. And in this case there's not really anything the app itself can do about it, either.

Today, we just allow this to happen, with the Swift runtime deterministically picking one framework's conformance to "win". I believe this is based on the order the dynamic libraries get loaded or the static libraries get linked together, i.e. not something you really want to be depending on!

So, retroactive conformances do have a flaw. The fully safe rule would be to disallow them completely, or only allow them in the app target (and cross our fingers about any possible dynamically-loaded plugins). But they're a little too useful for that---you wouldn't be able to make a Swift package that combines two libraries together, for example.
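
To make the trade-off concrete, here is a sketch of the kind of "glue" package a blanket ban would rule out. Everything in it is hypothetical: the modules CoolCollections and TidyLogging, the type RingBuffer, and the protocol LogRepresentable all stand in for two unrelated third-party libraries.

// GlueKit: a hypothetical package that glues a third-party type to a
// third-party protocol it doesn't own.
import CoolCollections      // hypothetically owns RingBuffer
import TidyLogging          // hypothetically owns LogRepresentable

// The retroactive conformance *is* the package: neither the type nor the
// protocol is declared in this module.
extension RingBuffer: LogRepresentable {
  public var logDescription: String {
    return "RingBuffer(count: \(count))"   // `count` assumed to exist
  }
}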


However...the situation is even worse with libraries in the OS. Imagine this scenario:

// CoreKit overlay in iOS 15
public struct SomeStruct { … }
// main.swift
import CoreKit

extension SomeStruct: CustomStringConvertible {
  var description: String {
    return "SomeStruct, via my app"
  }
}

print(SomeStruct())

Everything's fine, right? Until the OS update.

// CoreKit overlay in iOS 16
public struct SomeStruct { … }

// A non-retroactive conformance! But one with availability.
// We don't actually support this syntax yet, but we're going
// to need something like it.
extension SomeStruct:
    @available(iOS 16, *) CustomStringConvertible {
  public var description: String {
    return "SomeStruct, via CoreKit itself"
  }
}

Funny new syntax aside, we have a problem. The already-compiled version of my app is going to run on iOS 16 and expect to use its own implementation of CustomStringConvertible. But the one in the OS will win, because it's non-retroactive. Worse, if I recompile my app to support iOS 16 and remove my own implementation of CustomStringConvertible, it won't behave correctly on iOS 15!

Unless and until we come up with a better answer here, I propose the following rule:

It is an error in Swift 5 (warning in Swift 4 mode) to declare a retroactive conformance if both the protocol and the conforming type are from "resilient" libraries or system frameworks.

That "resilient" refers to "libraries that may be swapped out without recompiling clients". In Swift, such libraries are compiled with extra indirection in their ABI in order to handle future changes. We haven't formalized this feature yet, so for now you can read "resilient libraries" as "the standard library and SDK overlays". The "system frameworks" part is thrown in to account for Objective-C code, which has the same problem but no formal distinction between "libraries that may be swapped out without recompiling clients" and "libraries whose exact version is known by the client".

What do people think? It kind of stinks, but it definitely sidesteps these problems.

P.S. Once we've worked this out, there's a bonus problem involving class inheritance. I'll bring that up later, though.

Appendix: Co-existing conformances

A few other Apple people have pointed out that it would actually be possible to support conflicting conformances, except for in dynamic casts and features that use them (like print). This is because when a conformance is used at compile time, the compiler knows exactly where to find it, and it can be sure to continue using that implementation even if another one appears at run time in another module. However, it does complicate the language and runtime a little to support these "compile-time-only" conformances, and it still doesn't solve the problem when you do want to make the conformance available for dynamic casting, like the CustomStringConvertible example above.
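
As a rough illustration of that split, reusing the SomeStruct example: code that uses the conformance statically binds to whatever the compiler saw, while a dynamic cast has to go through the runtime's single process-wide table.

// Statically resolved: the caller hands in the conformance it could see at
// compile time, so the result doesn't change if another module registers a
// second conformance at run time.
func describeStatically<T: CustomStringConvertible>(_ value: T) -> String {
  return value.description
}

// Dynamically resolved: the cast asks the runtime, which only keeps one
// conformance per (type, protocol) pair for the whole process.
func describeDynamically(_ value: Any) -> String {
  if let value = value as? CustomStringConvertible {
    return value.description
  }
  return "not convertible"
}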

Another option would be to come up with an attribute that indicates that a conformance is a "fallback". This would slightly slow down regular code that uses the conformance, as it would start off with a dynamic lookup for a non-fallback implementation, but that result could be cached. Again, though, that's making the language more complicated when we're not yet sure we have a need to.
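
Purely to illustrate the shape such an attribute might take (none of this syntax exists, and the attribute name is made up):

// Hypothetical syntax: a conformance that yields to any non-fallback
// conformance another module provides at run time.
@fallbackConformance
extension SomeStruct: CustomStringConvertible {
  var description: String {
    // Used only if nobody else registers this conformance.
    return "SomeStruct, via my app (fallback)"
  }
}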


Possibly interested Apple people: @Joe_Groff, @Douglas_Gregor, @dabrahams, @Slava_Pestov

15 Likes

For the module vs. module conflicts, in no particular order and without much real consideration:

  • Allow internal-only retro conformances, so frameworks don't have to publish every one they make.
  • Allow consumers to disambiguate all types of symbols. e.g. (SomeStruct.A.description, SomeStruct.B.description, A.someFunc(), B.someFunc())
  • Allow consumers to provide their own conformance for conflicted retros. This could merely be choosing a winner or the ability to provide their own implementation. Perhaps conflicts would be considered non-conforming?

For the system, the resiliency rule may be fine.

(As an aside, I've never understood why Apple doesn't take advantage of the ability to ship multiple versions of a framework in a single bundle, instead of the hacks that are implemented to let one executable try to dynamically match behavior across all versions.)

I have a question. What is the principle behind the decision (which to some degree causes the stated problem) that a non-retroactive conformance supersedes any retroactive one? This is of course subjective, but my intuition is that the 'closest' conformance relative to the current module should supersede, or rather shadow, the others, the same way this works for members and, even though we can disambiguate them, for types.

I had a conversation about this with @Joe_Groff on Twitter a while ago. In that conversation I shared a link to a sample project demonstrating the issue and the concerns I have about it. Anyone who is interested in understanding this better should take a look at that project.

Joe's response (based on the sample project) was:

SortedArray and SortedArray would be different types you couldn't intermix. There would need to be disambiguation referring to SortedArray from a context where multiple conformances are visible. This isn't done yet.

I don't know if that entirely solves the problem demonstrated in the sample project, but I definitely don't want to see any potential for that kind of arbitrary behavior.

(continuing aside)

Keeping multiple copies of a framework resident has significant memory costs (even to accommodate multiple resident ISA slices, like i386/x86_64 or armv7/arm64, which is part of why macOS and iOS have so aggressively deprecated their 32-bit slices). It's more efficient to have one version of a framework mapped into memory even if it has to accommodate different versioned behavior in different processes.

4 Likes

If you can only pick one conformance for the entire process, the non-retroactive one is the only uniquely-identifiable one, and it's the one that matches the intent of the type or protocol owner. If you allow multiple conformances, then yeah, we could do something more like "closest". But we'd have to be very careful to define what that means, especially when it comes to dynamic casts. I don't see good things down that direction.
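
To see where "closest" breaks down, consider a sketch like this, where a third framework C imports neither A nor B:

// Framework C: imports neither A nor B, so neither conflicting
// conformance is "close" to this code in any meaningful sense.
public func describe(_ value: Any) -> String {
  if let value = value as? CustomStringConvertible {
    // Which module's description should the runtime hand back here?
    return value.description
  }
  return String(reflecting: value)
}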

6 Likes

That's the intended runtime model, but Jordan is right that the compiler doesn't always carry around the right conformance in all circumstances, and to @Jon_Shier's point, there's also no syntax for explicitly referring to a specific conformance or its members with qualified names.

1 Like

If I'm understanding this correctly, this seems like exactly the same problem as orphan instances in Haskell. Afaik, the consensus there is that they're usually best avoided although this is not enforced. However, GHC (the most common Haskell compiler) will raise a warning in such a case.

2 Likes

I don’t know what we should do about the scenario described in the original post, but I think it needs to be part of a larger discussion about resilient modules and our story for library evolution.

It would be highly unfortunate to prohibit retroactively conforming system-framework types to system-framework protocols. Especially since we would be preventing people from making a useful conformance today, on the grounds that in the future that conformance might be recognized as *so* useful that it would actually become available automatically.

2 Likes

Seems like a good rule for resilient libraries and obj-c frameworks.

For libraries you compile against directly, I have often wanted some simple way of specifying which conformance wins (I have run into this).

I know we've had huge problems with this in the past at Apple with people adding, say, NSCoding conformance to a type, and then getting in trouble when a later Apple OS implements NSCoding in a different way. (See "Why you can’t make someone else's class Decodable".) If this restriction is the "stick", the "carrot" might be the thing that gets talked about every now and then: forwarding a protocol to a particular property, to make it easier to make wrapper types (the "newtype" solution in Haskell, as @yxckjhasdkjh pointed out).

To @Jon_Shier's points, I actually do think we need a way to disambiguate members from different modules, but that doesn't fix the conformance issue. For the rest of it, the sticking point is usually the behavior of dynamic casts. If we didn't have to worry about those, this would all be a lot simpler (see the "appendix" section in the original post).

3 Likes

Would it make sense (as another option) to split the difference here and say: without introducing any new attribute, retroactive conformance of a third-party or resilient type to a third-party or resilient protocol is always a "fallback"?

1 Like

I personally really want the fallback behavior to be explicit, since it means writing your code in such a way that it still works if the implementation changes out from under you, or even behaves differently in concrete and generic contexts. But you're right, that is an option.

1 Like

Would it be at all feasible to have an @available(beforeButNotIncluding: iOS 16) or similar? There are likely numerous problems with this (e.g. it requires updating code if an upstream library introduces an @available(...) conformance), but it seems like this may retain some of the valuable flexibility of retroactive conformances.

Incidentally, this is a feature of Swift I've wanted for a while. There are circumstances where I'm interested in adding convenience conformances to types that my module does not own (e.g. conform this abstract C data structure from a system library to Collection so I can work with it more easily). However, I usually come to my senses about halfway through writing the conformance and remember that I'm just begging to have either my code or someone else's break.

This is not a critical limitation, because I can always declare a wrapper data type to allow me to do this. But it would help address part of the issue.

This is also an example of a good general pattern for conforming other people's types: if you want to avoid ambiguity, a good way to do so is to generate a wrapper type that you do own, and conform that type instead.
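
A minimal sketch of that pattern; SystemToken is a stand-in for a type owned by someone else:

// Stand-in for a type owned by a system framework; we don't control it.
struct SystemToken { var rawValue: Int }

// Instead of retroactively conforming SystemToken, wrap it in a type we do
// own and conform the wrapper. Nobody can collide with this conformance,
// and an OS update can't change it out from under us.
struct DescribableToken: CustomStringConvertible {
  var token: SystemToken
  var description: String { return "token #\(token.rawValue)" }
}

// Wrap at the boundary, then pass the wrapper around.
print(DescribableToken(token: SystemToken(rawValue: 42)))  // token #42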

2 Likes

I guess this is related to this: Best way to implement new feature for old SDKs?
Ideally, solving the problem @jrose describes would also yield a solution to my question/problem.

¯\_(ツ)_/¯

I've long wanted a way to declare a "weak" conformance in my framework. That is, the compiler recognizes the conformance, but will automatically defer to any other conformances if there is a conflict.

Ultimately, though, we need a way to explicitly choose a winner in cases where the compiler might get it wrong.

1 Like

This is going to break a lot of code without a clear migration path.

If we do go forward with this breakage, we definitely need some way to help people achieve similar functionality. @jrose What is this solution you are talking about? Can you go into more detail? Haskell's newtype just looks like a typealias.

Appendix: Co-existing conformances

A reference to prior art for this: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2098.pdf


A reasonable answer to the dynamic casting problem raised in that appendix, IMO, is that the cast fails unless one of the following holds:

  • there is only one such conformance in the system or
  • there is only one such conformance visible at the point where the concrete type is converted to the existential protocol type.
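
Under that rule, the conflicting example from the original post would behave roughly like this (hypothetical behavior, not what Swift does today):

// main.swift, with the conflicting conformances from A and B both loaded.
import A
import B

let value: Any = SomeStruct()

// Two conformances exist in the process, and both are visible here, so
// under the proposed rule this cast would fail rather than arbitrarily
// picking a winner.
if let convertible = value as? CustomStringConvertible {
  print(convertible.description)
} else {
  print("cast failed: ambiguous conformance")   // hypothetical outcome
}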

Haskell uses newtype to create simple, zero-cost wrapper types. E.g.

newtype Id = Id String

is kind of the same as writing

struct Id {
  let value: String
}

in Swift, except that Haskell guarantees the wrapper type is optimized away at runtime; it exists purely for type-safety purposes. There is a somewhat similar library for Swift: GitHub - pointfreeco/swift-tagged: 🏷 A wrapper type for safer, expressive code.

So, in Haskell, people (afaik) say "retroactive conformances" (they have a different name in Haskell, but that's beside the point) are best avoided, so your best bet if you don't own the type and you don't own the protocol is to create a wrapper type that you own and add the conformance for that wrapper type.

4 Likes