SE-0364: Warning for Retroactive Conformances of External Types

For what it's worth, the initial version of the implementation used a @_retroactiveConformance attribute, but @Ben_Cohen recommended this approach as a less syntactically heavy silencing technique. I think adding a new one-off attribute to the compiler just to silence a warning is kind of overkill from a compiler maintenance perspective, but I'll put all of this in the alternatives section.

2 Likes

Mmm, I am surprised by this interpretation.

My understanding is that retroactive conformance (at least in the non-resilient setting) is an explicitly supported use case and indeed tentpole feature of Swift, even if conforming another’s type to a third party’s protocol is unwise and brittle. So far as the user-facing model of protocols and their conformances are concerned, then, duplicative conformances to the same protocol without the possibility of conflicting implementations of requirements isn’t invalid in some explainable way.

Are you referring to some implementation-level limitation in the Swift compiler? Why do you label this scenario as “misuse”?

8 Likes

It seems that the full-module-name solution forever hides the problem from the developer, without much indication that there's a potential underlying bug.

extension Foundation.Date: Swift.Identifiable {
  // ...
}

We already have warnings in the language. This seems like the canonical example of a compiler warning.

Let's ask the question this way: if Swift had individual warning suppression, would we pitch full module names here? I don't think so, and so I can't support this proposal as written.

5 Likes

It depends what you mean by explainable—I think it's perfectly explainable to say that it's invalid essentially by fiat: Swift's model for protocol conformances is that for each type-protocol pair there is either a unique conformance or none at all. The reasons for this might not apply to requirement-less protocols, but IMO it's reasonable to decide that we treat all conformances equally in this regard.
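That unique-conformance rule is visible directly in the compiler's diagnostics. In this sketch (hypothetical names), uncommenting the second extension produces an error:

```swift
protocol Marker {}
struct Value {}

extension Value: Marker {}
// extension Value: Marker {}
// error: redundant conformance of 'Value' to protocol 'Marker'

// For the Value/Marker pair there is exactly one conformance, or none.
print(Value() is Marker)  // true
```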

Also, I think there's still the potential for a soundness issue with requirement-less protocols since adding requirements is allowed for resilient libraries (as long as a default implementation is provided). So while marker protocols are guaranteed to never have any requirements (nor any runtime impact at all), in the general case you could have a situation like:

// v1.0
public struct S {}
public protocol P {}

// v2.0
public struct S: P {
  public var x: Int { 0 }
}
public protocol P {
  var x: Int { get }
}
public extension P {
  var x: Int { 1 }
}

Granted, this case is perhaps slightly easier for the library author to reason about, since they at least control both implementations of x rather than having to be defensive against arbitrary implementations of x that may already be out in the wild, but it doesn't solve the underlying issue AFAICT.

In this case, to my understanding, the “original” x cannot be a witness for the protocol requirement without recompilation, and this would not be meaningfully different from v2.0 of the library vending a non-protocol-requirement extension method of the same name, which is a supported feature, albeit with its own complications (as all shadowing has); but the runtime behavior is not undefin(ed|able).
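The shadowing situation described above can be sketched as follows (hypothetical names; `label` is deliberately not a protocol requirement):

```swift
protocol Q {}

extension Q {
    // A library-style extension method that is NOT a protocol requirement.
    var label: String { "library" }
}

struct T: Q {
    // A same-named member on the concrete type shadows the extension
    // method for direct calls only.
    var label: String { "client" }
}

let t = T()
print(t.label)  // "client": direct calls see the concrete member

func genericLabel<U: Q>(_ u: U) -> String {
    // Non-requirement extension members are statically dispatched,
    // so this resolves to the extension implementation.
    u.label
}
print(genericLabel(t))  // "library"
```

Because `label` is not a requirement, there is no witness table entry to conflict over; the two implementations coexist and which one runs is determined statically at each call site.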

In the general case, there is a difference between restating a protocol conformance and reimplementing one.

Swift will diagnose in specific circumstances when you’ve restated a conformance you don’t have to restate, but it is more akin to the unmodified-var-please-switch-to-let diagnostic than to the case of conflicting implementations of a protocol’s requirements—the latter being unsupportable (invalid) in Swift’s overarching framework where a type can conform to a protocol in only one way.

Protocols with no required members are just the degenerate case where there is no distinction between these two scenarios. We can decide many things by fiat, but I am struggling to see why we should prohibit anew in Swift Next something that currently works without a demonstration of active harm.

4 Likes

Hm—perhaps I'm exposing a gap in my understanding here but I don't see how that can be the case (I'm also not entirely clear which x you're referring to as the "original" since they were both added in the same version; I'm assuming x in the extension?). If the client code looks like:

import SomeLibrary

extension S: P {}

then with v1.0 of the library you've simply declared a retroactive conformance of S to P—no issue, strictly. But since adding a (defaulted) protocol requirement is a resilient change, SomeLibrary could have updated in the following manner:

// SomeLibrary v1.5
public struct S {}
public protocol P {
  var x: Int { get }
}
public extension P {
  var x: Int { 1 }
}

How would this be a resilient change if the definition of x in the extension isn't witnessing P.x within the client module? Otherwise, there would be no witness for P.x in the conformance S: P.

ETA:

Yeah I take your point, and if there's really no soundness issue in the requirement-less case then I don't feel super strongly. I just think that there's some value in having a simple rule here ("module-qualify retroactive conformances") that covers the general case rather than trying to subset out narrow cases that are okay.

1 Like

It's not true that there's no soundness issue in the no-requirements case. Protocols typically have API requirements, but it's also pretty common for them to have semantic constraints that are not encoded in API. It's pretty easy to retroactively conform a type to a protocol with semantic constraints that are not satisfied, and similarly easy for that to introduce soundness holes.

These issues are somewhat different in character, but they are quite real.
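As a sketch of the kind of semantic constraint the compiler cannot check, here is a conformance (hypothetical type) that satisfies Hashable's API but breaks its documented contract that equal values must produce equal hashes:

```swift
struct BadKey: Hashable {
    let value: Int

    static func == (lhs: BadKey, rhs: BadKey) -> Bool {
        lhs.value == rhs.value
    }

    func hash(into hasher: inout Hasher) {
        // Contract violation: equal keys can produce different hashes,
        // so Set and Dictionary lookups become unreliable.
        hasher.combine(Int.random(in: 0..<1000))
    }
}

var set = Set<BadKey>()
set.insert(BadKey(value: 1))
// This membership test may spuriously return false even though an
// "equal" key was inserted.
print(set.contains(BadKey(value: 1)))
```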

2 Likes

This strikes me as (mostly) orthogonal to the proposal. That it's possible to write a conformance which satisfies the compiler but still violates the semantic constraints of the protocol is an issue well beyond retroactive conformances, and not one that can reasonably be addressed at the language level. Users can write such invalid conformances for their own types and cause similar soundness holes.

ETA:

IOW, any invalid conformance such as:

extension NotMyType: DontConformToThis {}

seems like it would have all the same issues as:

struct Wrapper: DontConformToThis {
  var x: NotMyType
}

3 Likes

Indeed, that's what I was getting at with "these issues are somewhat different in character." I merely want to point out that it is not the case that conforming a type to a protocol with no requirements cannot introduce soundness holes.

1 Like

Error does have requirements, but they're hidden because of the leading underscores.

2 Likes

Frustratingly, for Error in particular, even a single, unambiguous conformance on the wrong type will ruin someone’s day. We don’t even need to have two conflicting declarations — one is quite enough.

There is a significant amount of code out there that is relying on T is Error returning false for specific types (esp. types in the stdlib or one of the platform SDKs). This makes innocent-looking declarations such as extension String: Error {} both source- and ABI-breaking, even if they only occur once.
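A hedged sketch of that breakage; `describe` here is a hypothetical stand-in for any code that branches on an `Error` cast:

```swift
// The retroactive conformance under discussion:
extension String: Error {}

func describe(_ value: Any) -> String {
    if let error = value as? Error {
        return "error: \(error)"
    }
    return "value: \(value)"
}

// Without the conformance above, this would return "value: oops";
// with it in the same executable, the Error branch is taken instead.
print(describe("oops"))  // "error: oops"
```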

Retroactively conforming types to Error is not a wise thing to do. (To be clear, neither is relying on T is Error returning false, but here we are.)

I would very much like the new warning to trigger on conformances to Error.

8 Likes

In my view, being able to extend types you import from other packages with protocols from other packages is a key feature of Swift. In the code I have seen and the code I have written this is very common and very useful. Without it we would be forced to constantly write lots and lots of wrapper types that would themselves possibly need to conform to other protocols. The general effect of this would be that most libs would be strongly discouraged from exporting any concrete types, as for most users these would be useless.

For this reason I feel Swift should at least provide some way for developers of libs to explicitly expose protocols that they promise you can use in this way (e.g. protocols that they declare publicly for users but promise never to conform their own publicly declared types to), just as we have frozen types that provide a contractual promise from the lib developer so that clients can safely assume things will not change.

Furthermore, I wonder if perhaps there is something that could be added to the runtime so that in this situation, where the library has added a conformance (without the library consumer having been recompiled), the compiler is able to track which implementation it can safely use.

Not sure if it was mentioned in the proposal, but this should also only apply to frameworks you link against dynamically.

4 Likes

Sorry I know we're past review but I just saw this and I haven't seen many counter-points.

Similar to what @hishnash said, as an application developer in Swift I consider this a feature. In terms of source compatibility, in the 7 years now of managing a codebase with multiple hundreds of thousands of lines of Swift I cannot think of even 1 single time anyone on our team cared that we had to spend a couple days refactoring some code because something changed in either a dependency or in Swift itself. Usually these refactors are because there's an improvement, and that improvement garners excitement for that work even if it's tedious. Source compatibility from my "application developer" view is mostly a non-issue. Granted this also implies that changing retroactive conformances to using "module-qualifying" paths is also a non-issue.

But what will be an issue is if this becomes impossible to do in Swift without wrappers. I also use Rust in my work daily and there are plenty of things Swift can do that I wish Rust could, and retroactive conformance is near the very top of the list. When there are crates that expose traits but miss internal types which likely should conform to those traits, I'm forced to make wrapper types and pollute my code with them; they are incredibly annoying and noisy.

The ideal scenario, IMO, is that if a retroactive conformance does not yet exist then the user has a way to conform, and if it suddenly does exist then the user is forced to clean up their code, which (correct me if I'm wrong) will happen as soon as the developer compiles anyway. Since some individuals in this forum wish to never have this capability / make it an error, I think the proposal as written, with a warning and an escape hatch, is an OK compromise: +1, though I do not wish this to be pushed any further, at least for app development.

Edit: After reading the proposal a second time I noticed

Before the client removes their conformance and rebuilds, however, their application will exhibit undefined behavior

which I believe refers to, for example, when a user upgrades their operating system and there's a new dynamically linked Foundation. Therefore I agree that this seems more applicable to dynamically linked libraries. Has a solution been considered where the library's conformance "wins" within the library if it exists, but if it does not exist the user's conformance wins within the library, while from their app outside the lib their own conformance always wins, at least until recompilation? Also, in the iOS world at least, if an app hasn't been compiled yet for a target OS, doesn't the app operate in some sort of compatibility mode for the older OS it was targeting, or is the intent here to give Apple space to remove that mode?

4 Likes

Any situation where you have multiple conformances sitting around in the same executable can cause issues. For instance, which conformance "wins" in the following scenarios?

// SomeKit
public protocol P {
  var x: Int { get }
}
public struct S: P {
  public var x: Int { 0 }
}

public func useP<T: P>(_ t: T) {
  print(t.x)
}

// MyApp
import SomeKit

extension S: P {
  public var x: Int { 1 }
}

useP(S()) // what does this print?

func callUseP<T: P>(_ t: T) {
  useP(t)
}

callUseP(S()) // what about this?

There is some ability for libraries (and presumably, the runtime) to change behavior based on the version of the SDK that was compiled against, but it's my understanding that using this for protocol conformances (if it's even technically possible, which I can't speak to confidently) would present some pretty significant performance downsides at the very least. When, say, SomeKit is upgraded to include the S: P conformance, the compiler would have to avoid all specialization of the P.x requirement within SomeKit for all clients on the off chance that it ends up linked against a client that introduced a retroactive conformance. So you'd lose out on any ability of the compiler to avoid dynamic dispatch even when it 'knows' the implementation to call under a "retroactive conformances aren't allowed" regime.

2 Likes

Any situation where you have multiple conformances sitting around in the same executable can cause issues.

But this is in the proposal:

if Foundation decides to add this conformance in a later revision, this client will fail to build

I thought ^ implies that this explicitly doesn't occur in statically linked scenarios because "this client will fail to build"?

Granted I didn't think very far through what I wrote for that option, in that scenario I'd expect useP to always use SomeKit's implementation, however if the client is doing anything impure in their extension it will completely break their expectations.

Thanks for thinking through that, that's obviously not good.

1 Like

Yes, sorry if I was unclear there. I was only trying to illustrate why, in the dynamically-linked case, a rule like the one you suggested ("library conformance wins in the library, client conformance wins in the client") doesn't solve the fundamental soundness hole. Assuming the SomeKit S: P conformance didn't exist at the compilation time of MyApp, the compiler has to use the MyApp conformance, even in situations where the conformance is being vended back to SomeKit.

4 Likes

Overall +1 for the direction of this proposal, but I hope this will cause an error, not just a warning, in some future version of Swift.

This is based on my experience with a test bundle, which declared a conformance overlapping with another one in the tested library. Not being able to predict which conformance is going to be used causes issues on its own, but it's even worse when you're not prepared for it or not aware of it. The warning will help, but ideally I'd like to avoid stumbling upon this problem altogether.

1 Like

Yes, that is what I would expect as a developer. If SomeKit exposes a generic function that expects something conforming to P (from your example), then even if the new version of SomeKit has an extension making that same type conform to P, in situations where the generic constraint was evaluated at compile time of MyApp it should use the MyApp implementation. But I get that this might still have some possible issues if, inside the dynamic lib, there is some comparison to locally generated values where the type has been fully type-erased into Any and then cast back.

So that is why I feel a better solution would be for Swift to instead constrain public protocols emitted by dynamic libs, adding something similar to frozen; this would then require linking checks if the protocol is used in conformances on other public types.

This would add complexity for developers building ABI-stable SDKs, but that feels like the correct pattern; after all, that is what Swift has opted to do over the years in most other situations where a dylib change would otherwise limit the language's usefulness.

I don't feel this would be too much of a burden on a dylib, as in most cases where they might be adding a conformance of a public type to a public protocol, this will be for a new public API endpoint. And if it is truly for some internal use, they can always write an internal wrapper type that they extend with a conformance to the public protocol.

eg.

// SwiftCharts

// This is then exposed to developers that compile against the new version of the lib
@linkedVersion(after: 2.0)
extension Measurement: Plottable where UnitType == UnitDuration {...}

// If SwiftCharts needs to use this internally
// in places where they do not want to deal with
// linked-version checking, use a wrapper type.

private struct PlottableMeasurement<UnitType: Unit> {
    var measurement: Measurement<UnitType>
}

extension PlottableMeasurement: Plottable where UnitType == UnitDuration {...}

Needing to create wrapper types (and private protocols as well) is very common if you build a dylib, as your exposed concrete types might well need to be frozen; the wrapper gives you the ability to easily add additional data needed by new code paths without breaking the frozen promise.

1 Like

Swift on any platform does not currently have any solutions for this issue, other than just not adding new conformances to existing types.

I'm also too late to the discussion here, but when I read this proposal I immediately thought that I would receive warnings in my own code and would simply opt to "silence this warning by explicitly module-qualifying both of the types in question" in all cases. I already only extend an existing type with a protocol conformance if I really need to. And if I do, I expect my compiled code to work correctly forever. Of course, it's fine if my code fails to compile because a conformance was added in the standard library; then I can adjust my code. But compiled code from the past should not change behavior just because the iPhone user has upgraded to a newer version of iOS.

For example, if I understand the proposal correctly I would get a warning here, right?

extension Binding: Equatable where Value: Equatable {
   public static func == (left: Binding<Value>, right: Binding<Value>) -> Bool {
      left.wrappedValue == right.wrappedValue
   }
}

And the workaround to get rid of the warning would be to replace the first line with:

extension SwiftUI.Binding: Swift.Equatable where Value: Swift.Equatable {

The reason I need to conform Binding to Equatable, by the way, is that I need a Binding value in my view's state, and I'm using TCA, which requires state to be Equatable.

To me personally, and I fear this will be true for other app developers as well, the warning suggested here simply causes confusion and lets me wonder why the compiler can't simply prefer my own conformance to the protocol and handle it like an "override", which we already have for subclasses in Swift. The later-added Swift standard library conformance would then behave like a "superclass" for code that was compiled with an older version of the system, and my own conformance would be the "subclass overriding the parent's behavior" in that case.

I'm no compiler expert, and my best guess to explain why this is not already the current behavior is that it would introduce a performance or complexity issue. But without knowing the complexities behind it, just being asked what behavior I would expect: no warning, definitely not an error, but just Swift doing what I expect, which is that the code I tested with the version of Swift and the libs I developed my app with works exactly the same in future versions of the system as at the time I tested it. To me, this is the very definition of backwards compatibility.

If what I wish for really is not possible, I would personally prefer to get a warning after the conformance was introduced to the standard library, delivered via the App Store and via e-mail, telling me that a conformance I provide clashes with a newly introduced conformance in the system and suggesting that I re-compile my app with a newer Xcode version, where I need to provide a proper adjustment because my code fails to build with the latest system as a target. The reason I prefer this is simply that (as already stated) I would ignore the warning suggested here as an app developer and use the workaround at all times, making it a pure annoyance without any effect on my behavior.

Maybe I'm not understanding the full scale of this, but with my current understanding of the proposal, I'm against it, as it doesn't really help, at least from the perspective of an app developer.

1 Like