SE-0364: Warning for Retroactive Conformances of External Types

Error does have requirements, but they're hidden because of the leading underscores.

2 Likes

Frustratingly, for Error in particular, even a single, unambiguous conformance on the wrong type will ruin someone’s day. We don’t even need to have two conflicting declarations — one is quite enough.

There is a significant amount of code out there that is relying on T is Error returning false for specific types (esp. types in the stdlib or one of the platform SDKs). This makes innocent-looking declarations such as extension String: Error {} both source- and ABI-breaking, even if they only occur once.

Retroactively conforming types to Error is not a wise thing to do. (To be clear, neither is relying on T is Error returning false, but here we are.)
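To make the hazard concrete, here is a minimal sketch (with a hypothetical `describe` function) of the kind of code that breaks: a runtime branch on `is Error` for a standard-library type.

```swift
// Hypothetical code that branches on whether a value is an Error.
func describe(_ value: Any) -> String {
    value is Error ? "an error" : "a plain value"
}

// Before any retroactive conformance exists, describe("oops") returns
// "a plain value". The single declaration below flips that result for
// every String in the entire process, with no diagnostic at the call site.
extension String: Error {}

print(describe("oops")) // "an error"
```

The point is that the behavior change is global and silent: the branch in `describe` never mentions `String`, yet its result depends on whether any module in the process declared the conformance.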

I would very much like the new warning to trigger on conformances to Error.

8 Likes

In my view, being able to extend types imported from one package with protocols imported from another is a key feature of Swift. In the code I have seen and the code I have written this is very common and very useful. Without it we would be forced to constantly write lots and lots of wrapper types that would themselves possibly need to conform to other protocols. The general effect would be that most libraries would be strongly discouraged from exporting any concrete types, as for most users these would be useless.

For this reason I feel Swift should at least provide some way for library developers to explicitly expose protocols that they promise can be used in this way (e.g. protocols they declare publicly for users but never use in extensions on types they publicly declare), just as we have frozen types that provide a contract from the library developer so that clients can safely assume things will not change.

Furthermore, I wonder if something could be added to the runtime so that in the situation where the library has added a conformance (without the library consumer having been re-compiled), the compiler is able to track which implementation it can safely use.

Not sure if it was mentioned in the proposal, but this should also only apply to frameworks you link to dynamically.

4 Likes

Sorry I know we're past review but I just saw this and I haven't seen many counter-points.

Similar to what @hishnash said, as an application developer in Swift I consider this a feature. In terms of source compatibility, in the 7 years now of managing a codebase with multiple hundreds of thousands of lines of Swift I cannot think of even 1 single time anyone on our team cared that we had to spend a couple days refactoring some code because something changed in either a dependency or in Swift itself. Usually these refactors are because there's an improvement, and that improvement garners excitement for that work even if it's tedious. Source compatibility from my "application developer" view is mostly a non-issue. Granted this also implies that changing retroactive conformances to using "module-qualifying" paths is also a non-issue.

But what will be an issue is if this becomes impossible to do in Swift without wrappers. I also use Rust in my work daily and there are plenty of things Swift can do I wish Rust could, and retroactive conformance is near the very top of the list. When there are crates that expose traits and then miss internal types which likely should conform to those traits I'm forced to make these wrapper types and pollute my code with them, they are incredibly annoying and noisy.
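A minimal sketch of the wrapper-type noise described above, using hypothetical names: `Plottable` stands in for a library protocol and `Meters` for a library concrete type that the library did not conform.

```swift
// Stand-ins for declarations that would really come from a dependency.
protocol Plottable { var plotValue: Double { get } }
struct Meters { var value: Double }

// Without retroactive conformance, every call site must wrap and unwrap
// through a newtype-style adapter:
struct PlottableMeters: Plottable {
    var wrapped: Meters
    var plotValue: Double { wrapped.value }
}

let readings = [Meters(value: 1.5), Meters(value: 3.0)].map(PlottableMeters.init)
print(readings.map(\.plotValue)) // [1.5, 3.0]
```

With retroactive conformance, a single `extension Meters: Plottable { ... }` would make the wrapper and all the `.map(PlottableMeters.init)` noise unnecessary.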

The ideal scenario imo is if a retroactive conformance does not yet exist then the user has a way to conform, and if it suddenly does exist then the user is forced to clean up their code, which, correct me if I'm wrong, will happen as soon as the developer recompiles anyway. Since some individuals in this forum wish to never have this capability / make it an error, I think the proposal as written, with a warning and an escape hatch, is an ok compromise +1, though I do not wish this to be pushed any further, at least for app development.

Edit: After reading the proposal a second time I noticed

Before the client removes their conformance and rebuilds, however, their application will exhibit undefined behavior

which I believe is referring to, for example, when a user upgrades their operating system and there's a new dynamically linked Foundation. Therefore I agree that this seems more applicable to dynamically linked libraries. Has a solution been considered where the library's conformance "wins" within the library if it exists, but if it does not exist the user's conformance wins within the library, and from their app outside the lib their own conformance wins? At least until recompilation. Also, in the iOS world at least, if an app hasn't yet been compiled for a target OS, doesn't the app operate in a compatibility mode of some sort for the older OS it was targeting, or is the intent here to give space to Apple to remove that mode?

4 Likes

Any situation where you have multiple conformances sitting around in the same executable can cause issues. For instance, which conformance "wins" in the following scenarios?

// SomeKit
public protocol P {
  var x: Int { get }
}
public struct S: P {
  // Witness must be public to satisfy a public protocol requirement.
  public var x: Int { 0 }
}

public func useP<T: P>(_ t: T) {
  print(t.x)
}

// MyApp
import SomeKit

// Written when SomeKit did not yet declare S: P;
// a later version of SomeKit added its own conformance.
extension S: P {
  var x: Int { 1 }
}

useP(S()) // what does this print?

func callUseP<T: P>(_ t: T) {
  useP(t)
}

callUseP(S()) // what about this?

There is some ability for libraries (and presumably, the runtime) to change behavior based on the version of the SDK that was compiled against, but it's my understanding that using this for protocol conformances (if it's even technically possible, which I can't speak to confidently) would present some pretty significant performance downsides at the very least. When, say, SomeKit is upgraded to include the S: P conformance, the compiler would have to avoid all specialization of the P.x requirement within SomeKit for all clients on the off chance that it ends up linked against a client that introduced a retroactive conformance. So you'd lose out on any ability of the compiler to avoid dynamic dispatch even when it 'knows' the implementation to call under a "retroactive conformances aren't allowed" regime.

2 Likes

Any situation where you have multiple conformances sitting around in the same executable can cause issues.

But this is in the proposal:

if Foundation decides to add this conformance in a later revision, this client will fail to build

I thought ^ implies that this explicitly doesn't occur in statically linked scenarios because "this client will fail to build"?

Granted, I didn't think very far through what I wrote for that option. In that scenario I'd expect useP to always use SomeKit's implementation; however, if the client is doing anything impure in their extension it will completely break their expectations.

Thanks for thinking through that, that's obviously not good.

1 Like

Yes, sorry if I was unclear there. I was only trying to illustrate why, in the dynamically-linked case, a rule like the one you suggested ("library conformance wins in the library, client conformance wins in the client") doesn't solve the fundamental soundness hole. Assuming the SomeKit S: P conformance didn't exist at the compilation time of MyApp, the compiler has to use the MyApp conformance, even in situations where the conformance is being vended back to SomeKit.

4 Likes

Overall +1 for the direction of this proposal, but I hope this will cause an error, not just a warning, in some future version of Swift.

This is based on my experience with a test bundle, which declared a conformance overlapping with another one in the tested library. Not being able to predict which conformance is going to be used causes issues on its own, but it's even worse when you're not prepared for it or not aware of it. The warning will help, but ideally I'd like to avoid stumbling upon this problem altogether.

1 Like

Yes, that is what I would expect as a developer. If SomeKit exposes a generic function that expects something conforming to P (from your example), then even if the new version of SomeKit has an extension making that same type conform to P, in situations where the generic constraint was evaluated at compile time of MyApp it should use the MyApp implementation. But I get that this might still have some possible issues if, inside the dynamic lib, there is some comparison to locally generated values where the type has been fully type-erased into Any and then cast back.

So that is why I feel a better solution might be for Swift to constrain public protocols emitted by dynamic libs, adding something similar to frozen, which would then require linking checks if the protocol is used in conformances on other public types.

This would add complexity for developers building ABI-stable SDKs, but that feels like the correct pattern; after all, that is what Swift has opted to do over the years in most other situations where a dylib change would otherwise limit the language's usefulness.

I don't feel this would be too much of a burden on a dylib, as in most cases where it might be adding a conformance of a public type to a public protocol, this will be for a new public API endpoint. And if the conformance is truly for some internal use, the dylib can always declare an internal wrapper type and extend that with the conformance to the public protocol.

eg.

// SwiftCharts

// This is then exposed to developers that compile against the new version of the lib
@linkedVersion(after: 2.0) // hypothetical attribute, not an existing Swift feature
extension Measurement: Plottable where UnitType == UnitDuration {...}

// If SwiftCharts needs to use this internally
// in places where it does not want to deal with
// linked-version checking, use a wrapper type.

private struct PlottableMeasurement<UnitType: Unit> {
    var measurement: Measurement<UnitType>
}

extension PlottableMeasurement: Plottable where UnitType == UnitDuration {...}

Needing to create wrapper types (and private protocols as well) is very common if you build a dylib, as your exposed concrete types might well need to be frozen; the wrapper gives you the ability to easily add additional data needed by new code paths without breaking the frozen promise.

1 Like

Swift on any platform does not currently have any solutions for this issue, other than just not adding new conformances to existing types.

I'm also too late to the discussion here, but when I just read this proposal I immediately thought that I will receive warnings in my own code and I will simply opt to "silence this warning by explicitly module-qualifying both of the types in question" in all of the cases. Because I already only extend an existing type with a protocol conformance if I really need to. And if I do, I expect my compiled code to work correctly forever. Of course, it's fine if my code fails to compile because a conformance was added in the standard library. Then I can adjust my code. But compiled code from the past should not change behavior just because the iPhone user has upgraded to a newer version of iOS.

For example, if I understand the proposal correctly I would get a warning here, right?

extension Binding: Equatable where Value: Equatable {
   public static func == (left: Binding<Value>, right: Binding<Value>) -> Bool {
      left.wrappedValue == right.wrappedValue
   }
}

And the workaround to get rid of the warning would be to replace the first line with:

extension SwiftUI.Binding: Swift.Equatable where Value: Swift.Equatable {

The reason I need to conform Binding to Equatable, by the way, is that I need a Binding value in my view's state, and I'm using TCA, which requires state to be Equatable.

To me personally, and I fear this will be true for other app developers as well, the warning suggested here simply causes confusion and makes me wonder why the compiler can't simply prefer my own conformance to the protocol and handle it like the "override" we already have for subclasses in Swift. The later-added Swift standard library conformance would then behave like a "superclass" for code that was compiled with an older version of the system, and my own conformance would be the "subclass" overriding the parent's behavior in that case.

I'm no compiler expert, and my best guess to explain why this is not already the current behavior is that it would introduce a performance or complexity issue. But without knowing the complexities behind it, just being asked what behavior I would expect: no warning, definitely not an error, but just Swift doing what I expect – which is that the code I tested with the version of Swift and the libs I developed my app with works exactly the same in future versions of the system as it did at the time I tested it. To me, this is the very definition of backwards compatibility.

If what I wish for really is not possible, I would personally prefer to get a warning after the conformance was introduced to the standard library, on the App Store and via email, telling me that a conformance I provide clashes with a newly introduced conformance in the system and suggesting I re-compile my app with a newer Xcode version, where I would need to provide a proper adjustment because my code fails to build with the latest system as a target. The reason I prefer this is simply that (as already stated) I would ignore the warning suggested here as an app developer and use the workaround at all times, making this warning a pure annoyance without any effect on my behavior.

Maybe I'm not understanding the full scale of this, but with my current understanding of the proposal, I'm against it, as it doesn't really help, at least from the perspective of an app developer.

1 Like

Unfortunately this choice would also cause your code to break on OS upgrades, because internal uses of the new conformance inside the OS frameworks would suddenly be calling your code instead (since it overrode the conformance), and your implementation of the protocol might behave differently.

2 Likes

But how can my overriding function in my own module affect any module that doesn't import my module? The standard library shouldn't have access to my module; that would even create a circular dependency, from my understanding.

Thus I would expect this overriding behavior only to happen where I explicitly import my module or within the module itself, of course.

That's how dynamic dispatch on protocols works. There's a single table with the conformances, and the runtime looks up what method to call from that table.
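A single-module approximation (with hypothetical names) of why the table is global: a dynamic cast consults the process-wide conformance table, so code that never mentions the conforming extension still sees it.

```swift
protocol Greeter { func greet() -> String }
struct Stranger {}

// This function knows nothing about any Greeter conformance of Stranger;
// the `as?` cast performs a runtime lookup in the global conformance table.
func greetIfPossible(_ value: Any) -> String {
    (value as? Greeter)?.greet() ?? "no conformance"
}

// Declared "elsewhere" – in a real program this could live in a module
// that greetIfPossible's module never imports.
extension Stranger: Greeter {
    func greet() -> String { "hello" }
}

print(greetIfPossible(Stranger())) // "hello"
```

In the cross-module case the same mechanism applies: there is no per-module table to consult, which is why a client's retroactive conformance is visible to a framework that never imported the client.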

2 Likes

And that’s what I’m suggesting to change for apps that were built against an older SDK then, if possible only for protocols that were changed since the version of the SDK the app was built against. Shouldn’t affect performance too much since there are not so many such changes and the app developer can get back to 100% performance by simply building again with latest Xcode. Wouldn’t this be the best way of backwards compatibility here?

Could you be concrete about what you're suggesting here? What does the protocol conformance structure look like in memory in your scheme?

I am a compiler noob, I am only giving my opinions and expectations from a Swift users perspective. I have no idea if my suggestions are feasible from a performance or complexity perspective as already stated. I totally understand if they aren’t and that’s why things end up differently than I expect. But I imagine it’s still somewhat helpful to hear the user-only side here.

Unfortunately it's not even a performance or complexity thing, it's a "we actually don't know how to make it work like that" thing.

The idea I've liked best so far is to have extensions on types from other modules introduce a local type by the same name that forwards everything to the original type except for conformances. But that's not source compatible, and if not done very carefully could be incredibly confusing. Like, imagine extending String to conform to Error (which lots of people have done), and then trying to throw a String you got from a system framework, and it mysteriously doesn't work because it's the system's String which doesn't have your conformance.

Another idea that's been proposed is adding a new "scoped conformances" feature, but I don't think anyone has gotten far enough to have a concrete design for how the runtime would behave. And even then, I can imagine it being confusing: would a Dictionary with Foo keys work if Foo's Hashable conformance was scoped to just your module? (since Dictionary is a type not from your module)

It's possible we'll eventually find a solution for this, but it's not just a matter of perf or complexity tradeoffs.

7 Likes

Thank you everyone for your participation. The review has concluded, and the proposal has been returned for revision.

1 Like