Ergonomics: generic types conforming "in more than one way"

Even if this is implementable (which it is not, as per Joe's comment), it is extremely undesirable because it violates the open-world assumption. This manifests in terms of evolving libraries: making a type conform to a protocol it did not conform to before should be an additive change without the risk of breaking downstream clients. I don't know if there are already ways in the language to cause that kind of breakage, but adding such a feature would make it incredibly easy.

I get this, but there have been times when I've wanted to do it in contexts where open-world doesn't apply, such as in app-level code (which can have no downstream dependencies), or in a wholly private implementation within a given module. Not all code is public library API.

Yeah, I view that attribute as an important piece of the more general “what requirement am I implementing?” feature I've been alluding to (but don't yet have a design for). It would be a good step in the direction of having all the capabilities.

You mean the compiler would have to pick? Or the code would have to pick? IIUC we don't have a way of doing the latter, so we must be getting the former… which is sorta my point.

Yeah, it's akin to generating specializations. It sounds like you're saying X<Y>:P and X<Z>:P are using the same :P witness table today? My mental model was always that a new witness table was generated for each distinct parameter to X. I have held that model because I know we are dynamically generating witness tables in at least some cases…

I had in mind that rather than “passing the conditional conformance down”, we could “look it up” at the point where the witness table was created. But I am definitely not super aware of ABI details; if you told me that was impossible I'd probably have to take your word for it.

I think there are two prongs to this discussion:

  1. What semantic limitations are actually inescapable, both in the short and long term?
  2. How do we design language features that allow programmers to be clear about the meaning of generic code, within those limitations?

I'm poking at #1 here in the hopes of informing the shape of #2. Thanks for your help so far!

Yes, it's been shown to lead to exponential typechecking complexity (which we have enough of in Swift already!). IIRC the original paper is here.

There are already many ways to cause that to happen in Swift. Here's one I whipped up in a few minutes; just add a conformance of X to P. I don't know what you mean by “breaking”; this one causes a compilation error, but causing a semantic change is even easier.

protocol P {}
extension P {
  func foo(_: Int?) {  }
}
protocol Q {}
extension Q {
  func foo(_: String?) {  }
}
struct X : Q {}

// Adding this conformance later is an "additive" change upstream…
// extension X : P {}

// ============== Downstream ============

// …but with that conformance in place, this call becomes ambiguous between
// P.foo(_: Int?) and Q.foo(_: String?), and it no longer compiles.
X().foo(nil)

Thanks! I may do so, but if so I'll start a separate thread and cc you, since this is about the language design.

Sure, thanks for the small example. That said, I think you'll agree that there's a difference between the likelihood of accidental name collision versus a language feature that is hard to use without creating the same problem. :slight_smile:

I understand the sentiment of wanting to express something and being willing to accept whatever limitations come with it. However, I think there are many drawbacks to adding a language feature that expands the functionality of protocols but can only be used in an app context. Protocols are used extensively throughout the Swift ecosystem, be it apps, frameworks, or SwiftPM packages. If we add a language feature that you can only use in an app, frameworks and SwiftPM packages cannot make use of it, and it also prevents people from lifting common code out of their apps into a shared framework/package.

There is a less ergonomic solution today that you can use -- enums. Enums precisely model a closed world, although changing code that relies on protocol-based patterns to use an enum is not straightforward.
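For illustration, here is a rough sketch of what the enum approach looks like (the Value type and its cases are invented for this example): the closed set of participants is spelled out in one place, and behavior that would otherwise come from constrained conformances becomes an exhaustive switch.

// Purely illustrative: an enum fixes the closed set of cases at the
// declaration, so the "is it Equatable?" question can be answered
// exhaustively rather than through conformance lookup.
enum Value {
    case equatableInts([Int])
    case closures([(Int) -> Int])

    var isEquatable: Bool {
        switch self {
        case .equatableInts: return true
        case .closures: return false
        }
    }
}

let v = Value.equatableInts([1, 2, 3])
print(v.isEquatable) // true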

I’m too tired to have really thought this through, but it seems like that wouldn’t be an issue if the compiler has access to the library’s/package’s source code. If they’re statically linked, anyway... not sure about dynamic linking. Either way, though, I can’t really think of a scenario where you wouldn’t have access to your own library’s source code, and I think most SwiftPM packages are hosted on GitHub.

Indeed, and as you say, this is less ergonomic. For one thing, it requires a kind of "guts-out" design (in that the calling code is responsible for "tagging" the value) that sets my teeth on edge. Encapsulation and information hiding aren't just good for public module API; in a large project, they can be essential in organizing internal code.

Alas, Swift doesn't have the best story in this arena, as I've mentioned before :pensive:

Right, we only have compiler-automated picking today, but we ought to also have a mechanism by which a conformance could be specified directly. This is why we have a lot of the paranoid restrictions around conformances today, like "no overlapping conformances" and "no private conformances": allowing those things would make it easier to end up in situations where the compiler can't make a consistent automated choice, and we'd need "named" conformances or something similar to let source code direct it.

Overlapping conformances might get you somewhat closer to what you're looking for, since it would let you describe different conformances for different constraints:

protocol P {
    static var isEquatable: Bool { get }
}

extension P {
    static var isEquatable: Bool { false }
}

extension P where Self : Equatable {
    static var isEquatable: Bool { true }
}

// strawman syntax `named <identifier>` to name a conformance
extension Array: P named AnyArrayP {}

extension Array: P named EquatableArrayP where Element: Equatable {}

func foo<T: P>(x: T) { print(x.isEquatable) }

// strawman syntax `using <identifier>` to pick a specific conformance
foo(x: [1, 2, 3] using AnyArrayP) // prints false
foo(x: [1, 2, 3] using EquatableArrayP) // prints true

but that still has the issue where, in a generic context where you have a T without an Equatable constraint, you could only pick the AnyArrayP conformance.
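Concretely (continuing the strawman above; bar is just a made-up generic function):

func bar<T: P>(x: [T]) {
    foo(x: x using AnyArrayP)          // fine: no extra constraints needed
    // foo(x: x using EquatableArrayP) // error: T is not known to be Equatable
}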

Given a conformance, the set of witnesses it uses is the same across all generic arguments. We effectively look up the witnesses in the context of the place where the X: P conformance is declared, based on its set of generic constraints. We only generate witness tables to instantiate different associated types, or to handle protocol resilience when an ABI-stable library introduces new protocol requirements with default implementations that need to be injected into existing binaries.
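To make that concrete, here is a minimal sketch in present-day Swift (the protocol and names are just for illustration). Because the Array: P conformance below is declared without any Equatable constraint, the witness for isEquatable is resolved, at that declaration, to the unconstrained default, and every Array<Element> gets the same answer through a generic context:

protocol P { static var isEquatable: Bool { get } }

extension P { static var isEquatable: Bool { false } }
extension P where Self: Equatable { static var isEquatable: Bool { true } }

// The conformance carries no Equatable constraint, so the unconstrained
// default is chosen as the witness for isEquatable, once, right here.
extension Array: P {}

func check<T: P>(_: T.Type) -> Bool { T.isEquatable }

print(check(Array<Int>.self))          // false
print(check(Array<(Int) -> Int>.self)) // false: same witness for every Element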

The main problem with "looking it up" is that there isn't a good way to guarantee a good answer because of the possibility of multiple conformances, and the ability for dynamic code loading to change the behavior of lookup at runtime. All of our generics features thus far avoid depending on global lookup.

One possibility might be to add formal optional constraints to the language, so that you can write P as:

protocol P where Self ?: Equatable { ... }

This would give the type system enough information to try to collect an Equatable conformance up front when forming a P conformance, and plumb that information through witness table instantiation so that we know it's dependent on Equatable conformance.
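As a strawman continuation of that idea (this is not real Swift; the ?: spelling and its behavior are hypothetical), forming the conformance could then record whether the optional constraint is satisfied:

// Hypothetical: the up-front optional constraint tells the type system
// that a P conformance may depend on an Equatable conformance.
protocol P where Self ?: Equatable {
    static var isEquatable: Bool { get }
}
extension P { static var isEquatable: Bool { false } }
extension P where Self: Equatable { static var isEquatable: Bool { true } }

extension Array: P {}

// Hypothetically, witness table instantiation could then use the collected
// Equatable conformance when one exists:
//   Array<Int> would report isEquatable == true
//   Array<(Int) -> Int> would report isEquatable == false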

As far as #2 goes, I think the design of traits and type classes in Rust, Haskell, and other languages would be informative. In those languages, default implementations are explicitly declared as such, and protocol conformances are established in dedicated declarations. This makes it possible to diagnose up front when declarations fail to fulfill their roles. We've been using warnings to softly nudge users in the direction of using one-extension-per-conformance, which gives better near-miss diagnostics, but we don't currently really have a good way of guaranteeing good diagnostics in the face of potentially surprising behavior.

I made an example a couple of years ago demonstrating how conflicting conformances are able to slip past the compiler. The code doesn't seem to build anymore; instead, it fails with a linker error. Do you know if that linker error might be related to the conflicting conformances?

@dabrahams you may find it interesting to look at the example: GitHub - anandabits/Incoherence: An example demonstrating the "incoherent instances" problem in Swift.

Probably a bug relating to them, yeah.

Quite. I am no fan of negated or disjunctive constraints.

And it requires explicit disambiguation at the use site. I'm trying to allow declaration sites to be more explicit/clear/expressive about what's going to happen at use sites that have no explicit interventions.

[Just for my general edification, could you give an example of the former?]

Regardless of where it happens today, it seems like, if we can afford to generate witnesses dynamically for the purposes of ABI-stable library evolution, we can probably also afford to do it (if necessary) to enable other capabilities of ABI-stable libraries. Of course I'd rather avoid it…

I've always taken a “scoped concept map” view of how conformance lookup should work, in which what gets “passed down” is the scope where the generic parameters are bound to concrete types, and conformances are “looked up” in that scope. That model, AFAICT, prevents dynamic loading from having any effect on the semantics of generics.

I'd like to hear more about what problem you see with multiple conformances, and whether or not that could be addressed the same way.

Now, that begins to get at what I'm aiming for! It seems as though there's something about the complete independence of the conformance declarations that, at least in this case, is at odds with the intention of the author of P (uh, me). I was thinking of that code as a “system” when I wrote it, but it doesn't quite act that way. Syntactically, maybe it would make sense to have a way to group these conformances together so they can be considered as one, which I presume would make the ?: Equatable thing unnecessary as it could be deduced from the other conformances in the group. Another approach would be to grant special status to file or module boundaries, gather all the conformances within them, and consider them together.

Yeah, that's a good point; I'll do some digging. You're a polyglot, Joe; any other specific recommendations of languages to look at?

That sounds similar to how Swift protocol conformances try to work, by capturing concept/protocol conformance information from their environment at instantiation time. It seems like, with the "scoped concept map" idea, you would still have the issue that, absent an up-front declaration of the relationship between P and Equatable, you'd be required to grab information "out of scope" in order to fulfill the conditional witness for isEquatable. Having a way of declaring up front the conditional aspects of P conformance with ?: or something similar might be enough to fix that problem.

I think Rust's model is probably the closest to Swift, though they do allow specialization as an experimental feature (with the drawbacks that that does rely on global coherence of conformances and whole-program information, which are not really options for Swift's compilation model). I think the way Rust handles impl conformance declarations and default implementations is a good model, though.

Could you show an example of what you mean here, Joe? It seems entirely reasonable to me that, to keep the last assertion from firing in my example, the extensions of P and the conditional conformance of Array to Equatable would all have to be visible in the scope where Array<Int> was bound to a P-constrained generic parameter, i.e. at the point of the last isEquatable call. It is the context of that scope in which I expect the lookups to happen.

Having a way of declaring up front the conditional aspects of P conformance with ?: or something similar might be enough to fix that problem.

Not having the up-front declaration is obviously more work for the compiler, but it doesn't seem like a requirement. I'm not dead-set against doing that up-front declaration, but unless I'm missing something important, it's something we should decide to require or not based on what makes the best language design for users.

I think Rust's model is probably the closest to Swift, though they do allow specialization as an experimental feature (with the drawbacks that that does rely on global coherence of conformances and whole-program information, which are not really options for Swift's compilation model). I think the way Rust handles impl conformance declarations and default implementations is a good model, though.

I'll spend some time playing with that and see what I can learn, thanks.

Thanks, I took a look. I've always assumed that because Swift tries to make protocol conformance “truly, semantically, dynamic,” it would have to (at best) pick an arbitrary conformance in some situations… But I created a simpler example so I could do some experimentation, which seems to demonstrate that it's much worse than that, and that this part of the compiler is nondeterministic, if not insane.

We are well aware that the implementation is not very robust in the face of multiple conformances, because there are many places in the compiler where it ought to carry conformance information forward but gives up and does global lookup again. The intended semantics should be that you get the one conformance that's visible at the point of use (or else an ambiguity if there are more than one), and that generics with protocol requirements which get instantiated with different conformances end up instantiating different types.

That's what I've always thought the intended semantics should be. It's never been clear to me that anyone else—particularly those putting intentions into code—had the same semantics in mind, though. And absent an implementation, there doesn't seem to be any mechanism for establishing an official consensus.