Special Case Protocol Conformance: Long-term Tradeoffs for Near-term Conformance

Recently, @Alejandro launched the Tuples Conform to Equatable pitch, along with a proposed implementation.

This thread is for discussing the long-term tradeoffs that may result from the near-term adoption of built-in, special-case automatic protocol conformance.


In short, the original pitch was to create a special-case, compiler-level mechanism to generate Equatable conformance for all tuples whose elements themselves conform to Equatable. The conformance would be automatic, with no means to opt in or opt out.

There exists a fairly widespread and long-standing desire to enable protocol conformance for tuples and perhaps other structural types, especially with respect to ubiquitous protocols like Equatable, Comparable, Hashable and Codable. Presently, we are not in a position to facilitate custom protocol conformance as a general feature for tuples or other structural types. It is anticipated that that sort of feature eventually will be created.

It is apparent that there would be some long-term implications to creating a near-term, special-case feature ("special case behavior"). For instance, when we gain the ability to provide custom protocol conformance for tuples, how will we migrate away from the special case behavior, and will there be conflict? More generally, will adopting this sort of special case behavior tend to constrain future lines of evolution?

Expansion of Proposal

The discussion in the original pitch thread led to the Core Team, through @Joe_Groff, expressing the opinion:

We agree that the lack of these conformances is a significant pain point in the language and that it's worth considering introducing special case behavior until more general mechanisms for extending structural types are available.

As part of that same post, the Core Team indicated the desire to expand the proposal to include Equatable, Comparable and Hashable, suggested that Codable should be explored, and suggested that Hashable also be discussed as a special-case conformance for other structural types.

Call for Discussion of Tradeoffs and Long-term Liabilities

The Core Team post concluded with a request for the sort of discussion contemplated by this thread (emphasis added):

In the ensuing discussion, we'd like to see attention paid to the long-term compatibility tradeoff of special-casing these conformances today, vs. what future language directions such as variadic generics may enable. If we accept any of these proposals, we would be committing to preserving compatibility with the source-level behavior, and on ABI-stable platforms, any runtime behavior and ABI requirements this introduces, even when more general mechanisms get introduced in the future. It would be good to explore and enumerate what these long-term liabilities are so that we know what we would be committing to.

The original thread included some preliminary discussion of the possible long-term implications and tradeoffs of adopting the special case behavior. This thread is intended to elicit community-wide brainstorming to identify and discuss those potential tradeoffs and implications.

Summary of Points from Original Thread

For convenience, and without intending to limit the scope of discussion in this thread, the following is a summary of some long-term-implication/trade-off points discussed in the original thread. Please pardon me if I inadvertently have omitted any contributions from the original thread.

In the pitch itself, @Alejandro noted the eventual divergence between the special case behavior and generalized protocol conformance for tuples:

In the future when we have proper tuple extensions along with variadic generics and such, implementing Equatable for tuples will be trivial and I imagine the standard library will come with a conformance for tuples. When that happens all future usage of that conformance will use the standard library's implementation, but older clients that have been compiled with this implementation will continue using it as normal.

In the ensuing discussion, @Alejandro cautioned against allowing work on special case behavior to slow progress towards generalized protocol conformance for tuples:

It's also important to state that we shouldn't wait for all of these magical conformances before working on proper implementations. This proposal does, however, allow newer conformances to be added fairly easily, but each conformance should be carefully considered.

@xwu raised the question of special case behavior leading to a potential ABI-break roadblock. The original thread does not contain an answer to, or even much discussion of, the question. Certainly, the question needs to be answered as definitively as possible.

Question—if this is accepted, and a resilient library makes use of magically generated tuple equality, will it be an ABI-breaking change in a future version of Swift to implement a custom conformance when it becomes possible to do so?

In considering the near-term benefits of adopting special case behavior, @DevAndArtist lamented that many users may not be able to benefit from the behavior in the near term:

One thing that makes me sad is that this requires new runtime support, so even if accepted I wouldn't be able to use it for a long long time.

To that same point, @Joe_Groff discussed the possibility of backward deployment to older runtimes:

One important implementation concern, though, is how to deal with backward deployment to older runtimes where tuples do not have any protocol conformances. There are a couple of possibilities I can think of:

  • The tuple-equatable conformance is only available when targeting an OS with Swift 5.2 or later. This wouldn't require any backward deployment hackery, but would likely require us to design and implement the general feature of how conditionally available protocol conformances work in the language.
  • We use the backward compatibility library to backward-deploy this functionality into binaries targeting older OSes. This would allow users to take advantage of the functionality without restrictions*, but would require that we actually have enough hooks in the 5.0 and 5.1 runtimes that can be replaced to understand tuple conformances. In theory, it seems like this is possible, since swift_conformsToProtocol ought to be hookable, and we could provide copies of the runtime data structures for the conformance in the compatibility library, but it would require implementation effort to make sure this is actually possible.

(*And the hooks have limitations of their own—they only take effect when linked into the main binary, not dylibs, so dynamic libraries that want to be distributed for use with older Swift toolchains that don't have the compatibility library would still not be able to take advantage of the new functionality.)

As food for thought about the unexpected ways that evolution might occur, I note that @DevAndArtist renewed his desire to explore treating tuple types as structs, and asked whether the same is possible while maintaining source compatibility:

I mentioned this multiple times in different threads that personally I would find it cool if we had tuples as a struct while the general tuple syntax would become a tuple literal syntax which could also be used for other types such as for the python lib I believe.

Can we once and for all debunk if we could make this happen and keep source compatibility in the future. Obviously we would need variadic generics and at least generic parameter type labels.

struct Tuple<variadic Element>: ExpressibleByTupleLiteral { ... }
extension Tuple: Equatable where Element: Equatable { ... }
extension Tuple: Hashable where Element: Hashable { ... }
extension Tuple: Encoding where Element: Encoding { ... }
extension Tuple: Decoding where Element: Decoding { ... }

This would also allow for single element labeled tuples as we would have an unambiguous syntax (e.g. Tuple<label: Int> ) and allow the average swift user to extend the type with custom protocol where appropriate.

@Tino offered some thoughts about future directions:

Imho adding conformances for tuples is a good idea, but I wish there was a bigger plan what should happen in the long run, instead of allowing some basic protocols in an ad hoc fashion.

  • Will tuple conformance always stay a special-case feature?
  • Will it be limited to protocols that only define operators (and some special cases)?
  • Or are we going to see extension (Int, Int): SomeProtocol in the future?

I can't see any downsides for the last option, as it fits great into my model of tuples as anonymous structs - but I know that for every possible change, you can find someone who opposes ;-)

Thinking about this, I might have found a downside myself ;-): When you allow custom conformances, you can have incompatible implementations - this does not happen when everything is handled by the compiler. But the same problem already exists for other types, so imho that's not a dealbreaker. It's still worse for tuples, though...

Finally, @Joe_Groff broadly observed (emphasis added):

I wouldn't personally expect much contention from the high-order idea that tuples have these protocol conformances, but adding them as a special case does have interactions with future language directions (allowing structural types to conform to protocols, variadics, and such) that need to be considered, ....

And, so, discussion has been invited.


Well I would recommend considering the formulation of some guiding principles.

For example, consider a desired outcome such as Equatable for tuples. By desired I mean some people want it, and few seem to object. It appears a good enough idea to consider, because it solves some current problems, allows some things to be done that couldn't be, etc.

Now consider ways this could be implemented, consider the impact of a specific implementation, and the impact of the change outside its desired effect.

The first principle I suggest examining is this: it is better to implement a feature in the library than in the compiler. This leads to considering why it cannot be implemented in user code, and what would be needed to make that possible.

The second principle is: if it is implemented in the compiler, can it be made sugar, so that it necessarily has no semantic downsides? Sugar is basically a syntax extension, and syntax extensions are always preferable to semantic ones for achieving a goal.

The third principle is: is this the completion of an existing semantic concept that somehow just didn't get implemented yet?

The fourth principle is: if it is an extension, can it be generalised? If so, how would that be done, and how useful is the generalisation? An important point is that if there is a probably useful generalisation, one which requires considerable time, resources and further research, would the special case be subsumed by it? If so, the special case implementation in the compiler is less risky.

The primary guiding principle is that: special cases point directly to design faults. Always. No exceptions. There are never special cases in good designs. Almost always any special case that seems to be required is not the real requirement.

The questions that should be asked are: why can't we apply protocols to tuples in user code? Is there something special about tuples that makes it impossible? Are there other similar cases? What do they have in common? Is it that they're structural types, not nominal types? If so, we should solve the problem in principle for all structural types (meaning that if we special-case a solution in the compiler, we believe it is actually part of a more general improvement that we're not ready to do yet).
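To make the nominal-versus-structural distinction concrete, here is a minimal sketch (the `Point` wrapper is purely illustrative, not part of any proposal): extending a tuple type is rejected by the compiler today, while a nominal wrapper over the same components can be extended, and conformed, freely in user code.

```swift
// Tuples are structural types, so the compiler rejects attempts to
// extend them today. Uncommenting the next line produces an error
// along the lines of "non-nominal type '(Int, Int)' cannot be extended":
// extension (Int, Int): Equatable {}

// By contrast, a nominal wrapper around the same components can be
// extended and conformed freely:
struct Point {
    var x: Int
    var y: Int
}

// Equatable synthesis works for a nominal type declared in user code.
extension Point: Equatable {}

print(Point(x: 1, y: 2) == Point(x: 1, y: 2)) // prints "true"
```

This is exactly the asymmetry the thread is about: the capability exists for nominal types but has no user-accessible analogue for structural ones.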

Or is it because the type is a value type (as opposed to a reference type)?

The key to good language design is the type system, and the key to a good type system is algebra. If you do not have an algebraic model for your type system, it will be complex, full of holes, and generally make extensions and improvements impossible. Things that seem like a good idea at the time should never be implemented if it involves the type system. Opinions have little merit, mathematics is what matters.

The C++ design committee has some design principles. Not that they're good or bad, but they have them. One is: never make a change that reduces performance. Another is: don't break existing code. The principle of least surprise is often cited. I'm not recommending these for Swift, just illustrating how principles can help make design decisions. Another core principle, in selecting what to work on, was espoused by Bjarne: if the people developing the library ran into a roadblock and needed core language support, that was the highest priority. In other words, library design by expert programmers has a higher weight than features being pushed for by users. This is how C++ got partial specialisations.

In turn, language designers should be focused on higher order programming, even if most users will never do any.


I offer the example, below, for purposes of discussion. I am not confident that I am correctly thinking about the matter.

Generally, a type capable of conformance to protocols can be conformed to a given protocol, or not, as the user sees fit. In some cases, it may be useful to use the presence or absence of conformance to a given protocol for gating purposes. As a contrived example (and perhaps poor practice), a user might write code that performs different actions based on whether a value's type conforms to a special case protocol, like Equatable, or not.

struct Sample: Equatable {
    let x: Int, y: Int
}

func process<T>(_ value: T) {
    print("some action")
}

func process<T>(_ value: T) where T: Equatable {
    print("a different action")
}

let a = (x: 1, y: 0)
let b = Sample(x: 1, y: 0)

process(a) // "some action"
process(b) // "a different action"

The proposed special case behavior will result in certain tuple types always conforming to Equatable, whereas currently they always do not conform.

Possible Near-term Issue (off topic)

In the near term, the change seemingly would break source code that relies upon the current behavior of a tuple never conforming to Equatable. But, that is not the topic of this thread.

In the long run, after generalized protocol conformance for tuples becomes available, it will be possible to write code that depends upon whether a tuple type conforms to a given protocol, or not. However, where the given protocol is one that has been bestowed via special case behavior, it will not be possible to write that sort of code. For those protocols, the relevant tuple types will always and forever conform to the protocol, and so cannot be gated in this manner. It appears that Swift would have a permanent edge case surrounding the special case behavior.

As a practical matter, not being able to write this sort of code does not seem like a significant consequence. But, having to work with the ongoing presence of this sort of edge case could be significant in terms of (i) possibly creating technical obstacles when implementing other features that might bump into the edge case, and (ii) certainly creating a burden on documentation and on users to maintain awareness of the edge case.

With that said, it may be possible, now or in the future, to provide some sort of additional special case functionality that allows a tuple type to be opted out of the special case behavior, perhaps via some sort of annotation. But that would compound the burden of special cases, not avoid it.

If or when explicit protocol conformances become available for tuples, the conformances for all tuples to Equatable and other standard library protocols would be vended by the standard library and apply across the board.

As to underlying philosophy, etc., that’s been explored at length elsewhere. Let’s keep the focus of this discussion on the core team’s prompt:

In the ensuing discussion, we'd like to see attention paid to the long-term compatibility tradeoff of special-casing these conformances today, vs. what future language directions such as variadic generics may enable.

Certainly, performing that evaluation with a long-term view is the topic of this thread.

Tuple types are anonymous. Presently, Swift does not have (accessible) functionality within the compiler to formally declare or extend an anonymous type. We hope to have that functionality in the future, but, for now, we don't. So, we consider whether to work around that limitation by implementing special case behavior inside the compiler.

Right. The design could make a significant difference in terms of the long-term consequences. At this stage, the design details of the implementation are best discussed on the main thread, and may serve as fodder for thought, here.

And, there, lies the heart of the question. A generalized approach could be achieved, but not for some time. So, we consider the special case behavior version.

I'm not sure that that outcome is expressly part of the proposal. It is alluded to as a future direction, but even then, it is not clear whether the intent is a synthesized conformance implementation when the user declares conformance to the protocol (like the behavior available, today, for Equatable on structs and classes) or whether the protocol conformance would be automatic merely by virtue of the nature of a tuple type's elements.

Once we have the generalized ability to conform tuple types to protocols, why would we want to continue to have tuple types automatically conform to Equatable and the like? That behavior is fundamentally different from the behavior for similarly-situated structs and classes.

I'm trying to catch up. Creating edge-case semantics that can't later be revised to fit the generalized model for protocol conformance seems relevant to future language directions. Have we explored the consequences of permanently locking tuples into automatic conformance?

All tuples with equatable elements up to arity 6 already have an implementation of ==. That they would be Equatable when it’s possible is not in question: this was settled in SE-0015. The only question is whether it should be done now, or when conformance to arbitrary protocols can be expressed.
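For readers catching up, the status quo being referred to can be sketched as follows: concrete `==` overloads exist for tuples of Equatable elements (up to arity 6), but a tuple still cannot satisfy an `Equatable` generic constraint. The `allEqual` helper below is illustrative only.

```swift
// Works today: the standard library provides concrete `==` overloads
// for tuples of Equatable elements up to arity 6.
let pairsMatch = (1, "a") == (1, "a")
print(pairsMatch) // prints "true"

// A generic context constrained to Equatable, however, cannot accept
// a tuple, because tuples do not conform to the protocol:
func allEqual<T: Equatable>(_ values: [T]) -> Bool {
    guard let first = values.first else { return true }
    return values.allSatisfy { $0 == first }
}

print(allEqual([1, 1, 1])) // prints "true"
// allEqual([(1, "a"), (1, "a")]) // error: '(Int, String)' does not conform to 'Equatable'
```

The proposal under discussion would make the commented-out call compile; the question in this thread is what committing to that, as compiler magic, costs later.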


This is my personal preference.

This isn’t a question of preference. The core team is asking what, if any, negative consequences there would be of having to maintain compatibility with these special cases in any future, more generalized features. It is a deeply technical question.

So it does not allow me to express and share my personal preference on the forums?

The core team asked the community a question:

If we accept any of these proposals, we would be committing to preserving compatibility with the source-level behavior, and on ABI-stable platforms, any runtime behavior and ABI requirements this introduces, even when more general mechanisms get introduced in the future. It would be good to explore and enumerate what these long-term liabilities are so that we know what we would be committing to.

This thread was created for the purpose of focusing on this particular aspect; I hope we can stick to it.

Perhaps it should be in the compiler section then, rather than evolution? Evolution is about the language, not the tools or implementation.

I don't doubt any of the technical aspects of it. But allow me to correct you. If you say

The only question is whether it should be done now, or when conformance to arbitrary protocols can be expressed.

And then add

It is a deeply technical question.

Then I can only argue that my answer is as deeply technical as the mentioned question was.

That said, I prefer the 'safest' technical way, which would however imply that there won't be a near-term solution to the problem in question. I can't answer the 'why' part of that question, as I'm not a compiler developer and have no expertise in that area. Yet I do prefer the safer route for the language to evolve so there won't be room for any technical regrets afterwards.

Thank you for hearing me out. :slight_smile:


It is a specific question directed at certain proposals, so I don’t think it’s out of place where it is. But I guess it could be equally at home under the compiler section. In either case, I don’t think it much matters as long as we can focus on the question at hand.


As I tried to explain, the specific question cannot and should not be answered yet. The correct approach is to first develop principles of judgement, then apply them to the specific case. Nothing ruins a language more than ad hoc features or democratic decision making. Opinions are of no value until the structure of the problem is elaborated. Only when the alternative paths are known, with costs and benefits and timelines, does it make sense for people to put their personal weights on the options.

So: it is possible to handle real cartesian products with polymorphic recursion in the library. Assuming there are no labels, a second representation of a product is introduced. The primary representation is an array of types and requires iteration to analyse. If a secondary representation is introduced with a binary operator that combines a type with an existing tuple, polymorphic recursion can be used to access each component by pattern matching the head and tail of the tuple, doing whatever with the head, and then recursively analysing the tail, using the unit tuple to terminate the recursion.

I know exactly how this works because I have implemented it already. I don't currently know the details of the Swift compiler, but if it is reasonably well constructed, someone who did could implement this in about a week. It's not a big job. Once implemented, there is no need for any special casing in the compiler. Any protocol whose requirements amount to folding some function of each value of the tuple into a value of a fixed type can be handled. This covers equality and total ordering, among other things. I have checked, and at least get properties already do polymorphic recursion. Someone needs to check whether that extends to a fold.
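The head/tail encoding described above can be approximated in present-day Swift with conditional conformances. A minimal sketch, with the caveat that `Unit` and `Cons` are illustrative names, not an actual library, and that this models tuples rather than being the built-in tuple type:

```swift
// The "unit tuple" that terminates the recursion.
struct Unit: Equatable {}

// The binary combinator that prepends a head element onto an existing
// tail product, giving the recursive (secondary) representation.
struct Cons<Head, Tail> {
    let head: Head
    let tail: Tail
}

// Equality recurses structurally: compare heads, then recurse on the
// tails, bottoming out at Unit.
extension Cons: Equatable where Head: Equatable, Tail: Equatable {
    static func == (lhs: Cons, rhs: Cons) -> Bool {
        lhs.head == rhs.head && lhs.tail == rhs.tail
    }
}

// (1, "x") encoded as Cons(1, Cons("x", Unit()))
let a = Cons(head: 1, tail: Cons(head: "x", tail: Unit()))
let b = Cons(head: 1, tail: Cons(head: "x", tail: Unit()))
print(a == b) // prints "true"
```

The missing piece, of course, is the conversion between real tuples and this representation, which is exactly what would need compiler support.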

I do not know if you can fold hashes. Someone needs to do some research and prove that there is a formula which takes a sequence of random numbers and combines them to produce a random number. Perhaps the linear congruential method will work, i.e. ax + b mod p, where p is prime.
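For what it's worth, Swift's standard library already folds hashes for its own purposes: `Hasher` accumulates an arbitrary sequence of values via `combine` and produces a single value with `finalize`, so a tuple hash could in principle be computed the same way. A sketch (the `hashPair` helper is illustrative):

```swift
// Hasher folds each component into its internal state; finalize()
// produces the combined hash value. Note that Hasher is seeded
// per-process, so the resulting Int differs between runs.
func hashPair<A: Hashable, B: Hashable>(_ a: A, _ b: B) -> Int {
    var hasher = Hasher()
    hasher.combine(a)
    hasher.combine(b)
    return hasher.finalize()
}

// Equal inputs hash equally within a single process run:
let h1 = hashPair(1, "x")
let h2 = hashPair(1, "x")
print(h1 == h2) // prints "true"
```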

This development is easy enough that there is no need for a special case. It allows protocols to be applied to tuples by the end user, provided the protocol consists only of get properties. It may be possible to generalise this further.

So according to my principles, the best option is to implement this now, because it replaces a specific hack with a general facility, and is relatively easy to implement.

This is NOT the only generalisation. It is also possible to handle tuples with iteration directly. This doesn't require a recursive definition of a tuple. On the other hand it does require extra meta-programming facilities which I personally have been looking to find for quite some time, without success.

Apart from my own implementation, in C++ the polymorphic recursion can be done using variadic templates, because the head/tail form is precisely how that works. However extensive facilities are required to do it. For example, in C++ variadic templates work on lists (I think called parameter packs), so you have to be able to convert a tuple to a list anyhow.

There are ALSO objections to the idea that protocols apply to values of structural types. In Swift this also includes function closures. In Felix, any type can be used; however, it is not clear to me that it means anything. Typically, structural types should always be handled by parametric polymorphism. Type classes (protocols) are only needed for nominal typing, because nominal types cannot otherwise be decomposed.

In particular, consider that a tuple is a weak type with no particular meaning. Two different users may have different concepts of, say, ordering. Just consider a pair of doubles representing a complex number, which has no order.

This does not follow at all. Implementing a hack in a compiler is often HARDER than implementing something general. In fact, in my experience, it is not only almost always harder, it almost always breaks something as well. The special case has to "flow through" all phases of compilation, just as the general case does. Since the general case is more abstract, it is actually much easier to implement, because, by omitting details, the compiler writer is forced to ensure all the required information is available for processing at all times, and can more easily ensure exhaustiveness. On the other hand, if a special case, once implemented, is to be replaced by something more general, all the special casing code has to be removed. In addition, another special case may interfere with it.

Actually precisely this just happened to a friend of mine. Two extensions to GHC conflicted, and he was tasked with finding and fixing the bug. It took about a week which is amazing because at the time he didn't even know Haskell, let alone understand the compiler.

No doubt, at the end of the day, our individual, subjective evaluations of the tradeoffs are highly relevant to determining a path forward. At the moment, I hear @xwu focusing the conversation on identifying exactly what the tradeoffs are, at a technical level, so that we have some data to evaluate at a later stage.

Not to be off-putting, but let's home in on specifics, like how variadics might be implemented and designed in Swift, and how the special case behavior under consideration might constrain the decisions to be made when designing variadics. Even if you, like me, don't have all the compiler knowledge needed to delve into some of the more technical implementation details, I think we have much to contribute and learn here, in terms of reasoning it through at the Swift language level, and perhaps at a meta level somewhere between the language level and the C++ code in the compiler. At least, that is what I'm trying to do (we'll see if I manage it).


@xwu, you wrote the following:

Question—if this is accepted, and a resilient library makes use of magically generated tuple equality, will it be an ABI-breaking change in a future version of Swift to implement a custom conformance when it becomes possible to do so?

Have you any follow-up thoughts on what the answer might be?

Thank you for that reference. SE-0015 is a quick read; for the benefit of others, I'll summarize: the proposal was to implement == and != operators for tuples whose elements conform to Equatable. It did not directly state an intent that applicable tuples should automatically conform to Equatable, but merely that they should have the benefit of an == operator. Two different things, in my mind.

SE-0015 also spoke to the future, and did anticipate, "When Swift gains support for conditional conformation to protocols, and if Swift ever gains support for extending tuples, then the tuples up to the chosen arity should also be conditionally declared as conforming to Equatable and Comparable."

Did that statement in SE-0015 mean:

  1. the language automatically will declare and provide the conformance via the Standard Library, OR
  2. users will have the ability to declare the conformance, if they so choose?

Either way, I don't think SE-0015 can fairly be read as making that decision, especially given that SE-0015 was five years ago, and we probably remain years away from actually being in a position to implement the generalized concept of tuple-type conformance to protocols.

While rehashing this history is off topic, understanding the nature of the decisions we are making today definitely is on topic. By adopting the current proposal, we effectively would be making a decision to forever have special case behavior for tuple types whose elements happen to conform to Equatable, etc. If we instead wait until we have generalized tuple-type protocol conformance, we can avoid that special case behavior.

My example, above, was intended as a rudimentary starting point for exploring the tradeoffs impacted by the special case behavior. Certainly, we should look at the impact on variadics, and also should think about other possible impacts that have yet to be identified.

I asked about that in the pitch thread:
