I don't know about the actual use cases for
sealed, but I guess hiding a protocol might be sufficient in some situations.
In this context, the meaning of "hiding" would be using an internal protocol in a public method, with the effect that the module would expose overloads of that method for each type that conforms to the protocol (while keeping the connecting protocol secret).
This wouldn't need new keywords and, as far as I can see, has no impact on backwards compatibility.
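A sketch of the idea with hypothetical names. Note that today's compiler rejects a public signature that uses an internal protocol, so this is only what the arrangement would look like if that restriction were lifted:

```swift
// Hypothetical sketch only: Swift currently rejects this with
// "function cannot be declared public because its generic parameter
// uses an internal type". The idea is to permit it, keeping the
// connecting protocol invisible outside the module.
internal protocol Output {
    func render() -> String
}

public struct HTMLOutput: Output {
    func render() -> String { "<html></html>" }
}

public struct TextOutput: Output {
    func render() -> String { "plain text" }
}

// From outside the module this would effectively read as a pair of
// overloads, emit(_: HTMLOutput) and emit(_: TextOutput).
public func emit<T: Output>(_ value: T) -> String {
    value.render()
}
```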
Agree with this philosophy generally, but if this is to be the tentpole advantage of
sealed over the status quo then we are modifying our designs--of Swift itself no less!--around bad behavior.
This is not how I see it. The tentpole advantage is that it allows us to express our design in the language itself. Preventing conformances outside the module is a relatively common design constraint and it is unfortunate that it cannot be expressed in the language.
Expressing our designs clearly in the language is an important goal IMO. This is one of the advantages of Swift-style protocols over the duck-typed generics that some languages have. If we want libraries to be able to define their public contracts as clearly as possible in the language then we need to support features like this and not just fall back on RTFM.
This would not meet any of the use cases I have. The protocol and conformances to it must be visible to users of the library. They are just not allowed to add new conformances.
The syntactic requirements of a protocol have always been expressible in the language itself, but the semantic requirements are not and largely cannot be. It is essential to a proper conformance to "RTFM"--that is not a fallback--and I would say that anything that gives a user the impression that they don't need to "RTFM" to understand the contract is not only a non-goal but an anti-goal.
Another thing to consider though is what guarantee the "sealed" invariant gains you as a programmer developing your library. As proposed, "sealed" alone doesn't give the compiler enough information to statically verify that you've exhaustively switched over conforming types, so the only statically total way to interact with a protocol interface would still be through its requirements. Therefore, while you would be getting a guarantee that code outside your module doesn't add conformances, you'd still have to rely on dynamic assertions and/or auditing outside of what the compiler checks to ensure that you're exhaustively handling your own cases in switches or other non-protocol-requirement-dispatch mechanisms.

OTOH, if you factor your code such that you always dispatch your type-specific logic through the protocol's requirements, then the "no outside conformances" constraint may not really buy you anything—it may not always be practical for external types to conform, but if they manage to, there's no direct harm in their doing so.
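A minimal sketch of that limitation, using hypothetical types (imagine `Shape` were marked sealed):

```swift
// Even if Shape were sealed, the compiler could not prove this switch
// covers every conformance, so a trapping default is still required.
protocol Shape {}
struct Circle: Shape { var radius: Double }
struct Square: Shape { var side: Double }

func area(of shape: Shape) -> Double {
    switch shape {
    case let c as Circle: return Double.pi * c.radius * c.radius
    case let s as Square: return s.side * s.side
    default: fatalError("unhandled Shape conformance") // dynamic assertion, not compiler-checked
    }
}
```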
If part of the goal of this proposal is to allow exhaustive enumeration of conformances outside of protocol requirement dispatch, then an alternative design might need to be explored. However, having a requirement like "private and local classes are not allowed to conform if a protocol is sealed" would be really weird. In languages with "case classes" to represent closed sums as class hierarchies, for instance, all of the cases must be defined together with the parent class; maybe a sealed protocol could have a way to enumerate the conformances up front as part of the protocol definition?
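One way to picture that direction, in purely invented syntax (this is not valid Swift and not part of the proposal):

```swift
// Invented syntax: the sealed protocol enumerates its closed set of
// conformances up front, the way "case classes" group their cases,
// so a switch over conforming types could be checked for exhaustiveness.
sealed protocol Token {
    conforming Identifier, Keyword, Literal
    var text: String { get }
}
```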
There are a lot of semantics expressed in the type system and more semantics implied by the names we choose for our members. The fact that we can’t express all semantics directly in the language is not an excuse for supporting the ability to express the semantics that are reasonable to support in the language. I hope we can agree on this point and the debate is focused around what is “reasonable”.
IMO, there is very little burden on users who don’t use this feature, only benefits for users who need it, and it sounds like @Karl has an implementation. The staunch opposition that arose today is rather confusing, surprising and disappointing to me.
I think this would be acceptable. It’s in line with the spirit of most or all of the use cases I know of. One note: I think only the conformance declaration itself should be required to be stated with the protocol declaration, while members implementing requirements could still be declared elsewhere. The same mechanism could also be used for
Sure, there are semantics expressed in the type system: that's precisely why protocols aren't just bags of syntax. Conforming to
Error expresses semantics. And yes, names can imply semantics, and a name that implies unintended semantics would be rightfully judged a poor name. And yes, names are important. (But the compiler does not enforce the semantics of names. It's a pretty thin definition of "in the language" if the semantics implied in a method name is "in" but the semantics stated immediately above the declaration in a doc comment is "out.")
The opposition, I think, is borne out of the clarity that comes with sufficient discussion. We have learned that some of the proposed compiler benefits are not realizable. We are confronted with the challenge that there's an unclear division of labor between this feature and enums.
Circling back, the question is: what does supporting the expression of these specific semantics in this compiler-enforceable way, at the cost of a new addition to the language, gain users that isn't possible now? I would disagree that this is an end unto itself, and it's not clear to me anymore that there is much gained besides.
This, exactly this. We don't need language solutions to every problem where users (ab)use APIs by doing the wrong thing with them.
What if "users" is not just "other teams using some library I developed" (for which I'm of course not responsible), but "my coworkers (some of whom might be working on different subsystems) accidentally misusing a part of the code that I wrote as a separate module, and if they do it will affect me too"? I would say that, all else being equal, having the compiler able to verify properties about your program should be preferable to having to rely on human judgement.
All else is not equal, though; there is a cost for every feature added. This post applies equally well (if not more) to Swift:
The thread has moved on quite a bit, but it's worth noting that this is only true for protocol requirements without default implementations. It is not true for methods defined in a protocol extension (whether default or not). Protocol extensions provide a lot of power that enums never will. Even if N is relatively small the boilerplate for enums is significant compared to what is required for protocol extensions.
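For illustration, with hypothetical types: a single protocol extension covers every conformer, whereas an enum would need a switch over all cases in each derived operation.

```swift
protocol Measurable {
    var value: Double { get }
}

// One extension member serves every conforming type, present and
// future; the enum equivalent repeats a switch in each such method.
extension Measurable {
    var doubled: Double { value * 2 }
}

struct Distance: Measurable { var value: Double }
struct Duration: Measurable { var value: Double }
```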
Which is one of the reasons it would be very nice to be able to use them with sealed protocols!
Would any of those opposed to this proposal possibly warm up to it more if we explore the "case class"-inspired direction @Joe_Groff pitched upthread? That would make exhaustive switch a lot more tractable. This would bring a significant new capability and open up new design options.
You can use
default or wildcard patterns to cover enum cases in exactly the same way.
Not if you need data or primitive operations that are on the associated values (which can be made available using protocol requirements).
Those intermediate requirements then have to be described somewhere. With a switch you can do that with
case .a(let x, let y), .b(let y, _, let x), ...:.
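Spelled out with a hypothetical enum, that multi-pattern binding compiles because both patterns bind the same names with the same types:

```swift
enum Event {
    case a(Int, Int)
    case b(Int, String, Int)
}

// Both patterns bind x: Int and y: Int, so one body handles both cases.
func total(_ e: Event) -> Int {
    switch e {
    case .a(let x, let y), .b(let y, _, let x):
        return x + y
    }
}
```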
Good point, I stand corrected.
We have to draw the line somewhere. You are right that there are similar analogies in other parts of the language: For example, we require
open to distinguish between methods that can be overridden and those that can't, precisely to avoid certain kinds of accidental API lock-in due to "abuse". One could use my argument that such problems could be handled through comments, and many did during the discussion about
The difference in this case is one of scope and magnitude - there are just a lot more class methods in the world than there are intentionally "sealed" protocols.
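For reference, the existing precedent reads (hypothetical class names):

```swift
// `open` vs `public` already expresses "no subclassing outside the
// module" directly in the language rather than in documentation.
open class Base {}       // other modules may subclass this
public class Boundary {} // visible to other modules, but not subclassable there
```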
OTOH, there are tons of other kinds of abuse (e.g. pre and post conditions) that we have no way to model and express right now, and there is a very general class of problems that can occur from that. I would argue that pre/post conditions are worth solving, because a well done feature could be a great expansion of the expressive capabilities of the language and enable new classes of safer APIs entirely. Such a feature would bloat Swift, but would also be widely applicable to a large range of problems.
My issue with sealed isn't just that it is bloat. It is also that the problem it solves is very very narrow and doesn't seem to cause big enough problems in practice. This is just a cost/benefit tradeoff, and very much MHO.
Thanks for elaborating. If your perspective is not one of being opposed to
sealed protocols, but more one of not viewing them as a priority, I can understand that. I’m not sure whether you have something along the lines of refinement types or more along the lines of design by contract in mind in the prior paragraph, but either way I agree that those would deliver a lot more benefit to a lot more people than
sealed protocols (especially if they don’t deliver exhaustive switch).
Fair enough. Would you feel differently about the “case class” inspired design Joe mentioned above? That approach opens the feature up to being applicable in a much wider range of use cases because it would be useful for non-public protocols as well.
FWIW, I run into use cases that would best be solved by exactly this feature on a somewhat regular basis. That is why I have advocated strongly for it. There are times when both enums and protocols force a tradeoff that would evaporate if this tool was available. Even if it isn’t a priority right now, I do hope Swift will have something along these lines “in the fullness of time”.
Firstly, apologies for not being so involved with the discussion. I've been ill and haven't been able to concentrate enough to read and consider all of the points.
Secondly, I welcome the scrutiny. I believe it's important that we thoroughly discuss anything that gets added to the language, and consider as many alternatives as we can think of.
So, even though it is not directly a part of this proposal, the most important thing this will enable in the future is non-public protocol requirements. That is a significant feature which would solve a lot of real-world problems in an elegant and straightforward way. To do that, we at least need a way to communicate across modules that external conformances are not supported.
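A rough picture of that future direction, in invented syntax (not valid Swift today and not part of this proposal):

```swift
// Invented syntax: once external conformances are ruled out, a
// requirement could in principle be hidden from clients entirely.
public sealed protocol Request {
    func send()                  // ordinary public requirement
    internal func _encodeBody()  // hypothetical: visible only inside the module
}
```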
Whether or not we allow more fine-grained access control (e.g.
fileprivate) for protocol requirements is an open question which I don't want to get in to here. It deserves its own proposal and discussion.
I still believe that this proposal would enable optimisations. Perhaps not to the existential layout specifically, but there may be other operations where knowledge of what is (not) inside the box allows the compiler to omit handling certain kinds of structures which it knows it will never encounter. Some information is better than no information.
Yup, another hack. This is definitely an issue for those with advanced models involving value-types.
I'm not sure how anybody can look at this and be entirely satisfied that this is the ideal solution and we shouldn't even try to make it better/more straightforward.
We (the programmers) know that everything which conforms to
Syntax should have the things inside
_SyntaxBase, but the compiler doesn't know that, and needs to account for the possibility that somebody ignored the documentation. Would we not be able to generate better code by eliminating the dynamic downcasting and the possibility of trapping on every access?
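A sketch of the pattern being described, loosely modeled on the SwiftSyntax arrangement (names simplified, requirement invented for illustration):

```swift
// The public protocol is deliberately empty; the real requirements
// live on an internal protocol that all in-module conformers adopt.
public protocol Syntax {}

internal protocol _SyntaxBase: Syntax {
    var textLength: Int { get }
}

extension Syntax {
    public var textLength: Int {
        // Every access pays for a dynamic downcast and a possible trap,
        // because the compiler cannot rule out outside conformances.
        guard let base = self as? _SyntaxBase else {
            fatalError("Syntax must not be conformed to outside this module")
        }
        return base.textLength
    }
}
```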
Personally, I don't consider it to be a goal of this proposal. As I said before, if we allow exhaustive downcasting for protocol existentials, there is no reason why it shouldn't be extended to non-publicly-subclassable base classes (or existentials involving them). So then we'd need to redesign
public/open. I don't think it's worth it at all.
But it's still a good idea to discuss what the goals of this proposal are/are not.
That depends on your design. Many developers are more comfortable with class hierarchies because they learned Object-oriented programming. If your design makes heavy use of value-types or associated types, you might be using protocol-oriented programming.
Unfortunately this can become impractical because the language cannot express protocols like
Syntax which are only used for type erasure. We need to pretend at the language-level that external conformances are allowed, even when they are clearly not supported.
Better pre/post-conditions would be great, but they don't solve the same problems as sealed protocols. It is absolutely not just a bloat feature; please try to keep an open mind and not be overly dismissive. Protocol-oriented design is made significantly weaker than object-oriented design in practice, because of this need to superficially pretend that any public protocol can be conformed to.