Final by default for classes and methods

+0. This seems reasonable, and a lot of the arguments are compelling. The argument put forth about library design especially so. But coming from C++, where I have to prefix nearly every method in my classes with virtual, I'm worried that we could end up with the same problem in Swift.

We don't know what the dominant paradigm in Swift will be ten years from now. Inheritance has a raft of problems, but there's no guarantee that the alternatives will be better in the long run. I suspect they will be, but I also suspect we will find new and exciting problems in large codebases using more functional patterns.

There's a lot of excitement in the Swift community right now about final, value types, and other language features, but I fear that when the rest of the world jumps on the Swift bandwagon, most are just going to use classes exclusively over structs and continue their OOP practices, simply because it's what they're used to.

Making final the default may be a great way to discourage them. But it may also get us right back to where we are in C++ today, where programmers want virtual functions 99% of the time, but have to specify each function as virtual.

In my considerable experience with C++, that is not at all where we are today. Increasingly, C++ is seen as a language for high-performance computing, and people working in that area learn that they don't want to pay for virtual dispatch when they don't have to. It is true that for some of them, reflexive use of OOP is hard to shake, but they do learn eventually. Note also that Swift is really the second major language to take value semantics seriously. The first was C++.

I agree with this. -1 to the proposal.

Charles

To play devil’s advocate, take for example UINavigationController in UIKit on iOS.

I’ve seen multiple times in multiple projects legitimate reasons for subclassing it, despite the fact that UIKit documentation says we “should not need to subclass it”. So if we relied on Apple to “declare”, they most probably wouldn’t, and these use cases (and some really impressive apps) would become impossible.

While I agree with all points made about “If it’s not declared subclassable, they didn’t design it that way”, I think that ties everyone’s hands too much. There is a balance between safety and functionality that must be worked out. I think this errs way too far on the side of safety.

Rod

What if one framework provider thinks “you won’t need to subclass this ever”

If the framework author didn't design and implement that class with subclassing in mind, chances are it's not safe to do so, or at least not without knowledge of the implementation. That's why I think allowing a class to be subclassed should be a conscious decision, not just "I forgot to make it final".
My opinion is -1 on this proposal. Classes seem by design to intrinsically support subclassing.

What if one framework provider thinks “you won’t need to subclass this ever” but didn’t realise your use case for doing so, and didn’t add the keyword? When multiple developers come at things from different angles, the situation invariably ends with use cases neither realised. Allowing subclassing by default seems to mitigate this risk, at least for the most part.

I think this definitely comes under the banner of “this would be nice” without realising that you’d be shooting yourself in the foot when someone doesn’t add the keyword in another framework and you’re annoyed you can’t subclass it.

Does it seem like there's enough interest in this proposal? If so, what would be the next steps? Should I go ahead and create a PR on the evolution repo, describing the proposal version that Joe suggested, with classes closed for inheritance by default outside of a module?

Thanks!

I understand the rationale, I just disagree with it.

IMO adding a keyword to state your intention for inheritance is not a significant obstacle to prototyping and is not artificial bookkeeping. I really don't understand how this would conflict with "consequence-free" rapid development. It is a good thing to require people to stop and think before using inheritance. Often there is a more appropriate alternative.

The assumption that it is straightforward to fix problems within a module if you later decide you made a mistake is true in some respects but not in others. It is not uncommon for apps to be monolithic rather than well factored into separate modules, with many developers contributing and the team changing over time. While this is not ideal, it is reality.

When you have the full source it is certainly *possible* to solve any problem but it is often not straightforward at all. Here is an example of a real-world scenario app developers might walk into:

1) A class is developed without subclassing in mind by one developer.
2) After the original developer is gone another developer adds some subclasses without stopping to think about whether the original developer designed for subclassing, thereby introducing subtle bugs into the app.
3) After the second developer is gone the bugs are discovered, but by this time there are nontrivial dependencies on the subclasses.
4) A third developer who probably has little or no context for the decisions made by previous developers is tasked with fixing the bugs.

This can be quite a knot to untangle, especially if there are problems modifying the superclass to properly support the subclasses (maybe this breaks the contract the superclass has with its original clients).

It may have been possible to avoid the whole mess if the second developer had been required to add 'inheritable' and 'overrideable' keywords or similar. Adding the keywords would already require them to revisit the source, which might lead them to consider whether the implementation is sufficient to support inheritance in the manner they intend.

Implementation inheritance is a blunt tool that often leads to unanticipated problems. IMO a modern language should steer developers away from it and strive to reduce the cases where it is necessary or more convenient. Making final the default would help to do this.

Supporting sealed classes and methods that can only be subclassed or overridden within the same module is not in conflict with final by default. Both are good ideas IMO and I would like to see both in Swift.

I hope the core team is willing to revisit this decision with community input. If not I will let it go, although I doubt I will ever agree with the current decision.

Matthew

Sent from my iPad

>>> Defaults of public sealed/final classes and final methods on a class by default are a tougher call. Either way you may have design issues go unnoticed until someone needs to subclass to get the behavior they want. So when you reach that point, should the system err on the side of rigid safety or dangerous flexibility?
>>
>> This is a nice summary of the tradeoff. I strongly prefer safety myself and I believe the preference for safety fits well with the overall direction of Swift. If a library author discovers a design oversight and later decides they should have allowed for additional flexibility it is straightforward to allow for this without breaking existing client code.
>>
>> Many of the examples cited in argument against final by default have to do with working around library or framework bugs. I understand the motivation to preserve this flexibility but don't believe bug workarounds are a good way to make language design decisions. I also believe use of subclasses and overrides in ways the library author may not have intended is a fragile technique that is likely to eventually cause as many problems as it solves. I have been programming a long time and have never run into a case where this technique was the only way or even the best way to accomplish the task at hand.
>>
>> One additional motivation for making final the default that has not been discussed yet is the drive towards making Swift a protocol oriented language. IMO protocols should be the first tool considered when dynamic polymorphism is necessary. Inheritance should be reserved for cases where other approaches won't work (and we should seek to reduce the number of problems where that is the case). Making final the default for classes and methods would provide a subtle (or maybe not so subtle) hint in this direction.
>>
>> I know the Swift team at Apple put a lot of thought into the defaults in Swift. I agree with most of them. Enabling subclassing and overriding by default is the one case where I think a significant mistake was made.
>
> Our current intent is that public subclassing and overriding will be locked down by default, but internal subclassing and overriding will not be. I believe that this strikes the right balance, and moreover that it is consistent with the general language approach to code evolution, which is to promote “consequence-free” rapid development by:
>
> (1) avoiding artificial bookkeeping obstacles while you’re hacking up the initial implementation of a module, but
>
> (2) not letting that initial implementation make implicit source and binary compatibility promises to code outside of the module and
>
> (3) providing good language tools for incrementally building those initial prototype interfaces into stronger internal abstractions.
>
> All the hard limitations in the defaults are tied to the module boundary because we assume that it’s straightforward to fix any problems within the module if/when you decide you made a mistake earlier.
>
> So, okay, a class is subclassable by default, and it wasn’t really designed for that, and now there are subclasses in the module which are causing problems. As long as nobody's changed the default (which they could have done carelessly in either case, but are much less likely to do if it’s only necessary to make an external subclass), all of those subclasses will still be within the module, and you still have free rein to correct that initial design mistake.
>
> John.
_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org <mailto:swift-evolution@swift.org>
https://lists.swift.org/mailman/listinfo/swift-evolution
--

Javier Soto


-Dave

···

On Dec 20, 2015, at 3:51 PM, Michael Buckley via swift-evolution <swift-evolution@swift.org> wrote:
On Sun, Dec 20, 2015 at 2:53 PM, Charles Srstka via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Dec 17, 2015, at 8:00 PM, Rod Brown via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On 18 Dec 2015, at 12:51 PM, Javier Soto <javier.api@gmail.com <mailto:javier.api@gmail.com>> wrote:
On Thu, Dec 17, 2015 at 5:41 PM Rod Brown <rodney.brown6@icloud.com <mailto:rodney.brown6@icloud.com>> wrote:

On 18 Dec 2015, at 10:46 AM, Javier Soto via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:
On Tue, Dec 8, 2015 at 7:40 AM Matthew Johnson via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:
On Dec 7, 2015, at 10:30 PM, John McCall <rjmccall@apple.com <mailto:rjmccall@apple.com>> wrote:
>>> On Dec 7, 2015, at 7:18 PM, Matthew Johnson via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

I personally don't like this but I can't put my finger on why. Obviously there are some things where you really don't need to call super (mostly abstract methods), but you just said "default", which implies that we could have an opt-out attribute.

I will say, however, that making NS_REQUIRES_SUPER the default for overridable methods is separable from deciding which methods are overridable by default. Making sure the base method is called isn't really the same as knowing the base method is all that's called.

Jordan

···

On Dec 20, 2015, at 3:40 , Tino Heth via swift-evolution <swift-evolution@swift.org> wrote:

Frankly, I think having `final` in the language at all is a mistake. While I agree that we should prefer composition to inheritance*, declaring things final is hubris. The only reasonable use case I've seen is for optimization, but that smacks of developers serving the compiler rather than the converse. Bringing an analog of NS_REQUIRES_SUPER to Swift would be most welcome; that's as far as I'd go down the path of dictating framework usage.

I really like the direction this discussion has taken ;-):
Is there any counter-argument besides performance (which imho should always be weighed against the risk of premature optimization) that speaks against making NS_REQUIRES_SUPER the default behavior?

Okay, so I probably shouldn't be putting this so bluntly, but the ship has already sailed on this. Supporting arbitrary code injection into someone else's framework is a non-goal for Swift, perhaps even an anti-goal.

- 'private' and 'internal' methods are not exposed outside of a library, so you can't call them, much less override them. Similar for 'private' and 'internal' classes: you cannot subclass them.

- Structs, enums, protocol extensions, and free functions are all not overrideable. (Similarly, neither are C functions or pretty much anything in C++.)

- There's a difference between "we're not going to optimize" and "we're not going to optimize now". Objective-C's "everything uses objc_msgSend" model is essentially unoptimizable. It's not that the developer can't work around that when performance is necessary; it's that the resulting code doesn't feel like Objective-C. Swift can do better, and even with its current semantics it does do better, for free. (And optimizations in frameworks are incredibly important. Where do you think your app spends most of its CPU time? I would guess for many many non-game apps, it's in framework code.)

- A major goal of Swift is safety. If you are writing a safe type built on unsafe constructs (like, say, Array), it is imperative that you have some control over your class invariants to guarantee safety. At the same time, your clients shouldn't have to know that you're built on unsafe constructs.

That last one is really the most important one. If you replace a method on someone else's class, you don't actually know what semantics they're relying on. Of course Apple code will have bugs in it. Trying to patch over these bugs in your own code is (1) obviously not an answer Apple would support, but also (2) fraught with peril, and (3) likely to break in the next OS release.

TLDR: It's already unsafe to do this with the existing set of Swift features. Yes, this makes things "worse", but it's not something we're interested in supporting anyway.
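The first two points can be sketched concretely. This is only an illustration — all type and function names here are invented, not from any real framework:

```swift
// In a library, this class would be marked `internal`: invisible to clients,
// so it can be neither called from outside the module nor subclassed there.
class Helper {
    func work() -> Int { 42 }
}

// Structs are never subclassable, no matter what the library author does.
struct Point {
    var x = 0.0, y = 0.0
}

// Free functions are not overridable either.
func distance(_ a: Point, _ b: Point) -> Double {
    let dx = a.x - b.x, dy = a.y - b.y
    return (dx * dx + dy * dy).squareRoot()
}

// What a client module could NOT write:
// class MyHelper: Helper {}   // error: 'Helper' is internal to another module
// struct Point3D: Point {}    // error: inheritance from non-protocol type 'Point'

print(distance(Point(), Point(x: 3.0, y: 4.0)))  // prints 5.0
```

So a large fraction of Swift's surface area is already closed to overriding; the debate is only about where classes should sit on that spectrum.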

Jordan

···

On Dec 19, 2015, at 20:02 , Rod Brown <rodney.brown6@icloud.com> wrote:

Yeah, this really is a difficult one.

Adding final seems to be a risky thing from a library or framework standpoint. As you say, there is hubris there in the suggestion “assume we’re right, and you can’t work around us.” I can see developer relations being inundated and unable to provide effective workarounds when the assumption is closed frameworks and final classes.

If they allow final in public frameworks, the default of “sealed” as mentioned by Jordan Rose makes sense. Closing an API further at a later date creates hell for those who subclass classes while “hoping” they don’t become final. How do you handle such cases? Still, sealed is better than the alternatives.

But that suggests that indeed there is a greater problem here. Final in end products makes sense. It provides clarity, and allows optimisations. But in frameworks? For those who rely on the frameworks, the ability to subclass to avoid a bug, or to add functionality, while unintended and potentially unsafe, is something we use to develop apps, and how we inspire development of the framework. I think this risks stifling creativity and blocking effective workarounds to bugs.

If we are to add finalisation to API for frameworks, it makes sense to do “Sealed” by default as discussed. But perhaps it needs to be examined if we really want this aggressive optimisation and restriction on frameworks at all.

On 20 Dec 2015, at 2:09 PM, Curt Clifton via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

I'm not sure how many minuses I have to give, but I'd give them all to this proposal.

Anyone who tries to ship products on release day of Apple's operating system updates spends most of August and September writing horrible hacks so that their users are insulated from OS bugs as much as possible. All software has bugs, frameworks included. Please don't take away our tools for working around those bugs. Making classes final by default assumes a level of perfection on the part of framework developers that is not achievable.

Yes, subclassing a class that wasn't designed to be subclassed has serious risks. Thoughtful developers sometimes take on those risks in order to serve their customers.

Frankly, I think having `final` in the language at all is a mistake. While I agree that we should prefer composition to inheritance*, declaring things final is hubris. The only reasonable use case I've seen is for optimization, but that smacks of developers serving the compiler rather than the converse. Bringing an analog of NS_REQUIRES_SUPER to Swift would be most welcome; that's as far as I'd go down the path of dictating framework usage.

Cheers,

Curt

*- and am thrilled with the property behaviors proposal for this use case

On Dec 17, 2015, at 5:55 PM, Joe Groff via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Dec 17, 2015, at 5:41 PM, Rod Brown via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

My opinion is -1 on this proposal. Classes seem by design to intrinsically support subclassing.

What if one framework provider thinks “you won’t need to subclass this ever” but didn’t realise your use case for doing so, and didn’t add the keyword? When multiple developers come at things from different angles, the situation invariably ends with use cases neither realised. Allowing subclassing by default seems to mitigate this risk, at least for the most part.

Frameworks change, and if the framework author didn't anticipate your use case for subclassing, they almost certainly aren't going to anticipate it while evolving their implementation and will likely break your code. Robust subclassability requires conscious design just like all other aspects of API design.

-Joe


I can't speak for the others but I joined the list as a skeptic of this proposal and in one day I've embraced it. The benefits of it far outweigh the fears of having it.

I'd love to read it from the beginning but it's kind of impossible.

Felipe Cypriano

···

On Dec 23, 2015, at 02:51, Tino Heth <2th@gmx.de> wrote:

Isn't this proposal solving a problem that in practice doesn't exist or isn't common enough to be worth a language level fix? I'm trying to find an example of a common problem - in any language - that would benefit by having final/sealed by default.

I guess you share that attitude with most developers who have not read the full thread and never will… and that is not meant as a suggestion to review all those posts ;-). I don't think there is anything that will make someone change his opinion on this topic.

I can't speak for the others but I joined the list as a skeptic of this proposal and in one day I've embraced it. The benefits of it far outweigh the fears of having it.

That’s great! Thank you so much for keeping an open mind and considering the arguments on their merit rather than having an emotional bias, Felipe.


I thought sealed and final were effectively the same thing for production code, which is why it confuses me when you say final is right and anything less, including sealed, is not.

In Scala, at least, sealed is final except that subclasses within the same source file are allowed. Once it is compiled and shipped, you can no longer modify that source file.

They are not at all the same. The difference is that with sealed you cannot inherit from classes in other modules which are not explicitly marked `inheritable`, but you can inherit from classes in your own module that are not explicitly marked `inheritable`. That is a big difference.
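A sketch of that difference, using the hypothetical spellings from this thread — neither `sealed` nor `inheritable` is an actual Swift keyword, and the module and class names are invented:

```swift
// Module "Shapes" — hypothetical syntax, for illustration only:

public sealed class Shape {}         // subclassable only inside "Shapes"
public inheritable class View {}     // subclassable from anywhere
public final class Renderer {}       // subclassable nowhere, not even here

// Elsewhere inside "Shapes":
class Circle: Shape {}               // OK: sealed allows same-module subclasses
class ScrollView: View {}            // OK

// In a client module that imports "Shapes":
class CustomView: View {}            // OK: explicitly marked inheritable
// class Square: Shape {}            // error: sealed class, outside its module
// class TiledRenderer: Renderer {}  // error: cannot inherit from final class
```

Under final-by-default, `Circle` would also require an explicit keyword on `Shape`; under sealed-by-default it compiles as written.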

···

On Dec 23, 2015, at 12:43 PM, Craig Cruden <ccruden@novafore.com> wrote:

On 2015-12-24, at 1:36:01, Matthew Johnson via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

I strongly feel that I shouldn’t pay a price in production code in order to better support those use cases. IMO ‘final’ is the right default for production code and we pay a price if the default is anything less, including ‘sealed’.

No, I don’t mean performance. I mean that the code is significantly less clear when final is not the default. It isn’t clear at all whether the author intended to allow subclasses or not when the default allows inheritance. The value in making this clear is significant, especially if you are a new developer walking into a large application.

I don’t want to rehash the entire case here. It has been discussed many times already on this list, most recently in the summary I posted this morning, as well as Kevin’s post from yesterday that I was replying to.

Matthew

···

On Dec 23, 2015, at 12:52 PM, Michel Fortin <michel.fortin@michelf.ca> wrote:

On Dec 23, 2015, at 1:36 PM, Matthew Johnson <matthew@anandabits.com <mailto:matthew@anandabits.com>> wrote:

I'm not sure why you say the last two should be addressed separately from the "production" language. Are you proposing Swift should come in multiple language variants?

Not exactly.

My point is that it is best to design the language to address production concerns first. If the defaults in that design cause problems for prototyping and / or education it is possible to offer an environment with a modified default. I don’t know if that is a good idea or not. I am not a target user for either of those use cases (personally, I would rather prototype with the production language). Either way, it is possible, just as it is possible to have @testable to facilitate unit testing.

I strongly feel that I shouldn’t pay a price in production code in order to better support those use cases. IMO ‘final’ is the right default for production code and we pay a price if the default is anything less, including ‘sealed’.

By "pay a price" you mean diminished performance, right? That would depend on the ABI (which hasn't been discussed much yet, is there some preliminary docs about it?).

I don't think there is a price in performance to pay for sealed. You simply call a static function in the library, and that static function does the dynamic dispatch only if the library contains some overrides for that function. If there's no override it's simply a static call.
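The dispatch scheme described above can be sketched in Swift. This is purely illustrative (the compiler would generate this, not the library author), and every name in it is hypothetical:

```swift
class Widget {
    func render() -> String { "base" }
}

// With `sealed`, every subclass is known when the module is compiled.
final class FancyWidget: Widget {
    override func render() -> String { "fancy" }
}

// The library-side entry point a client would call. From the client's
// perspective this is a plain static call; dynamic dispatch happens
// inside only because the module contains an override. With no overrides
// in the module, this collapses to a direct call to Widget.render.
func widget_render(_ w: Widget) -> String {
    w.render()
}

print(widget_render(FancyWidget()))
```

The key point is that the decision between static and dynamic dispatch is made once, at module-compile time, rather than at every call site in client code.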

Responses inline.

If the frameworks are written in Swift, nothing is stopping Apple from slapping 'final' on all the classes they don't want developers to inherit from, even if the proposal is rejected; neither is there anything stopping them from making all their classes inheritable (if that happened to be what they wanted, which it isn't).

You cannot look at a language simply as a technical artifact. A language creates culture and shapes the systems written in it. It defines not only what is possible and what is not, but also what is preferred and what is discouraged. Defaults matter. If they didn't, people wouldn't be trying to change the default.

You're right that defaults matter, but that's not the point. The argument is that this proposal is bad because it will make future Apple frameworks less flexible than if nothing were done. I'm saying that's not the case; if Apple wants to write frameworks in Swift then they will most likely annotate their APIs however they want, whether with 'final' or 'inheritable' or whatever their internal policy on this matter is, regardless of whether this proposal is adopted. They already make intended usage clear in the documentation.

Making `final` the default, or `sealed` the default, encourages the use of closed class hierarchies. It attempts to make inflexibility the preferred form of shared Swift code. I'm not sure that's the right thing to do.

Protocols are a more general and arguably superior way of making shared code flexible. I understand and sympathize with the flexibility argument but I think pushing for better abstractions is worth it (and I understand why others disagree).
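The protocol-and-composition style described above can be sketched like this; all type names here are invented for illustration:

```swift
protocol Renderer {
    func render() -> String
}

struct PlainRenderer: Renderer {
    func render() -> String { "plain" }
}

// Composition: decorate any Renderer by wrapping it, instead of
// overriding a method in a superclass.
struct BorderedRenderer: Renderer {
    let inner: Renderer
    func render() -> String { "[\(inner.render())]" }
}

print(BorderedRenderer(inner: PlainRenderer()).render())
```

The wrapper works with any conforming type, including value types, which is the flexibility that a closed class hierarchy would otherwise seem to take away.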

···

On Dec 23, 2015, at 12:35 PM, Brent Royal-Gordon <brent@architechies.com> wrote:

--
Brent Royal-Gordon
Architechies

While I agree with you, the same argument can be made for modules where `internal` code isn't marked `private`. Existing access control makes a case for `sealed` being the default, though I think class subclassing happens less frequently, and thus could be made `final` by default and utilize fix-its to make marking things inheritable simple enough.

Stephen

···

On Dec 23, 2015, at 1:55 PM, Matthew Johnson via swift-evolution <swift-evolution@swift.org> wrote:

By "pay a price" you mean diminished performance, right? That would depend on the ABI (which hasn't been discussed much yet, is there some preliminary docs about it?).

I don't think there is a price in performance to pay for sealed. You simply call a static function in the library, and that static function does the dynamic dispatch only if the library contains some overrides for that function. If there's no override it's simply a static call.

No, I don’t mean performance. I mean that the code is significantly less clear when final is not the default. It isn’t clear at all whether the author intended to allow subclasses or not when the default allows inheritance. The value in making this clear is significant, especially if you are a new developer walking into a large application.

If the frameworks are written in Swift, nothing is stopping Apple from slapping 'final' on all the classes they don't want developers to inherit from, even if the proposal is rejected; neither is there anything stopping them from making all their classes inheritable (if that happened to be what they wanted, which it isn't).

You cannot look at a language simply as a technical artifact. A language creates culture and shapes the systems written in it. It defines not only what is possible and what is not, but also what is preferred and what is discouraged. Defaults matter. If they didn't, people wouldn't be trying to change the default.

Agreed.

Making `final` the default, or `sealed` the default, encourages the use of closed class hierarchies. It attempts to make inflexibility the preferred form of shared Swift code. I'm not sure that's the right thing to do.

I don't agree with this framing. IMO it encourages alternative designs emphasizing protocols and composition. This is a very good thing IMHO. I like to think of inheritance as a tool of last resort.

BTW, I am planning a future proposal regarding automatic forwarding which if accepted would make the use of protocols and composition more convenient.

···

Sent from my iPhone
On Dec 23, 2015, at 2:35 PM, Brent Royal-Gordon <brent@architechies.com> wrote:

--
Brent Royal-Gordon
Architechies

The benefits of it far outweigh the fears of having it.

so what is the practical problem that's solved by final that convinced you?

I also would change the list of downsides and put "annoyance" on top — especially for those who don't care for theoretical improvement when they have to pay the price in the form of more effort:
"If I don't want to subclass something, I just don't do it — why do I have to change the properties of the superclass?"

Also:
- structs are always final; inheritance is one of the major aspects of classes. In many cases, the decision to use a class is made because of the ability to subclass.
- you can't subclass across module borders with the default visibility
In summary, final already is very common, and I don't see the need to push this further just because "inheritance" has become old-fashioned lately.

Best regards,
Tino

I suspect these two statements are true in most shops these days, but there
are still plenty of companies out there, typically older companies with
many superfluous layers of middle management, where buying closed-source
binary blobs is the rule (and in some, open-source is verboten). This
hasn't been as much of a thing with Objective-C, but it's not unheard of,
and I've run into them more often than I'd like. There are a lot of
companies out there that make a killing in the Java and .NET worlds selling
these libraries. If Swift makes significant gains into the enterprise
space, which seems likely, given IBM's backing, you can expect a lot of
these companies to start vending Swift frameworks.

As much as you and I might wish that, in 2015, third party libraries ==
open source, a lot of business gets done the crappy way.

···

On Wed, Dec 23, 2015 at 6:43 PM, Matthew Johnson via swift-evolution <swift-evolution@swift.org> wrote:

Paul points out that many people have similar concerns about how the
default might impact 3rd party frameworks and libraries. In 2015 this
*usually *means open source libraries.

...

In reality closed source, binary only libraries from 3rd parties are the
exception, not the norm. Bad experiences with closed source, binary
libraries (some have mentioned C++) in the past aren’t really applicable to
future experiences with *open source *libraries.

Those are good points and things I came to realize after I asked that. I
don't have a TextExpander snippet but it is common in the code I work on
to have some kind of hack to indicate what is intended to be overridden.

Thanks for sharing, this is exactly the kind of answer (not because it
is in favor of final) I was hoping to get when I asked.

···

On Wed, Dec 23, 2015, at 20:09, Andrey Tarantsov wrote:

Isn't this proposal solving a problem that in practice doesn't exist
or isn't common enough to be worth a language level fix?

Well I have a TextExpander macro that inserts "// override point" when
I type ;overp. Been marking all overridable methods this way for
years. I think it's an indication that the problem is worth solving.

I'm trying to find an example of a common problem - in any language -
that would benefit by having final/sealed by default.

Understanding the code and reasoning about the class is easier when
you know the exact customization points.

I do agree that this won't really prevent many actual bugs.

A.

The “module” in this case being the same source file.

i.e. Family.scala

contains a sealed class called “Parent”,

you could have another class in there called “Child” which inherits from the parent.

but you cannot write another class and inherit from it in Sibling.scala.

Since only the library writer has access to Family.scala and the classes are sealed by default, it is effectively the same as final by default, except that it does not prevent the library writer from inheriting from their own classes.

···

On 2015-12-24, at 1:45:45, Matthew Johnson <matthew@anandabits.com> wrote:

On Dec 23, 2015, at 12:43 PM, Craig Cruden <ccruden@novafore.com> wrote:

I thought sealed and final were effectively the same thing for production code, which is why it confuses me when you say final is right and anything less, including sealed, is not.

In Scala at least sealed is final with the exception that subclasses within the same source file are allowed. When it is compiled and shipped - you can no longer modify that source file…..

They are not at all the same. The difference is that with sealed you cannot inherit from classes in other modules which are not explicitly marked `inheritable`, but you can inherit from classes in your own module that are not explicitly marked `inheritable`. That is a big difference.

On 2015-12-24, at 1:36:01, Matthew Johnson via swift-evolution <swift-evolution@swift.org> wrote:

I strongly feel that I shouldn’t pay a price in production code in order to better support those use cases. IMO ‘final’ is the right default for production code and we pay a price if the default is anything less, including ‘sealed’.

You're probably right. It's very likely that you have worked on more C++
codebases than I have, and I haven't been working on code for
high-performance computing, so it's possible that I'm suffering from a
small sample size. But if you're working on a consumer app, I do think
it's logical that vtable dispatch is what you want most of the time. So in
my experience, functions need to be virtual more often than not, and the
C++ code I've seen would be shorter if you had to explicitly mark methods
as nonvirtual rather than virtual.

When I said, "programmers want virtual functions 99% of the time," I was
mostly thinking of the legion of programmers who grew up learning languages
where virtual methods are the only kinds of methods. Objective-C,
JavaScript, Ruby, Python, Java, etc. I've worked with a few younger
programmers who are thrown to the C++ sharks by management, and once they
learn the difference between virtual and non-virtual methods, they tend to
mark all their methods virtual as a defensive measure.

You make a very good point about Swift being the second major language to
take value semantics seriously. My original point, though, was that once most iOS
developers move to Swift, I think it's possible that they'll just stick to
what they're comfortable with, using classes exclusively and writing
Massive View Controllers, because it's what they know, it's easy to do, and
it doesn't require learning sometimes conceptually difficult new concepts.
So my question is whether we want to make that more difficult for them. It
seems like there are benefits and disadvantages to both. I'm just trying to
raise the possibility that this may be the dominant programming paradigm in
Swift for some time, as unfortunate as that may be.

···

On Mon, Dec 21, 2015 at 10:04 AM, Dave Abrahams <dabrahams@apple.com> wrote:

On Dec 20, 2015, at 3:51 PM, Michael Buckley via swift-evolution <swift-evolution@swift.org> wrote:

+0. This seems reasonable, and a lot of the arguments are compelling. The
argument put forth about library design especially so. But coming from C++,
where I have to prefix nearly every method in my classes with virtual, I'm
worried that we could end up with the same problem in Swift.

We don't know what the dominant paradigm in Swift will be ten years from
now. Inheritance has a raft of problems, but there's no guarantee that the
alternatives will be better in the long run. I suspect they will be, but I
also suspect we will find new and exciting problems in large codebases
using more functional patterns.

There's a lot of excitement in the Swift community right now about
final, value types, and other language features, but I fear that when the
rest of the world jumps on the Swift bandwagon, most are just going to use
classes exclusively over structs and continue their OOP practices, simply
because it's what they're used to.

Making final the default may be a great way to discourage them. But it may
also get us right back to where we are in C++ today, where programmers want
virtual functions 99% of the time, but have to specify each function as
virtual.

In my considerable experience with C++, that is not at all where we are
today. Increasingly, C++ is becoming seen as a language for
high-performance computing, and people working in that area learn that they
don't want to pay for virtual dispatch when they don't have to. It is true
that for some of them, reflexive use of OOP is hard to shake, but they do
learn eventually. Note also that Swift is really the second major language
to take value semantics seriously. The first was C++.

On Sun, Dec 20, 2015 at 2:53 PM, Charles Srstka via swift-evolution <swift-evolution@swift.org> wrote:

I agree with this. -1 to the proposal.

Charles

On Dec 17, 2015, at 8:00 PM, Rod Brown via swift-evolution <swift-evolution@swift.org> wrote:

To play devil's advocate, take for example UINavigationController in UIKit
on iOS.

I’ve seen multiple times in multiple projects legitimate reasons for
subclassing it, despite the fact that UIKit documentation says we “should
not need to subclass it”. So if we relied on Apple to “declare”, they most
probably wouldn’t, and these use cases (and some really impressive apps)
would become impossible.

While I agree with all points made about “If it’s not declared
subclassable, they didn’t design it that way”, I think that ties everyone’s
hands too much. There is a balance between safety and functionality that
must be worked out. I think this errs way too far on the side of safety.

Rod

On 18 Dec 2015, at 12:51 PM, Javier Soto <javier.api@gmail.com> wrote:

What if one framework provider thinks “you won’t need to subclass this
ever”

If the framework author didn't design and implement that class with
subclassing in mind, chances are it's not necessarily safe to do so, or at
least not without knowledge of the implementation. That's why I think
deciding that a class can be subclassed is a decision that should be made
consciously, and not just "I forgot to make it final"
On Thu, Dec 17, 2015 at 5:41 PM Rod Brown <rodney.brown6@icloud.com> wrote:

My opinion is -1 on this proposal. Classes seem by design to
intrinsically support subclassing.

What if one framework provider thinks “you won’t need to subclass this
ever” but didn’t realise your use case for doing so, and didn’t add the
keyword? When multiple developers come at things from different angles, the
invariable situation ends with use cases each didn’t realise. Allowing
subclassing by default seems to mitigate this risk at least for the most
part.

I think this definitely comes under the banner of “this would be nice”
without realising the fact you’d be shooting yourself in the foot when
someone doesn’t add the keyword in other frameworks and you’re annoyed you
can’t add it.

On 18 Dec 2015, at 10:46 AM, Javier Soto via swift-evolution <swift-evolution@swift.org> wrote:

Does it seem like there's enough interesest in this proposal? If so,
what would be the next steps? Should I go ahead and create a PR on the
evolution repo, describing the proposal version that Joe suggested, with
classes closed for inheritance by default outside of a module?

Thanks!

On Tue, Dec 8, 2015 at 7:40 AM Matthew Johnson via swift-evolution <swift-evolution@swift.org> wrote:

I understand the rationale, I just disagree with it.

IMO adding a keyword to state your intention for inheritance is not a
significant obstacle to prototyping and is not artificial bookkeeping. I
really don't understand how this would conflict with "consequence-free"
rapid development. It is a good thing to require people to stop and think
before using inheritance. Often there is a more appropriate alternative.

The assumption that it is straightforward to fix problems within a
module if you later decide you made a mistake is true in some respects but
not in others. It is not uncommon for apps to be monolithic rather than
being well factored into separate modules, with many developers
contributing and the team changing over time. While this is not ideal it
is reality.

When you have the full source it is certainly *possible* to solve any
problem but it is often not straightforward at all. Here is an example of
a real-work scenario app developers might walk into:

1) A class is developed without subclassing in mind by one developer.
2) After the original developer is gone another developer adds some
subclasses without stopping to think about whether the original developer
designed for subclassing, thereby introducing subtle bugs into the app.
3) After the second developer is gone the bugs are discovered, but by
this time there are nontrivial dependencies on the subclasses.
4) A third developer who probably has little or no context for the
decisions made by previous developers is tasked with fixing the bugs.

This can be quite a knot to untangle, especially if there are problems
modifying the superclass to properly support the subclasses (maybe this
breaks the contract the superclass has with its original clients).

It may have been possible to avoid the whole mess if the second
developer was required to add 'inheritable' and 'overrideable' keywords or
similar. They are already required to revisit the source of it while
adding the keywords which may lead to consideration of whether the
implementation is sufficient to support inheritance in their currently
intended manner.

Implementation inheritance is a blunt tool that often leads to
unanticipated problems. IMO a modern language should steer developers away
from it and strive to reduce the cases where it is necessary or more
convenient. Making final the default would help to do this.

Supporting sealed classes and methods that can only be subclassed or
overridden within the same module is not in conflict with final by
default. Both are good ideas IMO and I would like to see both in Swift.

I hope the core team is willing to revisit this decision with community
input. If not I will let it go, although I doubt I will ever agree with
the current decision.

Matthew

Sent from my iPad

On Dec 7, 2015, at 10:30 PM, John McCall <rjmccall@apple.com> wrote:

>>> On Dec 7, 2015, at 7:18 PM, Matthew Johnson via swift-evolution <swift-evolution@swift.org> wrote:
>>> Defaults of public sealed/final classes and final methods on a
class by default are a tougher call. Either way you may have design issues
go unnoticed until someone needs to subclass to get the behavior they want.
So when you reach that point, should the system error on the side of rigid
safety or dangerous flexibility?
>>
>> This is a nice summary of the tradeoff. I strongly prefer safety
myself and I believe the preference for safety fits well with the overall
direction of Swift. If a library author discovers a design oversight and
later decides they should have allowed for additional flexibility it is
straightforward to allow for this without breaking existing client code.
>>
>> Many of the examples cited in argument against final by default have
to do with working around library or framework bugs. I understand the
motivation to preserve this flexibility bur don't believe bug workarounds
are a good way to make language design decisions. I also believe use of
subclasses and overrides in ways the library author may not have intended
to is a fragile technique that is likely to eventually cause as many
problems as it solves. I have been programming a long time and have never
run into a case where this technique was the only way or even the best way
to accomplish the task at hand.
>>
>> One additional motivation for making final the default that has not
been discussed yet is the drive towards making Swift a protocol oriented
language. IMO protocols should be the first tool considered when dynamic
polymorphism is necessary. Inheritance should be reserved for cases where
other approaches won't work (and we should seek to reduce the number of
problems where that is the case). Making final the default for classes and
methods would provide a subtle (or maybe not so subtle) hint in this
direction.
>>
>> I know the Swift team at Apple put a lot of thought into the
defaults in Swift. I agree with most of them. Enabling subclassing and
overriding by default is the one case where I think a significant mistake
was made.
>
> Our current intent is that public subclassing and overriding will be
locked down by default, but internal subclassing and overriding will not
be. I believe that this strikes the right balance, and moreover that it is
consistent with the general language approach to code evolution, which is
to promote “consequence-free” rapid development by:
>
> (1) avoiding artificial bookkeeping obstacles while you’re hacking
up the initial implementation of a module, but
>
> (2) not letting that initial implementation make implicit source and
binary compatibility promises to code outside of the module and
>
> (3) providing good language tools for incrementally building those
initial prototype interfaces into stronger internal abstractions.
>
> All the hard limitations in the defaults are tied to the module
boundary because we assume that it’s straightforward to fix any problems
within the module if/when you decided you made a mistake earlier.
>
> So, okay, a class is subclassable by default, and it wasn’t really
designed for that, and now there are subclasses in the module which are
causing problems. As long as nobody's changed the default (which they
could have done carelessly in either case, but are much less likely to do
if it’s only necessary to make an external subclass), all of those
subclasses will still be within the module, and you still have free rein to
correct that initial design mistake.
>
> John.
_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution

--

Javier Soto


-Dave

In C++, you also often see polymorphic type erasure containers built on top of types that themselves don't require dynamic dispatch, like `boost::any`, `std::function`, and the like. This is something Swift makes first-class with protocols and protocol types. You don't need virtual dispatch of implementations as much if you can introduce ad-hoc virtual dispatch of interfaces at any point.
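The point above can be sketched concretely; the type names here are invented for illustration:

```swift
protocol Shape {
    func area() -> Double
}

struct Square: Shape {
    let side: Double
    func area() -> Double { side * side }
}

struct Circle: Shape {
    let radius: Double
    func area() -> Double { .pi * radius * radius }
}

// [Shape] stores existentials: each element carries its own witness table,
// playing the role boost::any/std::function-style erasure plays in C++.
// The area() calls are dispatched dynamically even though Square and
// Circle are plain value types with no class hierarchy.
let shapes: [Shape] = [Square(side: 2), Circle(radius: 1)]
print(shapes.reduce(0) { $0 + $1.area() })
```

Dynamic dispatch is introduced at the point where the protocol type is used, rather than baked into the concrete types themselves.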

-Joe

···

On Dec 21, 2015, at 10:04 AM, Dave Abrahams via swift-evolution <swift-evolution@swift.org> wrote:

On Dec 20, 2015, at 3:51 PM, Michael Buckley via swift-evolution <swift-evolution@swift.org> wrote:

+0. This seems reasonable, and a lot of the arguments are compelling. The argument put forth about library design especially so. But coming from C++, where I have to prefix nearly every method in my classes with virtual, I'm worried that we could end up with the same problem in Swift.

We don't know what the dominant paradigm in Swift will be ten years from now. Inheritance has a raft of problems, but there's no guarantee that the alternatives will be better in the long run. I suspect they will be, but I also suspect we will find new and exciting problems in large codebases using more functional patterns.

There's a lot of excitement in the Swift community right now about final, value types, and other language features, but I fear that when the rest of the world jumps on the Swift bandwagon, most are just going to use classes exclusively over structs and continue their OOP practices, simply because it's what they're used to.

Making final the default may be a great way to discourage them. But it may also get us right back to where we are in C++ today, where programmers want virtual functions 99% of the time, but have to specify each function as virtual.

In my considerable experience with C++, that is not at all where we are today. Increasingly, C++ is becoming seen as a language for high-performance computing, and people working in that area learn that they don't want to pay for virtual dispatch when they don't have to. It is true that for some of them, reflexive use of OOP is hard to shake, but they do learn eventually. Note also that Swift is really the second major language to take value semantics seriously. The first was C++.

I love those parts of Swift. Generics and value-type structs and high performance from static binding. But I also love UIKit and AppKit and the loosey-goosey but highly productive Objective-C style of dynamic binding and subclassability everywhere.

There’s a great balance here in Swift between ‘struct’ and ‘class’ and two very different styles of programming, and in my opinion, this proposal is trying to extend what ARE benefits of one half of the language in a way that is likely to wreck the other half of the language. Which is why I’m -1.

  - Greg

···

On Dec 21, 2015, at 10:04 AM, Dave Abrahams via swift-evolution <swift-evolution@swift.org> wrote:

In my considerable experience with C++, that is not at all where we are today. Increasingly, C++ is becoming seen as a language for high-performance computing, and people working in that area learn that they don't want to pay for virtual dispatch when they don't have to. It is true that for some of them, reflexive use of OOP is hard to shake, but they do learn eventually. Note also that Swift is really the second major language to take value semantics seriously. The first was C++.

Presumably a goal for Swift is that application developers will use it to build user-facing apps for Apple’s platforms. And presumably a goal for Apple is that developers help promote Apple’s platforms by shipping apps that take advantage of the new OS features when they ship. I fear that you and others dramatically underestimate the difficulty of doing that. I acknowledge your three points. But understand that we are professionals trying to serve our mutual customers. Temporary hacks in the service of shipping is the nature of the business.

I don’t know how to make the case more strongly than I already have. This thread makes me worry that the team does not understand what it’s like for third party developers trying to serve our mutual customers.

Sincerely,

Curt

···

On Dec 21, 2015, at 11:50 AM, Jordan Rose <jordan_rose@apple.com> wrote:

If you replace a method on someone else's class, you don't actually know what semantics they're relying on. Of course Apple code will have bugs in it. Trying to patch over these bugs in your own code is (1) obviously not an answer Apple would support, but also (2) fraught with peril, and (3) likely to break in the next OS release.

TLDR: It's already unsafe to do this with the existing set of Swift features. Yes, this makes things "worse", but it's not something we're interested in supporting anyway.

-----------------------------------------------------------------------------
Curt Clifton, PhD
Software Engineer
The Omni Group
www.curtclifton.net

Frankly, I think having `final` in the language at all is a mistake. While I agree that we should prefer composition to inheritance*, declaring things final is hubris. The only reasonable use case I've seen is for optimization, but that smacks of developers serving the compiler rather than the converse. Bringing an analog of NS_REQUIRES_SUPER to Swift would be most welcome; that's as far as I'd go down the path of dictating framework usage.

I really like the direction this discussion has taken ;-):
Is there any counter argument beside performance (which imho should always be seen under the aspect of premature optimization) that speaks against making NS_REQUIRES_SUPER the default behavior?

I personally don't like this but I can't put my finger on why. Obviously there are some things where you really don't need to call super (mostly abstract methods), but you just said "default", which implies that we could have an opt-out attribute.

I will say, however, that making NS_REQUIRES_SUPER the default for overridable methods is separable from deciding which methods are overridable by default. Making sure the base method is called isn't really the same as knowing the base method is all that's called.

Agree. There are at least four possibilities from most to least restrictive:

* not overridable
* overridable but requires a call to super in a specific location in the overriding method (i.e. the first or last line)
* overridable but requires a call to super somewhere in the overriding method
* overridable with no restrictions
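Swift has no NS_REQUIRES_SUPER analog today, but the third level ("call super somewhere") can be emulated at runtime. This is only a sketch, and every name in it is hypothetical:

```swift
class Controller {
    private var superCalled = false

    // Overridable customization point; overrides are expected to call super.
    func viewDidLoad() {
        superCalled = true
    }

    // Final entry point that enforces the contract at runtime: it traps
    // if an override failed to call super.viewDidLoad().
    final func performLoad() {
        superCalled = false
        viewDidLoad()
        precondition(superCalled, "override of viewDidLoad() must call super")
    }
}

class Child: Controller {
    override func viewDidLoad() {
        super.viewDidLoad()  // satisfies the requirement
    }
}

Child().performLoad()
print("contract satisfied")
```

A real language feature would check this at compile time, and could additionally enforce the position of the super call (first or last line) for the second level.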

···

On Dec 21, 2015, at 1:26 PM, Jordan Rose via swift-evolution <swift-evolution@swift.org> wrote:

On Dec 20, 2015, at 3:40 , Tino Heth via swift-evolution <swift-evolution@swift.org> wrote:

Jordan


I think this is potentially getting beyond Swift language development and into much wider platform concerns, and so I realize that I’m perhaps arguing at the wrong level and in the wrong place. Sorry about that. That having been said:

There is a really big design difference between a library like Swift’s stdlib or Foundation, which have fairly straightforward interfaces and simple program flow in and out. (In both of these cases you generally call in, and any calls out to application code are explicit and mostly short-lived closures.) AppKit or UIKit, on the other hand, are incredibly porous and have quite complicated program flow between the framework and the application code. The framework design is more like a skeleton upon which the application code hangs, and which in turns moves the kit objects about, rather than a self-contained system with a lot of invariants.

I can’t prove any causation, but I would certainly argue that the dynamic nature and possible overridability of even things that Apple doesn’t specifically intend to allow overriding is one of the primary reasons why AppKit has survived for 20+ years and spawned arguably the most successful application framework in history in UIKit. On the other hand, efficiency and safety have rarely been major issues.

TLDR: I don’t think using the design trade-offs of Array (which is, after all, a value type and can’t be subclassed anyway) inside stdlib, can be very usefully broadened to apply to reference types in application frameworks.

  - Greg

···

On Dec 21, 2015, at 11:50 AM, Jordan Rose via swift-evolution <swift-evolution@swift.org> wrote:

- There's a difference between "we're not going to optimize" and "we're not going to optimize now". Objective-C's "everything uses objc_msgSend" model is essentially unoptimizable. It's not that the developer can't work around that when performance is necessary; it's that the resulting code doesn't feel like Objective-C. Swift can do better, and even with its current semantics it does do better, for free. (And optimizations in frameworks are incredibly important. Where do you think your app spends most of its CPU time? I would guess for many many non-game apps, it's in framework code.)

- A major goal of Swift is safety. If you are writing a safe type built on unsafe constructs (like, say, Array), it is imperative that you have some control over your class invariants to guarantee safety. At the same time, your clients shouldn't have to know that you're built on unsafe constructs.

That last one is really the most important one. If you replace a method on someone else's class, you don't actually know what semantics they're relying on. Of course Apple code will have bugs in it. Trying to patch over these bugs in your own code is (1) obviously not an answer Apple would support, but also (2) fraught with peril, and (3) likely to break in the next OS release.

TLDR: It's already unsafe to do this with the existing set of Swift features. Yes, this makes things "worse", but it's not something we're interested in supporting anyway.