Proposal: Universal dynamic dispatch for method calls

3. C++: C++ is a step up from C in terms of introducing dynamism into the model with virtual functions. Sadly, C++ also provides a hostile model for static optimizability - the existence of placement new prevents a lot of interesting devirtualization opportunities, and generally makes the compiler’s life difficult.

Interesting, I haven’t heard that placement new is problematic before. What are the problems involved (feel free to reply off-list). Is it legal to use placement new to replace an existing instance and change its vtable pointer or do aliasing rules prohibit that?

It looks like there was a recent discussion on reddit that explained more:
https://www.reddit.com/r/programming/comments/3wla9f/chris_lattner_author_of_the_swift_programming/cxx7k0m

The upshot of this is that Swift isn’t squarely in either of the static or dynamic camps: it aims to provide a very predictable performance model (someone writing a bootloader or firmware can stick to using Swift structs and have a simple guarantee of no dynamic overhead or runtime dependence)

To be fair, we still emit implicit heap allocations for value types whose size isn’t known. So your boot loader will have to avoid generics (and non-@noescape closures), at least :-)

Yes. I would expect such system code to be built in a mode that would warn or error when any of these occurred. These are all statically knowable situations.

Actually, for code within a single module, are we always able to fully instantiate all generics?

We have the mechanics & design to do that, but I don’t think we have any user-visible way to expose it. There is certainly more to be done in any case.
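
For concreteness, a hedged sketch of where those implicit allocations come from; whether a given call gets specialized depends on optimization settings and module boundaries, so the comments describe tendencies, not guarantees:

  // An unspecialized generic must handle a T whose size isn't known at compile
  // time, so the value is passed indirectly; the escaping (non-@noescape)
  // closure also needs a heap-allocated context to keep its captured copy alive.
  func makeGetter<T>(_ value: T) -> () -> T {
      return { value }
  }

  let getAnswer = makeGetter(42)   // within a single module the optimizer can often
  print(getAnswer())               // specialize this for Int and avoid the boxing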

Finally, while it is possible that a JIT compiler might be interesting someday in the Swift space, if we do things right, it will never be “worth it” because programmers will have enough ability to reason about performance at their fingertips. This means that there should be no Java or Javascript-magnitude "performance delta" sitting on the table waiting for a JIT to scoop up. We’ll see how it works out long term, but I think we’re doing pretty well so far.

JITs can teach us a lot about optimizing for compile time though, which would help REPL usage, tooling and scripting.

Absolutely. I’m not anti-JIT at all, and in fact our REPL and #! script mode use a JIT (as I’m sure you know).

I was trying to convey that building a model that depends on a JIT for performance means it is very difficult to practically target spaces where a JIT won’t work (e.g. because of space constraints). If you have a model that doesn’t rely on a JIT, then a JIT can provide value add.

-Chris

···

On Dec 12, 2015, at 3:36 PM, Slava Pestov <spestov@apple.com> wrote:

On Dec 11, 2015, at 11:45 PM, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

random note: my previous email was very high level, so I’ve made an effort to make this more concrete and include examples, to avoid confusion.

Swift isn’t squarely in either of the static or dynamic camps: it aims to provide a very predictable performance model … while also providing an expressive and clean high level programming model. A focus of Swift … is to provide an apparently simple programming model. However, Swift also intentionally "cheats" in its global design by mixing in a few tricks to make the dynamic parts of the language optimizable by a static compiler in many common cases, without requiring profiling or other dynamic information.

I’d say that Swift is an “opportunistic” language, in that it provides a very dynamic “default" programming model, where you don’t have to think about the fact that a static compiler is able to transparently provide great performance - without needing the overhead of a JIT.

You really need to include the compilation model and thus the resultant programmer model into the story, and the programmer model is what really matters, IMHO.

First, two clarification requests for Chris on two things I imagine might lead to confusion on this thread:

When you say “programmer model,” I understand you to mean "how a Swift programmer thinks about the language’s semantics while writing Swift code, without regard to how they’re implemented in the compiler.”

Yes. Except in extreme cases, the interesting question isn’t whether it is “possible" to do thing X in language Foo, it is to ask whether Foo “encourages" X and how it rewards it. For example, people can (and do!) implement v-table dynamic dispatch systems in C to manually build OO models, but C requires tons of boilerplate to do that, and rewards those efforts with lack of type checking, no optimization of those dispatch mechanisms, and a typically unpleasant debugger experience.

What I really care about is “what kind of code is written by a FooLang programmer in practice”, which is what I refer to as the programmer model encouraged by FooLang. This question requires you to integrate across large bodies of different code and think about the sort of programmer who wrote it (e.g. “systemsy” people often write different code than “scripty” people) and how FooLang’s design led to that happening. People end up writing code a certain way because of the many obvious and subtle incentives inherent in the language. When designing a programming language from scratch or considering adding a feature to an existing one, the “big” question is what the programmer model should be and whether a particular aggregation of features will provide it.

As a concrete example, consider “let”. A Swift goal is to “encourage" immutability, without “punishing” mutability (other languages take a position of making mutability very painful, or don’t care about immutability). This is why we use “let” as a keyword instead of “const” or “let mut". If it were longer than “var”, some people would just use var everywhere with the argument that consistency is better. Warning about vars that could be lets is another important aspect of this position.
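
A small illustration of that incentive structure (the diagnostic wording below is paraphrased):

  func total(of values: [Int]) -> Int {
      var sum = 0                       // mutation is fine where it's genuinely needed
      for value in values { sum += value }
      return sum
  }

  let limit = 10                        // "let" costs no more keystrokes than "var"
  var ceiling = limit * 2               // warning: never mutated; consider changing to "let"
  print(total(of: [1, 2, 3]), ceiling)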

As a more general example, Swift’s goal is to provide a scalable programming model, where it is easy, friendly and familiar for people new to Swift and/or new to programming. Its defaults are set up so that common mistakes don’t lead to bugs, and so that forgetting to think about something shouldn’t paint you into a corner. OTOH, Swift doesn’t achieve this by being “watered down” for newbies; it does this by factoring the language so that power-user features can be learned at the appropriate point on the learning curve. “Niche” features for power users make sense when they enable new things to be expressed, new markets to be addressed, or new performance wins to be had. This is key to Swift being able to scale from “low level system programming” all the way up to “scripting”, something I’m quite serious about.

If you’re interested in examples of niche power-user features, they could be things like inline assembly support, “#pragma pack” equivalents, enforced language subsets for constrained environments, or a single-ownership / borrow / move model to guarantee no ARC overhead or runtime interaction. So long as the feature doesn’t complicate the basic model for all Swift programmers, allowing more expert users to have more power and control is a (very) good thing IMO.

When you say “dynamic,” I take that to mean any kind of dispatch based on runtime type — whether implemented using vtables a la C++, message dispatch a la Objective-C, string-based lookup in a hash a la Javascript, or anything else that uses something’s runtime type to resolve a method call.

Do I understand you correctly?

Yes, I’d also include checked downcasting, since that relies on runtime type as well. It is admittedly a stretch, but I include mark and sweep GCs as well, since these need runtime type descriptors to be able to walk pointer graphs in the “mark" phase.
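
For concreteness, a minimal sketch of both flavors of runtime-type behavior (the classes are illustrative):

  class Animal { func speak() -> String { return "..." } }
  class Dog: Animal { override func speak() -> String { return "Woof" } }

  let pets: [Animal] = [Animal(), Dog()]
  for pet in pets {
      print(pet.speak())             // dynamic dispatch: the runtime type picks the method
      if let dog = pet as? Dog {     // checked downcast: also consults the runtime type
          print("a dog says \(dog.speak())")
      }
  }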

On this thread, there are (I think?) two related goals at hand:

1. Allow dynamic dispatch of protocol extension methods even when the method does not appear in the extended protocol (the current behavior is sketched in the example below).
2. Provide a good mental model of the language for programmers, and prevent programmer errors caused by misunderstandings about dispatch rules (if such misunderstandings do indeed exist in the wild).
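
For concreteness, a sketch of the current behavior the first goal is about (the names are illustrative):

  protocol Greeter {
      func greet() -> String                       // a requirement: dispatched dynamically
  }

  extension Greeter {
      func greet() -> String { return "Hello" }
      func farewell() -> String { return "Bye" }   // not a requirement: dispatched statically
  }

  struct Pirate: Greeter {
      func greet() -> String { return "Ahoy" }
      func farewell() -> String { return "Arr" }
  }

  let someone: Greeter = Pirate()
  print(someone.greet())       // "Ahoy": the requirement goes through the witness table
  print(someone.farewell())    // "Bye": the extension-only method binds to the extension
  print(Pirate().farewell())   // "Arr": called on the concrete type, Pirate's version wins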

I’ll copy and paste what Chris wrote into a “Swift philosophy” checklist for Brent’s proposal, and for any others working toward these goals. Chris, please correct me if I’m putting words in your mouth!
Provide a programmer model that:
- is high level
- is expressive and clean
- is dynamic by default
- doesn’t require a programmer to think about the fact that a static compiler is able to transparently provide great performance

Provide a performance model that:
- is predictable
- makes the dynamic parts of the language optimizable by a static compiler in many common cases
- does not require profiling or other dynamic information
- does not require JIT compilation

Yes, this is a good summary.

How do we resolve tension between these goals? The programmer model is what really matters, but we cannot reason about it without considering its impact on the compilation model. We should give the compiler opportunities to “cheat” in its optimization whenever we can do so without undermining the programmer model.

I’d consider adding a keyword (really, a decl modifier) to make it clear what the behavior is. This provides predictability.

-Chris

···

On Dec 12, 2015, at 10:04 AM, Paul Cantrell <cantrell@pobox.com> wrote:

I realize I’m straying from the topic of the thread (and Brent’s neglected proposal, which I really do mean to think some more about), but how can I not chime in to these wonderful musings on language design?

When you say “programmer model,” I understand you to mean "how a Swift programmer thinks about the language’s semantics while writing Swift code, without regard to how they’re implemented in the compiler.”

Yes. Except in extreme cases, the interesting question isn’t whether it is “possible" to do thing X in language Foo, it is to ask whether Foo “encourages" X and how it rewards it.

Yes! When students ask why they should take Theory of Computation, part of my answer is that it’s good to get a really deep handle on the question of what’s possible in a language, and how very different that is from the question of what’s elegant in a language. The Church-Turing Thesis closes the door on a whole category of questions about what a given language can do: algorithmically, all these languages we work with are equivalent! It’s a really freeing insight once you’ve wrapped your head around it.

What I really care about is “what kind of code is written by a FooLang programmer in practice”, which is what I refer to as the programmer model encouraged by FooLang.

When designing a programming language from scratch or considering adding a feature to an existing one, the “big” question is what the programmer model should be and whether a particular aggregation of features will provide it.

Thanks for this. I was thinking “programmer model” meant only the programmer’s mental model of the language — but you’re talking about something broader and deeper: the style, the culture, the patterns of thought, and the aesthetics that arise from the experience of working with a particular language.

That’s wonderful. And daunting.

So … how do you test this? How do you evaluate language features for it? I think of these questions about protocol extensions, and trying to predict the resulting programmer model seems a fool’s errand.

This is why we use “let” as a keyword instead of “const” or “let mut". If it were longer than “var”, some people would just use var everywhere with the argument that consistency is better.

I love this example. Yes, of course, we programmers would all concoct some post-hoc justification for doing what’s comfortable to us.

Swift doesn’t achieve this by being “watered down” for newbies; it does this by factoring the language so that power-user features can be learned at the appropriate point on the learning curve. “Niche” features for power users make sense when they enable new things to be expressed, new markets to be addressed, or new performance wins to be had. This is key to Swift being able to scale from “low level system programming” all the way up to “scripting”, something I’m quite serious about.

The other half of this is that the language doesn’t impose any cognitive burden on those who don’t use the niche / expert features. I don’t want to be an expert in everything all the time; I want to be able to focus on only the tools appropriate to the problem at hand. I don’t want to have to worry about bumping into the unshielded circular saw every time I pick up a screwdriver, even if I do know how to use a circular saw.

I like what Swift has done on this front so far. UnsafePointer is a great example. Swift can still provide bare memory access without making it ubiquitous. Take that, C++!

On which note: is there thought of eventually bootstrapping the Swift compiler?

Cheers,

Paul

I realize I’m straying from the topic of the thread (and Brent’s neglected proposal, which I really do mean to think some more about), but how can I not chime in to these wonderful musings on language design?

No problem, I’m taking time to pontificate here for the benefit of the community; hopefully it will pay itself back over time, because people will understand the rationale / thought process that led to Swift better :-)

When you say “programmer model,” I understand you to mean "how a Swift programmer thinks about the language’s semantics while writing Swift code, without regard to how they’re implemented in the compiler.”

Yes. Except in extreme cases, the interesting question isn’t whether it is “possible" to do thing X in language Foo, it is to ask whether Foo “encourages" X and how it rewards it.

Yes! When students ask why they should take Theory of Computation, part of my answer is that it’s good to get a really deep handle on the question of what’s possible in a language, and how very different that is from the question of what’s elegant in a language. The Church-Turing Thesis closes the door on a whole category of questions about what a given language can do: algorithmically, all these languages we work with are equivalent!

Yep, almost. I’m still hoping to get an infinite tape someday :-)

Thanks for this. I was thinking “programmer model” meant only the programmer’s mental model of the language — but you’re talking about something broader and deeper: the style, the culture, the patterns of thought, and the aesthetics that arise from the experience of working with a particular language.

Right.

So … how do you test this?

You can only test it by looking at a large enough body of code and seeing what problems people face. Any language that is used widely will show evidence of the problems people are having. There are shallow problems like “I have to type a few extra characters that are redundant and it annoys me”, and large problems like “Two years into my project, I decided to throw it away and rewrite it because it had terrible performance / didn’t scale / was too buggy / couldn’t be maintained / etc". I don’t believe that there is ever a metric of “ultimate success", but the more big problems people have, the more work there is left to be done.

The good news is that we, as programmers, are a strongly opinionated group, and if something irritates us we complain about it :-). It is somewhat funny that (through selection bias) I have one of the largest lists of gripes about Swift, because I see a pretty broad range of what people are doing and what isn’t working well (a result of reading almost everything written about Swift, as well as tracking many, many bug reports and feature requests). This drives my personal priorities, and explains why I obsess about weird little things like getting implicit conversions for optionals right, how the IUO model works, and making sure the core type checker can be fixed, but prefer to push off “simple” syntactic sugar for later when other pieces come in.

How do you evaluate language features for it? I think of these questions about protocol extensions, and trying to predict the resulting programmer model seems a fool’s errand.

Adding a feature can produce surprising outcomes. A classic historical example is when C++ added templates to the language without realizing they were a Turing-complete meta-language. Some time later this was discovered, and a new field of template metaprogramming came into being. Today, there are differing opinions about whether this was a good or bad thing for the C++ programmer model.

That said, most features have pretty predictable effects, because most features are highly precedented in other systems, and we can see their results and the emergent issues with them. Learning from history is extremely important. You can also think about the feature in terms of common metrics by asking things like “what is the error of omission?”, which occurs when someone fails to think about the feature. For example, if methods defaulted to final, then the error of omission would be that someone didn’t think about overridability, and then discovered later that they actually wanted it. If symbols defaulted to public, then people would naturally export way too much stuff, because they wouldn’t think about marking them internal, etc.
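
For concreteness, a hedged sketch of what those two errors of omission would look like; the defaults described in the comments are hypothetical, not today’s Swift defaults, and the types are illustrative:

  class Logger {
      func log(_ message: String) { print(message) }
  }
  // Under a hypothetical final-by-default rule, writing nothing extra would seal
  // log(_:), and a client who later needs to subclass Logger and override it is
  // blocked until the author explicitly opts the method in to overriding.

  struct LineParser {
      func splitFields() {}      // meant as an internal implementation detail
  }
  // Under a hypothetical public-by-default rule, writing nothing extra would
  // export both LineParser and splitFields() as API, purely by omission.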

Swift doesn’t achieve this by being “watered down” for newbies; it does this by factoring the language so that power-user features can be learned at the appropriate point on the learning curve. “Niche” features for power users make sense when they enable new things to be expressed, new markets to be addressed, or new performance wins to be had. This is key to Swift being able to scale from “low level system programming” all the way up to “scripting”, something I’m quite serious about.

The other half of this is that the language doesn’t impose any cognitive burden on those who don’t use the niche / expert features. I don’t want to be an expert in everything all the time; I want to be able to focus on only the tools appropriate to the problem at hand. I don’t want to have to worry about bumping into the unshielded circular saw every time I pick up a screwdriver, even if I do know how to use a circular saw.

I like what Swift has done on this front so far. UnsafePointer is a great example. Swift can still provide bare memory access without making it ubiquitous. Take that, C++!

Right!

On which note: is there thought of eventually bootstrapping the Swift compiler?

There are no short term plans. Unless you’d consider rewriting all of LLVM as part of the project (something that would be awesome, but that I wouldn’t recommend :-), we’d need Swift to be able to import C++ APIs. I’m personally hopeful that we’ll be able to tackle at least some of that in Swift 4, but we’ll see - no planning can be done for Swift 4 until Swift 3 starts to wind down.

-Chris

···

On Dec 13, 2015, at 9:34 PM, Paul Cantrell <cantrell@pobox.com> wrote:

I forgot the most important part. The most important aspect of evaluating something new is to expose it to ridiculously smart people, to see what they think.

For best effect, they should come from diverse backgrounds and perspectives, and be willing to share their thoughts in a clear and direct way. This is one of the biggest benefits of all of Swift being open source - public design and open debate directly lead to a better programming language.

-Chris

···

On Dec 13, 2015, at 10:03 PM, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

How do you evaluate language features for it? I think of these questions about protocol extensions, and trying to predict the resulting programmer model seems a fool’s errand.

Adding a feature can produce surprising outcomes.

No problem, I’m taking time to pontificate here for the benefit of the community; hopefully it will pay itself back over time, because people will understand the rationale / thought process that led to Swift better :-)

I appreciate these philosophical musings about language design greatly! They make for very interesting reading and definitely shed more light on some of the decisions made in the design of Swift.

You can also think about the feature in terms of common metrics by asking things like “what is the error of omission?”, which occurs when someone fails to think about the feature. For example, if methods defaulted to final, then the error of omission would be that someone didn’t think about overridability, and then discovered later that they actually wanted it.

In this example I think it is reasonable to consider “what is the error of omission?” from the reverse standpoint. Because methods do not default to final, somebody may not think about inheritance / overridability and fail to specify final. The class or method may be one that really should not be inheritable / overridable, or it may be one where this is reasonable but the implementation is not well designed to support it. In this case they may later discover that they have buggy subclasses upon which a lot of code depends. IMHO this is much more serious than discovering that one should have allowed inheritance / overridability, which can be added with a non-breaking change (from a semantic point of view; maybe implementation is not so simple when ABI resilience is a requirement).

Inheritance is a pretty complex tool to wield *without* prior consideration IMHO. Swift already offers a lot of tools that reduce the need for inheritance and will hopefully offer more in the future (such as improved protocols / generics, better support for composition through synthesized forwarding, etc). Why not *require* some forethought to use inheritance? This would provide subtle guidance towards other solutions where appropriate just as Swift subtly guides users towards immutability.

As an aside, final is actually the right choice for the majority of classes I encounter in iOS apps. This may not always be the case in system frameworks but they *should* receive much more careful design than application level code.

Please don’t take this as pedantic. I’m genuinely curious to hear more about why you apply “error of omission” one way and not the other in this case.

Thanks,
Matthew

You can also think about the feature in terms of common metrics by asking things like “what is the error of omission?”, which occurs when someone fails to think about the feature. For example, if methods defaulted to final, then the error of omission would be that someone didn’t think about overridability, and then discovered later that they actually wanted it.

In this example I think it is reasonable to consider “what is the error of omission?” from the reverse standpoint.

Yes, absolutely.

Because methods do not default to final, somebody may not think about inheritance / overridability and fail to specify final. The class or method may be one that really should not be inheritable / overridable, or it may be one where this is reasonable, …

Understood, I wasn’t trying to present a well-rounded analysis of this decision, I just wanted to use it as a simple example.

-Chris

···

On Dec 14, 2015, at 8:23 AM, Matthew Johnson <matthew@anandabits.com> wrote:

No problem, I’m taking time to pontificate here for the benefit of the community; hopefully it will pay itself back over time, because people will understand the rationale / thought process that led to Swift better :-)

Cheers to that. It’s helpful to get these philosophical thoughts from the core team, as well as little “smell checks” on specific proposals — both for taste and for feasibility. I don’t think it will stop anyone from sharing opinions (programmers, strong opinions, like you said), but it does help guide discussion.

I see a pretty broad range of what people are doing and what isn’t working well … This drives my personal priorities, and explains why I obsess about weird little things like getting implicit conversions for optionals right, how the IUO model works…

It’s no “weird little thing” — that’s been huge. Confusing implicit optional conversions (or lack thereof) + lack of unwrapping conveniences + too many things unnecessarily marked optional in Cocoa all made optionals quite maddening in Swift 1.0. When I first tried the language, I thought the whole approach to optionals might be a mistake.

Yet with improvements on all those fronts, I find working with optionals in Swift 2 quite pleasant. In 1.0, when optionals forced me to stop and think, it was usually about the language and how to work around it; in 2.x, when optionals force me to stop and think, it’s usually about my code, what I’m modeling with it, and where there are gaps in my reasoning. Turns out the basic optionals approach was solid all along, but needed the right surrounding details to make it play out well. Fiddly details had a big impact on the language experience.

Still, it seems like a lot of people fall back on forced unwrapping rather than trying to fully engage with the type system and think through their unwrappings. Is this a legacy of 1.x? Or does the language still nudge that way? I see a lot of instances of “foo!” in the wild, especially from relative beginners, that seem to be a reflexive reaction to a compiler error and not a carefully considered assertion about invariants guaranteeing safe unwrapping. This discussion makes me wonder: conversely to the decision of making “let” as short as “var,” perhaps “foo!” is too easy to type. Should the compiler remove fixits that suggest forced / implicit unwraps? Should it even be something ugly like “forceUnwrap!(foo)”? (OK, probably not. But there may be more gentle ways to tweak the incentives.)
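
For concreteness, the two reactions look roughly like this (the types are illustrative):

  struct User { var name: String? }
  let user = User(name: nil)

  // The reflexive fix for the "optional not unwrapped" compiler error:
  // let greeting = "Hello, \(user.name!)"   // compiles, but traps because name is nil

  // Engaging with the type system makes the nil case an explicit decision:
  if let name = user.name {
      print("Hello, \(name)")
  } else {
      print("Hello, anonymous")
  }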

So there’s the notion of the “programmer model” playing out in practice.

Adding a feature can produce surprising outcomes. A classic historical example is when C++ added templates to the language without realizing they were a Turing-complete meta-language. Some time later this was discovered, and a new field of template metaprogramming came into being.

I remember my mixture of delight & horror when I first learned that! (I was an intern for HP’s dev tools group back in the mid-90s, and spent a summer trying to find breaking test cases for their C++ compiler. Templates made it like shooting fish in a barrel — which is nothing against the compiler devs, who were awesome, but just a comment on the deep darkness of the corners of C++.)

That experience makes me wonder whether in some cases the Swift proposal process might put the cart before the horse by having a feature written up before it’s implemented. With some of these proposals, at least the more novel ones where the history of other languages isn’t as strong a guide, it could be valuable to have a phase where it’s prototyped on a branch and we all spend a little time playing with a feature before it’s officially accepted.

One of my favorite features of Swift so far has been its willingness to make breaking changes for the health of the language. But it would be nice to have those breaking changes happen _before_ a release when possible!

I forgot the most important part. The most important aspect of evaluating something new is to expose it to ridiculously smart people, to see what they think.

Well, I don’t have the impression that the Swift core team is exactly hurting on _that_ front. But…

This is one of the biggest benefits of all of Swift being open source - public design and open debate directly lead to a better programming language.

…yes, hopefully many eyes bring value that’s complementary to the intelligence & expertise of the core team. There’s also a lot to be said for the sense of ownership and investment that comes from involving people in the decision making. That certainly pays dividends over time, in so many different community endeavors.

I’m grateful and excited to be involved in thinking about the language, as I’m sure are many others on this list. When it comes right down to it, I trust the core team to do good work because you always have — but it’s fun to be involved, and I do hope that involvement indeed proves valuable to the language.

Cheers,

Paul

–––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
https://innig.net · @inthehands · http://siestaframework.com/

Understood, I wasn’t trying to present a well-rounded analysis of this decision, I just wanted to use it as a simple example.

Makes sense. If you're ever in the mood to share more on this I would find it very interesting (even if it doesn't convince me :) ).

Matthew

I see a pretty broad range of what people are doing and what isn’t working well … This drives my personal priorities, and explains why I obsess about weird little things like getting implicit conversions for optionals right, how the IUO model works…

It’s no “weird little thing” — that’s been huge. Confusing implicit optional conversions (or lack thereof) + lack of unwrapping conveniences + too many things unnecessarily marked optional in Cocoa all made optionals quite maddening in Swift 1.0. When I first tried the language, I thought the whole approach to optionals might be a mistake.

Yet with improvements on all those fronts, I find working with optionals in Swift 2 quite pleasant. In 1.0, when optionals forced me to stop and think, it was usually about the language and how to work around it; in 2.x, when optionals force me to stop and think, it’s usually about my code, what I’m modeling with it, and where there are gaps in my reasoning. Turns out the basic optionals approach was solid all along, but needed the right surrounding details to make it play out well. Fiddly details had a big impact on the language experience.

Right, but what I’m getting at is that there is more work to be done in Swift 3 (once Swift 2.2 is out of the way). I find it deeply unfortunate that stuff like this still haunts us:

  let x = foo() // foo returns a T!
  let y = [x, x] // without looking, does this produce "[T!]" or "[T]"???

There are other similar problems where the implicit promotion from T to T? interacts with same-type constraints in unexpected ways, for example around the ?? operator. There are also the insane type checker complexity and performance issues that arise from these implicit conversions. These need to be fixed, as they underlie many of the symptoms that people observe.
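
A small illustration of the ?? interaction (the values are illustrative):

  let maybe: Int? = nil
  let fallback: Int? = 42
  // The implicit promotion from T to T? means both lines type-check, but they
  // resolve to different overloads of ?? and produce different result types:
  let a = maybe ?? 0           // Int: 0
  let b = maybe ?? fallback    // Int?: Optional(42); the result can still be nil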

Still, it seems like a lot of people fall back on forced unwrapping rather than trying to fully engage with the type system and think through their unwrappings. Is this a legacy of 1.x? Or does the language still nudge that way? I see a lot of instances of “foo!” in the wild, especially from relative beginners, that seem to be a reflexive reaction to a compiler error and not a carefully considered assertion about invariants guaranteeing safe unwrapping.

Unclear. I’m aware of many unfortunate uses of IUOs that are the result of language limitations that I’m optimistic about fixing in Swift 3 (e.g. two-phase initialization situations like awakeFromNib that force a property to be an IUO or optional unnecessarily), but I’m not aware of pervasive use of force unwraps. Maybe we’re talking about the same thing, where the developer decided to use T? instead of T!.
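
The canonical instance of that limitation is an outlet that is populated after initialization but before use, which is conventionally declared as an IUO (this sketch assumes a UIKit app):

  import UIKit

  class AvatarCell: UITableViewCell {
      // Set by the nib-loading machinery after init (around awakeFromNib), so it
      // can't be a non-optional stored property; today the workaround is a T!.
      @IBOutlet var avatarView: UIImageView!
  }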

This discussion makes me wonder: conversely to the decision of making “let” as short as “var,” perhaps “foo!” is too easy to type. Should the compiler remove fixits that suggest forced / implicit unwraps? Should it even be something ugly like “forceUnwrap!(foo)”? (OK, probably not. But there may be more gentle ways to tweak the incentives.) So there’s the notion of the “programmer model” playing out in practice.

It depends on “how evil” you consider force unwrap to be. If you draw an analogy to C, the C type system has a notion of const pointers. It is a deeply flawed design for a number of reasons :-), but it does allow modeling some useful things. However, if you took away the ability to "cast away" const (const_cast in C++ nomenclature), then the model wouldn’t work (too many cases would be impossible to express). I put force unwrap in the same sort of bucket: without it, optionals would force really unnatural code in corner cases. It is “bad” in some sense, but its presence is visible and greppable enough to make it carry weight. The fact that ! is a unifying scary thing with predictable semantics in Swift is a good thing IMO. From my perspective, I think the Swift community has absorbed this well enough :-)

Here is another (different but supportive) way to look at why we treat unsafety in Swift the way we do:

With force unwrap as an example, consider an API like UIImage(named: "foo"). It obviously can fail if "foo.png" is missing, but when used in an app context, the overwhelming use case is loading an image out of your app bundle. In that case, the only way it can fail is if your app is somehow mangled. Should we require developers to write recovery code to handle that situation?
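
A sketch of the two situations, assuming an iOS app context (the helper function and its names are hypothetical):

  import UIKit

  // Shipping asset: if this is missing, the app bundle itself is broken, so
  // forcing every caller to write recovery code would be busywork; hence the '!'.
  let icon = UIImage(named: "AppIcon")!

  // Genuinely uncertain input: here the failable initializer earns its keep,
  // because the caller can write meaningful fallback code.
  func image(named name: String, orElse placeholder: UIImage) -> UIImage {
      return UIImage(named: name) ?? placeholder
  }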

To feature creep the discussion even more, let’s talk about object allocation in general. In principle, malloc(16 bytes) can fail and return nil, which means that allocation of any class type can fail. Should we model this by saying that all classes have a failable initializer, and expect callers to write recovery code to handle this situation? If you’re coming from an ObjC perspective, should a class be expected to handle the situation when NSObject’s -init method returns nil?

You can predict my opinion based on the current Swift design: the answer to both questions is no. In the first case, we want the API to allow the developer to write failure-handling code in the situations where they want to, and in the situations where they don’t care they can use !. In the latter case, we don’t think that primitive object allocation should ever fail (and if it does, it should be handled by the runtime or some OS service like purgeable memory), and thus the app developer should never have to think about it.

This isn’t out of laziness: “error handling” and “recovery” code not only needs to be written, but it needs to be *correct*. Unless there is a good way to test the code that is written, it is better to not write it in the first place. Foisting complexity onto a caller (which is what UIImage is doing) is something that should only be done when the caller may actually be able to write useful recovery code, and this only works (from a global system design perspective) if the developer has an efficient way to say “no really, I know what I’m doing in this case, leave me alone”. This is where ! comes in. Similarly, IUOs are a way to balance an equation involving the reality that we’ll need to continue importing unaudited APIs for a long time, as well as a solution for situations where direct initialization of a value is impractical.

This sort of thought process and design is what got us to the current Swift approach. It balances many conflicting goals, aiming to produce a programming model that leads to reliable code being written the first time. In the cases when it isn’t reliable, it is hopefully testable, e.g. by “failing fast” (https://en.wikipedia.org/wiki/Fail-fast).

Adding a feature can produce surprising outcomes. A classic historical example is when C++ added templates to the language without realizing they were a Turing-complete meta-language. Some time later this was discovered, and a new field of template metaprogramming came into being.

I remember my mixture of delight & horror when I first learned that! (I was an intern for HP’s dev tools group back in the mid-90s, and spent a summer trying to find breaking test cases for their C++ compiler. Templates made it like shooting fish in a barrel — which is nothing against the compiler devs, who were awesome, but just a comment on the deep darkness of the corners of C++.)

Sadly, templates aren’t the only area of modern C++ that have that characteristic… :-) :-)

That experience makes me wonder whether in some cases the Swift proposal process might put the cart before the horse by having a feature written up before it’s implemented. With some of these proposals, at least the more novel ones where the history of other languages isn’t as strong a guide, it could be valuable to have a phase where it’s prototyped on a branch and we all spend a little time playing with a feature before it’s officially accepted.

I’m of two minds about this. On the one hand, it can be challenging that people are proposing lots of changes that are more “personal wishlist” items than things they plan to implement and contribute themselves. On the other hand, we *want* the best ideas from the community, and don’t want to stymie or overly “control” the direction of Swift if it means that we don’t listen to everyone. It’s a hard problem, one that we’ll have to figure out as a community.

Another way of looking at it: Just because you’re a hard core compiler engineer, it doesn’t mean your ideas are great. Just because you’re not a hard core compiler engineer, it doesn’t mean your ideas are bad.

One of my favorite features of Swift so far has been its willingness to make breaking changes for the health of the language. But it would be nice to have those breaking changes happen _before_ a release when possible!

+1. I think that this is the essential thing that enables Swift to be successful over the long term. Swift releases are time bound (to a generally yearly cadence), Swift is still young, and we are all learning along the way. Locking it down too early would be bad for its long term health -- but it also clearly needs to settle over time (and sooner is better than later).

Overall, we knew that it would be a really bad idea to lock down Swift before it was open source. There are a lot of smart people at Apple, of course, but there are also a lot of smart people outside, and we want to draw on the best ideas from wherever we can get them.

I forgot the most important part. The most important aspect of evaluating something new is to expose it to ridiculously smart people, to see what they think.

Well, I don’t have the impression that the Swift core team is exactly hurting on _that_ front. But…

Frankly, one of my biggest surprises since we’ve open sourced Swift is how “shy” some of the smartest engineers are. Just to pick on one person, did you notice that Slava covertly fixed 91% of the outstanding practicalswift compiler crashers today? Sheesh, he makes it look easy! Fortunately for all of us, Slava isn’t the only shy one…

This is one of the biggest benefits of all of Swift being open source - public design and open debate directly lead to a better programming language.

…yes, hopefully many eyes bring value that’s complementary to the intelligence & expertise of the core team. There’s also a lot to be said for the sense of ownership and investment that comes from involving people in the decision making. That certainly pays dividends over time, in so many different community endeavors.

Yes it does. The thing about design in general and language design in particular is that the obviously good ideas and obviously bad ideas are both “obvious". The ones that need the most debate are the ones that fall in between. I’ll observe that most ideas fall in the middle :-)

I’m grateful and excited to be involved in thinking about the language, as I’m sure are many others on this list. When it comes right down to it, I trust the core team to do good work because you always have — but it’s fun to be involved, and I do hope that involvement indeed proves valuable to the language.

I’m glad you’re here!

-Chris

···

On Dec 14, 2015, at 1:46 PM, Paul Cantrell <cantrell@pobox.com> wrote: