I see a pretty broad range of what people are doing and what isn’t working well … This drives my personal priorities, and explains why I obsess about weird little things like getting implicit conversions for optionals right, how the IUO model works…
It’s no “weird little thing” — that’s been huge. Confusing implicit optional conversions (or lack thereof) + lack of unwrapping conveniences + too many things unnecessarily marked optional in Cocoa all made optionals quite maddening in Swift 1.0. When I first tried the language, I thought the whole approach to optionals might be a mistake.
Yet with improvements on all those fronts, I find working with optionals in Swift 2 quite pleasant. In 1.0, when optionals forced me to stop and think, it was usually about the language and how to work around it; in 2.x, when optionals force me to stop and think, it’s usually about my code, what I’m modeling with it, and where there are gaps in my reasoning. Turns out the basic optionals approach was solid all along, but needed the right surrounding details to make it play out well. Fiddly details had a big impact on the language experience.
Right, but what I’m getting at is that there is more work to be done in Swift 3 (once Swift 2.2 is out of the way). I find it deeply unfortunate that stuff like this still haunts us:
let x = foo() // foo returns a T!
let y = [x, x] // without looking, does this produce "[T!]" or "[T]"?
There are other similar problems where the implicit promotion from T to T? interacts with same-type constraints in unexpected ways, for example around the ?? operator. There are also the insane type checker complexity and performance issues that arise from these implicit conversions. These need to be fixed, as they underlie many of the symptoms that people observe.
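To make that interaction concrete, here’s a small sketch (foo is a stand-in for an API returning T!; the comments pose questions rather than assert answers, since the promotion rules are exactly what’s in flux):

```swift
func foo() -> Int! { return 1 }   // stand-in for an imported API returning T!

let x = foo()       // is x inferred as Int!, or demoted to Int??
let y = [x, x]      // ...and is this [Int!], [Int?], or [Int]?
_ = y

// The ?? operator hits the same interaction: T implicitly promotes to T?,
// so both the (T?, T) and (T?, T?) overloads are candidates.
let a: Int? = nil
let b: Int? = 5
let c = a ?? b      // resolves via the (T?, T?) overload, so c is Int?, not Int
```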
Still, it seems like a lot of people fall back on forced unwrapping rather than trying to fully engage with the type system and think through their unwrappings. Is this a legacy of 1.x? Or does the language still nudge that way? I see a lot of instances of “foo!” in the wild, especially from relative beginners, that seem to be a reflexive reaction to a compiler error and not a carefully considered assertion about invariants guaranteeing safe unwrapping.
Unclear. I’m aware of many unfortunate uses of IUOs that are the result of language limitations that I’m optimistic about fixing in Swift 3 (e.g. two-phase initialization situations like awakeFromNib that force a property to be an IUO or optional unnecessarily), but I’m not aware of pervasive use of force unwraps. Maybe we’re talking about the same thing where the developer decided to use T? instead of T!.
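For anyone following along, the awakeFromNib-style bind looks roughly like this (a framework-free sketch; ProfileView, Label, and frameworkDidFinishLoading are hypothetical stand-ins for the UIKit machinery):

```swift
// Two-phase initialization: a framework creates the object first and injects
// its collaborators afterwards, so the property can't be a plain `let`, and a
// non-optional `var` would demand a meaningless dummy initial value.
final class Label {
    var text = ""
}

final class ProfileView {
    var titleLabel: Label!   // IUO: "guaranteed to be set before first use"

    func frameworkDidFinishLoading() {   // plays the role of awakeFromNib
        titleLabel.text = "Ready"        // reads like a non-optional here
    }
}
```

The cost is that the “set before first use” guarantee lives in the developer’s head rather than in the type system.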
This discussion makes me wonder: conversely to the decision of making “let” as short as “var,” perhaps “foo!” is too easy to type. Should the compiler remove fixits that suggest forced / implicit unwraps? Should it even be something ugly like “forceUnwrap!(foo)”? (OK, probably not. But there may be more gentle ways to tweak the incentives.) So there’s the notion of the “programmer model” playing out in practice.
It depends on “how evil” you consider force unwrap to be. If you draw an analogy to C, the C type system has a notion of const pointers. It is a deeply flawed design for a number of reasons :-), but it does allow modeling some useful things. However, if you took away the ability to "cast away" const (const_cast in C++ nomenclature), then the model wouldn’t work (too many cases would be impossible to express). I put force unwrap in the same sort of bucket: without it, optionals would force really unnatural code in corner cases. It is “bad” in some sense, but its presence is visible and greppable enough to make it carry weight. The fact that ! is a unifying scary thing with predictable semantics in Swift is a good thing IMO. From my perspective, I think the Swift community has absorbed this well enough :-)
Here is another (different but supportive) way to look at why we treat unsafety in Swift the way we do:
With force unwrap as an example, consider an API like UIImage(named: "foo"). It obviously can fail if "foo.png" is missing, but when used in an app context, an overwhelming use-case is loading an image out of your app bundle. In that case, the only way it can fail is if your app is somehow mangled. Should we require developers to write recovery code to handle that situation?
To feature-creep the discussion even more, let’s talk about object allocation in general. In principle, malloc(16 bytes) can fail and return nil, which means that allocation of any class type can fail. Should we model this by saying that all classes have a failable initializer, and expect callers to write recovery code to handle this situation? If you’re coming from an ObjC perspective, should a class be expected to handle the situation when NSObject’s -init method returns nil?
You can predict my opinion based on the current Swift design: the answer to both questions is no. In the first case, we want the API to let the developer write failure-handling code in the situations they care about, and in the situations they don’t, they can use !. In the latter case, we don’t think that primitive object allocation should ever fail (and if it does, it should be handled by the runtime or some OS service like purgeable memory), and thus the app developer should never have to think about it.
This isn’t out of laziness: “error handling” and “recovery” code not only needs to be written, but it needs to be *correct*. Unless there is a good way to test the code that is written, it is better to not write it in the first place. Foisting complexity onto a caller (which is what UIImage is doing) is something that should only be done when the caller may actually be able to write useful recovery code, and this only works (from a global system design perspective) if the developer has an efficient way to say “no really, I know what I’m doing in this case, leave me alone”. This is where ! comes in. Similarly, IUOs are a way to balance an equation involving the reality that we’ll need to continue importing unaudited APIs for a long time, as well as a solution for situations where direct initialization of a value is impractical.
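In stdlib-only terms (with Swift’s failable Int.init?(_:) standing in for UIImage(named:)), the balance plays out something like:

```swift
// A constant we control: if it's malformed, the binary itself is broken,
// so ! is a deliberate "no really, leave me alone" assertion that fails fast.
let defaultPort = Int("8080")!

// Genuinely untrusted input: here recovery code is meaningful and testable,
// so the caller engages with the optional instead of force-unwrapping.
func port(from userInput: String) -> Int {
    guard let p = Int(userInput), (1...65535).contains(p) else {
        return defaultPort   // useful recovery the caller can actually test
    }
    return p
}
```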
This sort of thought process and design is what got us to the current Swift approach. It balances many conflicting goals, aiming to produce a programming model that leads to reliable code being written the first time. In the cases where it isn’t reliable, it is hopefully testable, e.g. by “failing fast” (https://en.wikipedia.org/wiki/Fail-fast).
Adding a feature can produce surprising outcomes. A classic historical example is when C++ added templates to the language without realizing they were a Turing-complete meta-language. Some time later this was discovered, and a new field of template metaprogramming came into being.
I remember my mixture of delight & horror when I first learned that! (I was an intern for HP’s dev tools group back in the mid-90s, and spent a summer trying to find breaking test cases for their C++ compiler. Templates made it like shooting fish in a barrel — which is nothing against the compiler devs, who were awesome, but just a comment on the deep darkness of the corners of C++.)
Sadly, templates aren’t the only area of modern C++ that has that characteristic… :-) :-)
That experience makes me wonder whether in some cases the Swift proposal process might put the cart before the horse by having a feature written up before it’s implemented. With some of these proposals, at least the more novel ones where the history of other languages isn’t as strong a guide, it could be valuable to have a phase where it’s prototyped on a branch and we all spend a little time playing with a feature before it’s officially accepted.
I’m of two minds about this. On the one hand, it can be challenging that people are proposing lots of changes that are more “personal wishlist” items than things they plan to implement and contribute themselves. On the other hand, we *want* the best ideas from the community, and don’t want to stymie or overly “control” the direction of Swift if it means that we don’t listen to everyone. It’s a hard problem, one that we’ll have to figure out as a community.
Another way of looking at it: Just because you’re a hard core compiler engineer, it doesn’t mean your ideas are great. Just because you’re not a hard core compiler engineer, it doesn’t mean your ideas are bad.
One of my favorite features of Swift so far has been its willingness to make breaking changes for the health of the language. But it would be nice to have those breaking changes happen _before_ a release when possible!
+1. I think that this is the essential thing that enables Swift to be successful over the long term. Swift releases are time bound (to a generally yearly cadence), Swift is still young, and we are all learning along the way. Locking it down too early would be bad for its long term health -- but it also clearly needs to settle over time (and sooner is better than later).
Overall, we knew that it would be a really bad idea to lock down Swift before it was open source. There are a lot of smart people at Apple of course, but there are also a lot of smart people outside, and we want to draw on the best ideas from wherever we can get them.
I forgot the most important part. The most important aspect of evaluating something new is to expose it to ridiculously smart people, to see what they think.
Well, I don’t have the impression that the Swift core team is exactly hurting on _that_ front. But…
Frankly, one of my biggest surprises since we open-sourced Swift is how “shy” some of the smartest engineers are. Just to pick on one person, did you notice that Slava covertly fixed 91% of the outstanding practicalswift compiler crashers today? Sheesh, he makes it look easy! Fortunately for all of us, Slava isn’t the only shy one…
This is one of the biggest benefits of all of Swift being open source: public design and open debate directly leads to a better programming language.
…yes, hopefully many eyes bring value that’s complementary to the intelligence & expertise of the core team. There’s also a lot to be said for the sense of ownership and investment that comes from involving people in the decision making. That certainly pays dividends over time, in so many different community endeavors.
Yes it does. The thing about design in general and language design in particular is that the obviously good ideas and obviously bad ideas are both “obvious”. The ones that need the most debate are the ones that fall in between. I’ll observe that most ideas fall in the middle :-)
I’m grateful and excited to be involved in thinking about the language, as I’m sure are many others on this list. When it comes right down to it, I trust the core team to do good work because you always have — but it’s fun to be involved, and I do hope that involvement indeed proves valuable to the language.
I’m glad you’re here!
On Dec 14, 2015, at 1:46 PM, Paul Cantrell <email@example.com> wrote: