On the language itself I don't have much to ask; all its features make it great for expressing solutions to problems. But there are two things I would love to get:
Parameter labels in closures. I think removing them was the worst decision; it makes sense for the language, but the ergonomics are so bad, and it goes against one of Swift's big selling points.
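To illustrate the ergonomic gap: function types in Swift cannot carry argument labels, so any closure-typed value loses the labels the original function had (a minimal sketch; `makeRange` is a hypothetical name):

```swift
// A normal function keeps its argument labels at every call site.
func makeRange(from start: Int, to end: Int) -> ClosedRange<Int> {
    start...end
}

// `(from: Int, to: Int) -> ClosedRange<Int>` would be a compile error;
// only the unlabeled function type is allowed.
let make: (Int, Int) -> ClosedRange<Int> = makeRange(from:to:)

// Once stored in a closure, the labels are gone, so argument order
// is easy to mix up without the compiler's help.
print(make(1, 5))  // 1...5
```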
"Mixins", i.e. the ability to add stored properties from extensions/protocols. I know why it's not there, but I still wish there were a way to make it work; it would open up a world of possibilities.
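For context, a small sketch of the current restriction: extensions may only add computed members, and protocols can require storage but never provide it, which is exactly the gap "mixins" would fill.

```swift
struct Point {
    var x = 0.0
    var y = 0.0
}

extension Point {
    // Allowed in an extension: computed properties, methods, initializers.
    var magnitude: Double { (x * x + y * y).squareRoot() }

    // Not allowed: stored properties.
    // var label = ""   // error: extensions must not contain stored properties
}

// A protocol can *require* storage, but every conformer has to declare
// the property itself; the protocol cannot supply the storage.
protocol Labeled {
    var label: String { get set }
}

print(Point(x: 3, y: 4).magnitude)  // 5.0
```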
The rest is less about the language and more about community and investment. It still feels like Swift is behind in a lot of ways outside Apple platforms, and that's sad, because it's the nicest language out there.
My points are less about the language itself and more about the speed and reliability of the tooling, especially for larger projects. I think highly of Swift as a language; it has made huge strides over the years, and a lot of other languages could learn from it. But the tooling continues to let it down.
Inspecting properties after hitting a breakpoint is slow and unreliable. It's much better than it was, but I still have to wait 10+ seconds to get anything meaningful out of the debugger... if it works at all.
SPM felt like it had a lot of promise but has just fallen short. Tooling like Tuist has had to come along and pick up the pieces for things like module caching.
Errors are much more reliable these days but can still be cryptic at times. Rust is a great example of where improvements can be made.
I do somewhat agree with the points above about parameter labels in closures, and for me multiple trailing closures are a bit of a mess that I tend to avoid using.
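For readers unfamiliar with the syntax being criticised, here is a minimal sketch of multiple trailing closures (Swift 5.3); `animate` is a hypothetical stand-in for APIs like `UIView.animate`:

```swift
// With multiple trailing closures, the first closure's label is dropped
// while the later ones keep theirs, which can read inconsistently.
func animate(changes: () -> Void, completion: (Bool) -> Void) {
    changes()
    completion(true)
}

animate {
    print("changing")            // the `changes:` label is invisible here
} completion: { finished in
    print("finished: \(finished)")
}
```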
Whilst these are points where I think improvements could be made, I do want to balance that out by saying a huge thank you to everyone who has contributed and continues to contribute to the language. Through the open nature of the evolution process I've learnt a great deal over the years about why certain decisions were made, and it has sent me off on research missions into a particular subject more than once.
When I was learning to program, C++ was just "C with classes". Any language that has been in widespread use for decades will accumulate some complexity and cruft. New languages periodically come along, attempting to combine the best features of their predecessors while also repairing mistakes and modernizing their features for the popular paradigms of the day. Some of these new languages become successful, and go on to accumulate legacy baggage of their own. Then even newer languages appear, and the hipsters scoff at what came before, at least while the newer language is relatively small and elegant. This cycle will repeat forever; it's how our craft gradually improves.
The main problem with C++, in my opinion, is not the complexity per se, but the fundamentally unsafe nature of the language.
Let me run this idea by you for handling the ABI compatibility story differently:
The several (say, 10) latest versions of the Swift runtime come preinstalled.
Each app specifies which Swift version it uses, and the appropriate runtime is selected for it.
On an attempt to run a very old app that predates the latest 10 available runtimes, the OS shows a prompt: "Do you want to download an older runtime to run this app?"
It would be significantly different from what we have now. Behaviorally it would work as if the app binary included the required Swift runtime, the only difference being that the runtime lives in the OS rather than in the app, so it can be shared across multiple apps using the same Swift version. In this model, removing a field from a struct, class, or enum is not ABI-breaking, and adding a new field is neither ABI- nor API-breaking; the notions of "frozen" and "@unknown default" are not needed. It would give us flexibility to make changes we currently can't even dream of, e.g. making Array 16 bytes to store short arrays inline, changing String's size to 32 bytes, or reimplementing closures to make them pointer-sized.
Suppose you have 10 Swift runtimes, and 5 shared libraries that each include at least one line of Swift code. You'd need to cover all 10 × 5 combinations, because two shared libraries linked against different versions of the Swift runtime would not work together, at least not with the kinds of ABI changes you're describing, where the fundamental layout of something like Array or String changes.
I would make Swift fast. It's been architected to be slow, with no real way to make anything complicated performant. I do like the language, and use it a lot! But seriously, after 10 years it's still slower than Java, C++, etc. for most use cases, and when the team "engages" on a performance topic, they usually engage in excuse-making instead of actually making the language's performance better.
Hint: make it common and easy to generate local temp objects that don't touch thread-safe ARC. Make the collection classes use this. Make the local temp objects use an alloc pool instead of malloc. This is what all performant systems start with.
Once that's cleared up, programs' actual architectural performance issues will show up. Right now they are completely obscured by ARC.
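Swift does have partial tools in this direction today; a hedged sketch of the closest approximations (value types and scoped stack allocation), not the pooled allocator the post asks for:

```swift
// Value types avoid ARC entirely: `Vec` involves no retain/release traffic.
struct Vec {
    var x, y, z: Double
}

// Scoped stack allocation for short-lived scratch space: no malloc, no ARC.
// The buffer is valid only inside the closure and is reclaimed on return.
let total = withUnsafeTemporaryAllocation(of: Int.self, capacity: 4) {
    (scratch: UnsafeMutableBufferPointer<Int>) -> Int in
    for i in scratch.indices { scratch[i] = i * i }
    return scratch.reduce(0, +)
}
print(total)  // 0 + 1 + 4 + 9 = 14
```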
No, ARC is not better than GC. GC can be made to have zero cost by carefully managing Java objects. When we built Java for macOS back in the late '90s, we had zero allocations per render in AWT & Swing rendering. That's why things sang on the platform. You can't do that with ARC, because there is no malloc-free Eden heap and no way to engage with objects without touching thread-safe atomic ops.
I'd wanted this to be possible, but once Apple starts using Swift in their frameworks it breaks down:
The best summary was @jrose's "Swift was always going to be part of the OS". I always wondered if there could be a middle road with layers of ABI between these absolutes (where multiple versions of SwiftUI could be supported, for example), but it would need to be pretty well thought out. The advantages are twofold: apps don't break or change their behaviour when users upgrade the OS, and library designers get to evolve their APIs more freely (ABI is something lower level).
Most modern languages (at least that compile to machine code; higher level stuff like C# does seem to care) don't really care about ABI compatibility. Swift is one of the few exceptions. Obligatory link to this article.
Yet, C++, which you both admit is much more complex, is also much more successful than Swift. Clearly, complexity has not killed C++.
C++ is used for two simple reasons:
Performance
Portability
Adoption doesn't make it a good language intrinsically. Otherwise we would all use JavaScript and Java.
A lot of people dislike C++ with a passion, and Swift is an example of trying to make a better programming language than C++. You could add Rust, Jai, and Zig to that list.
So yes, I can see why one would worry that adding complexity to the language over time could lead down the same path as C++.
And again, not being able to debug Swift reliably nor even have correct diagnostics for so long is worrisome.
Yeah, this is a classic case of unintuitive at the time but clear in retrospect. I don't judge the folks who made this decision poorly; at the time we all thought it was the right one.
Not even knowing you're using Swift or Objective-C implementations underneath these APIs, and having no guarantee that'll be stable over time, is another frustrating aspect of this.
The difference matters because the performance can be very different, and sometimes the Swift replacements have bugs, or at least behavioural changes that they really shouldn't have. But because everything is named exactly the same, it's far more difficult than it should be to even detect that this is possible, let alone happening.
This is part of the much bigger topic of Swift performance being very difficult to predict.
If doing it again, I'd question at least the syntactic mechanisms of "non-inlined" Generics, if not their existence at all.
Swift tries to have its cake and eat it too: high-level code full of abstraction (primarily generics and their kin, like protocols), ostensibly optimised away to yield C-like programs without the undefined behaviour, safety bugs, etc.
The problem arises in two ways, the first of which is theoretically fixable, the other perhaps not:
The optimiser often fails. I don't begrudge it its failures, per se, because it's doing a very hard task. But like it or not, Swift (the language) has hoisted a lot of responsibility onto the optimiser, and the optimiser is not yet up to the task.
"Dynamic" rather than "static" linking of generics means generics are very expensive by default across module boundaries; sometimes more expensive than even the "dreaded" Objective-C message sends that so many people are unreasonably afraid of.
It is theoretically possible to work around this by applying @inlinable / @inline(__always) to everything, but I think that's the wrong default. You should have to opt into non-inlined generics (ideally via a better mechanism than a decorator on every single method & property), making the explicit choice to sacrifice performance for flexibility, and thus making you and your readers aware of that sacrifice.
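A hedged sketch of the opt-in escape hatch being discussed: in a library module, `@inlinable` exposes a generic function's body to clients so the optimizer can specialize it across the module boundary, while the plain version goes through the unspecialized, witness-table-based entry point. (Both function names here are hypothetical.)

```swift
// Without @inlinable, other modules call this through its generic,
// unspecialized entry point: protocol requirements go via witness tables.
public func genericSum<T: AdditiveArithmetic>(_ values: [T]) -> T {
    var total = T.zero
    for v in values { total += v }
    return total
}

// @inlinable ships the body with the module interface, letting a client's
// optimizer specialize it (e.g. for Int) and inline it at the call site.
@inlinable
public func fastSum<T: AdditiveArithmetic>(_ values: [T]) -> T {
    var total = T.zero
    for v in values { total += v }
    return total
}
```

Within a single module the two behave identically; the difference only shows up across a resilient module boundary, which is exactly the cross-module cost the post describes.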
I don't think it's relevant to judge Swift by its performance, as such considerations seldom matter now.
I feel that for all its cruft, Swift at its core is a remarkably clean language (like C++, memory safety aside), though perhaps not as clean as Java, which is positively spartan (memory consumption aside).
One of the headline features of Swift vs. Objective-C at the time was namespaces. “You no longer need to prefix your types with XYZ and hope no one else picks the same 2–4 letters to avoid name collisions!”
And while that is true from a technical viewpoint, in practice names still collide. Frameworks like SwiftUI, RegexBuilder, or ArgumentParser use very generic type names. I could name my own type Font or Group, but then I would have to write SwiftUI.Font and MyAppName.Font, which is worse than NSFont and MANFont would be. Type aliases could help, but at that point I might as well just name the type with a consistent prefix. In my own packages, I now sometimes put everything in an enum "namespace" with a short name, which is a hack but works well enough.
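The enum "namespace" hack looks roughly like this (a sketch with hypothetical names; a caseless enum can't be instantiated, so it serves purely as a scope):

```swift
// `MAN` is a caseless enum used only as a namespace; it has no cases,
// so no value of it can ever be constructed.
enum MAN {
    struct Font {
        var name: String
        var size: Double
    }
}

// Usage reads like a short prefix, and `MAN.Font` cannot collide with
// a framework type such as SwiftUI's Font.
let title = MAN.Font(name: "Helvetica", size: 24)
print(title.name)  // Helvetica
```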
Also, module names can conflict with type names.
I am not quite sure how best to address this, but it has always stuck out as an early promise that never panned out the way I expected.
Otherwise, Swift is my favorite language to write code in, and I am looking forward to what’s to come. For my job, I still write a lot of Objective-C, and I can see clearly how a lot of the language complexity in Swift also exists in Objective-C code bases, but as conventions, patterns, and ad-hoc solutions for which there is less consensus, documentation, or compiler help.