You forgot to put “imho”, “for me”, etc. in your post.
I'm not saying this based on my experience, but rather on Swift's direction, the state of the concurrency field, and the effort put into solving problems there; that's why there is a "Concurrency is well known to be hard" part.
Do I have problems with the way it has been solved? Yes I do. Swift can do it greatly, but there are lots of problems for end users.
I also don't like the single-thread-per-module solution, but I need to read the proposal once more to form a better opinion.
I can still write "imho Swift is doing great solving lots of pain issues" if you want.
I would be very interested in a "Swift concurrency, without actors" document.
I tried using actors (by basically replacing classes with actors in a tiny framework, to offload work from the main thread), but
- I didn't like the way the introduction of actors forced me to change my design,
- it made a simple problem much more complicated,
- and I even managed to introduce a (reentrancy) bug into my tiny example (see the sketch below for the general shape).
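For illustration, a minimal sketch of that kind of reentrancy bug. This is not my actual code, and the names are made up; it just shows the well-known shape of the trap:

```swift
import Foundation

// The cache check and the cache write are separated by an await, so two
// calls can interleave, both miss the cache, and duplicate (or clobber)
// each other's work.
actor ImageLoader {
    private var cache: [URL: Data] = [:]

    func load(_ url: URL) async throws -> Data {
        if let data = cache[url] { return data }
        // Suspension point: other calls into this actor can run here.
        let (data, _) = try await URLSession.shared.data(from: url)
        cache[url] = data // may overwrite what another call already stored
        return data
    }
}
```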
So my personal conclusion is to stay away from actors, because
- I no longer understand what's happening, and I wouldn't consider myself capable of confidently code-reviewing a merge request that contained actor code, to be honest.
- I'm not sure what we get for all the added complexity and technicality. Low-level data-race safety, OK, but not data-race safety in general, if I understand correctly, so it feels like a lot of pain for little gain.
- When some people say they had to annotate hundreds of types to get rid of the errors in code that was fine before the Swift 6 adoption, that feels like the wrong thing to force onto users of the language.
- Some people have pointed out that actors encourage shared-mutable-state designs, and I think they have a point. It almost feels like an encouragement to dig yourself into a hole. If everything had value semantics, in contrast, understanding code would be a lot easier (e.g., local reasoning).
So if users of the language like me, who chose to stay away from actors, can get guidance on how to make good use of Swift's other concurrency features in an easy-to-use and safe way, that would be very helpful.
Your vision document identifies a bunch of pain points in Swift 6.
I think you're absolutely right to focus on these - but I would argue for a radically different approach.
Swift 6 is currently trying to do something technically very hard: analysing code and guaranteeing data-race safety.
Unfortunately, this has resulted in side effects:
- Significant language complexity
- There are a large number of annotations
Almost everyone who actually uses Swift doesn't understand the subtleties of how the compiler's reasoning and annotations actually work
- Practical pain interfacing with existing/framework code
- protocol conformance issues
incompatibility with basic language features like Codable
- etc
- 'Actor infection'
- once you mark something as @MainActor, it's hard to stop that proliferating around code that interacts with it
- Complexity
The solutions to 'make the compiler happy' often add ugly complexity (e.g. creating a Sendable wrapper to pass an NSImage; see the sketch below)
- Frustration
- sometimes from not understanding the language features, sometimes from not wanting to jump through the hoops that Swift 6 requires
I'm sure there are more - @hborla does a good job of expanding on many of the details in the pitch.
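To make the 'Sendable wrapper' point concrete, here's the kind of thing I mean (a sketch with made-up names, not from a real codebase):

```swift
import AppKit

// A wrapper whose only job is to carry a non-Sendable NSImage across an
// isolation boundary. @unchecked Sendable silences the compiler and puts
// the burden of correctness back on the programmer.
struct SendableImage: @unchecked Sendable {
    let image: NSImage
}

func makeThumbnail() async -> SendableImage {
    let image = NSImage(size: NSSize(width: 64, height: 64))
    // ... draw into the image ...
    return SendableImage(image: image)
}
```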
My reaction to these has been simple: I'm just not using Swift 6 mode. And I'm still plagued by annoying warnings in my Swift 5 code.
Having identified the problems, the vision proposes making it easier to move to single-threaded programming in large areas (potentially even as a default for many programmers).
I think this is entirely the wrong approach. It is led by what the compiler needs rather than what developers need.
I would propose a very different approach:
1) Proper documentation
The first step is to actually document what Swift 6 does.
This should be in the swift language guide.
For example - 'sending' solved a problem for me yesterday in Swift 6 mode, but searching the Swift Language Guide finds no references to the term. *
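For the record, my (possibly imperfect) understanding of what 'sending' does, as a sketch with made-up names:

```swift
// SE-0430: `sending` lets a non-Sendable value cross an isolation
// boundary when the compiler can prove the caller gives up access to it.
final class Canvas { // deliberately not Sendable
    var strokes: [String] = []
}

// The Canvas is "sent" into the function; the caller must not touch it
// afterwards, so no data race is possible despite the lack of Sendable.
func process(_ canvas: sending Canvas) async {
    canvas.strokes.append("processed away from the caller's isolation")
}
```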
I suspect that simply the process of documenting it will result in significant insights into areas that can be improved.
The current state of play is that the only way to understand the language is to keep up with a whole series of proposals, and how they build on each other. The folks writing the language naturally do this, so perhaps they assume that regular Swift 6 users do too.
This is wrong - I have been involved in several threads on this forum where even the most engaged Swift 6 users still have to be corrected about how the language works ('doesn't SE-xxx do yyy...').
Once complete documentation exists, future changes should not be accepted unless they explain themselves fully in the documentation.
The difficulty of writing user documentation will highlight a lot of the practical complexity.
2) Opt in to concurrency safety
Developers should be given sharp knives
Flip the default. The default should be that guaranteed data race safety is turned off.
If I want to return an NSImage from a method, and consume it in another thread - that's my business.
If I want to opt in to guaranteed thread safety for my module/class/app, then you can give me a compiler error.
The reality of my coding is that data race errors are a vanishingly small part of the bugs I have had to fix over the years. It would be nice for the compiler to help me avoid some of those bugs - but only if the pain is proportional to the benefit. Right now, it is not.
I have a handful of production apps that do significant processing of images. I have never had a data race issue where one thread modifies an NSImage while another thread is trying to use it.
But I have still wasted a bunch of time trying to get rid of warnings about returning an NSImage from a method. That added pain for zero gain.
Moving to an opt-in model will change the dynamic. At the moment, it feels like we're on a forced march to the promised land of Swift 6 safety. If safety is opt-in, then developers will choose to use it as it becomes more ergonomic. If the feature has to be worth the pain to convince people to opt in, the dynamic around design will focus more on real usage.
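For what it's worth, SwiftPM already has a per-target knob of roughly this shape (sketch; target names made up):

```swift
// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "MyApp",
    targets: [
        // Per-target language mode (SwiftPM 6.0). Under a 6.0 tools
        // version the default is Swift 6 checking, so .v5 is the opt-out;
        // flipping the default, as argued above, would invert this.
        .target(
            name: "ImageProcessing",
            swiftSettings: [
                .swiftLanguageMode(.v5)
            ]
        )
    ]
)
```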
3) Concurrency tooling
'Analyse code' was a great tool as we moved towards ARC (and even later): run the tool, examine warnings about memory safety, fix if needed.
It took a very different approach though - the compiler didn't start refusing to compile my old Obj-C code. It gave me a way to gain insights to improve my code.
Concurrency could do the same thing. Analyse could warn me that returning an NSImage is potentially unsafe if the sender keeps and mutates the original - but I can choose to ignore that because I know I'm not doing so.
4) Update the framework/language to better 'play nicely' with concurrency/isolation
e.g. extend Codable so that it can work on a @MainActor struct/class.
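My understanding of the current friction, sketched (exact diagnostics vary by compiler version):

```swift
import Foundation

// Synthesized Codable on a main-actor type. The synthesized init(from:)
// and encode(to:) are isolated to the main actor, so they can't witness
// Codable's nonisolated requirements, and (as of Swift 6.0) the compiler
// rejects the conformance.
@MainActor
struct Settings: Codable {
    var volume: Double
    var username: String
}
```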
I know this is going radically against the grain of the current vision. The new vision document recognises much of the pain inherent in the current approach. Apple probably has (or could have) data on how much code is opting in to Swift 6. My guess is that the number will show the observable choice that developers are making.
The goal is to "make basic use of concurrency simple and easy."
When your solution to a problem is to throw out concurrency entirely for large sections of language use, then you're missing a bigger problem.
- A similar documentation problem exists for @MainActor annotations. The term is critical to current Swift code, but barely referenced in the Language Guide. As a result, major misunderstandings about what it does, and doesn't, guarantee have been common - even in Apple WWDC presentations.
I struggled to parse this paragraph:
- A function value can have a type like @MainActor () -> Bool that says it's isolated to a specific global actor. These functions can either be non-isolated or isolated to that actor, and Swift just treats them as the latter unconditionally when calling them.
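My best reading of it, as a sketch (the names are mine):

```swift
import Foundation

// A function value whose *type* carries main-actor isolation, and a call
// site in a nonisolated async context that must treat it as isolated.
func probe(_ check: @escaping @MainActor () -> Bool) async {
    // Even though `check` might not touch main-actor state, Swift treats
    // the call as main-actor-isolated, so it has to be awaited here:
    let value = await check()
    print(value)
}

@MainActor
func demo() async {
    await probe { Thread.isMainThread } // prints true
}
```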
Totally agree that actor types can be very tough and people reach for them way too much.
I'm very optimistic that the combination of MainActor-by-default and isolation inheritance for async functions is going to get you pretty far here! Unfortunately, reentrancy problems are possible as soon as you have an await in your code. But they are usually a lot easier to deal with when you have one single collection of state, like you do with MainActor.
Ultimately though, I think the intention is that the guidance you need is the diagnostics that are produced by the compiler. Code would be largely single-threaded. And in the cases where data does need to be passed back and forth to background threads, you'll get explicit feedback that it is happening. Should that be challenging to pull off, that's when you'd need to look into more sophisticated tools like "sending", decide to opt out with the existing mechanisms, or redesign.
I feel the very same here. (Funny, someone has already flagged it for being too negative... please grow up and accept opinions sigh.) Let's acknowledge, though, that the core team seems to have recognised that there is a problem and is open to discussing possible fixes. I just hope it does not take until Swift 7 for them to land.
I flagged it. It is possible to express opinions without being insulting or disrespectful and that comment is both.
To provide an alternative point of view on this: I've lost count of the number of issues and data races that have been caught and fixed by adopting Sendable across Vapor's codebases. Lots of GitHub issues that were impossible to reproduce, subtle data race issues, and downright stupid coding from myself. Once we adopted Sendable (and yes, it wasn't an easy process, especially doing it as an early adopter), all of those issues went away. We've not had a single issue reporting a crash as a result of a data race.
So yes, developers cannot be trusted. We're humans, we all make mistakes, no matter how good or experienced we are. Swift's ethos is safety first and this fits well with it.
And for some iOS developers it may be hard to reason about the changes and see the benefits. But Swift is not just an iOS programming language anymore.
Side note about optionals
I remember the exact same arguments about optionals when Swift was first released. "I know what I'm doing". "The compiler is trying to baby me" etc etc. Understanding optionals is definitely far easier than concurrency, but it's the same pattern: a language feature tackling common programming errors, and I think we can all agree that it's been a great feature once understood. Ever tried going back to a language that doesn't have them and having to remember that null pointer exceptions are still a thing outside of Swift?
It is worth extracting a takeaway which, frankly, isn't a focus of this vision document and perhaps ought to be considered:
The document tackles a key approachability issue: How does a beginner use Swift >6 for single-threaded work, then step up to multi-threaded work when ready? It is absolutely essential to make improvements in this area.
However, there is another audience for which this document offers no vision, even as progressive disclosure and approachability are just as essential for them: users who are new to Swift concurrency features, but who have extensive experience with multi-threaded programming and an existing (perhaps very large) multi-threaded codebase. I doubt this is merely a transient cohort of folks, either, as interop is and will remain a tentpole feature of Swift and data-race safety remains a live area of exploration in other language ecosystems.
Of course, Dispatch remains available and we've left escape hatches like @unchecked Sendable, but these users will (more and more) need to at least use APIs that have adopted Swift concurrency. A single-threaded mode clearly does not speak to their needs. We have a (very good, IMO) document about migrating to Swift 6, but it is undoubtedly not easy and, also, not a forward-looking vision about how the language (and its ecosystem) can meet users halfway.
Now, it may well be that it's not possible to address both audiences in the scope of one document, but I doubt that problems with incremental adoption are going to be so disparate that there won't be overlap; it'd be short-sighted not to at least keep all of these use cases in mind when we design solutions that address areas of overlap.
There is a section in the vision document about easing incremental migration to data-race safety with changes in the language. I'm happy to expand on that section if folks think there is not enough emphasis on it.
I agree that it's a very important section of the document.
Given that the first subheading speaks of "bridging between synchronous and asynchronous code," within the framework of a document that explicitly centers a specific progressive disclosure path from single-threaded to multi-threaded, though, the target audience is (unintentionally, perhaps, but explicitly) not those users who are embarking on a multi-threaded-to-multi-threaded progressive disclosure journey.
The vision document says:
As we see it, there should be three phases on the progressive disclosure path for concurrency:
- Write sequential, single-threaded code. [...] By default, programmers writing executable projects will write sequential code [...]
- Write asynchronous code without data-race safety errors. [...] Programmers won’t have to confront data-race safety at this point, because they aren’t yet introducing parallelism into their code. [...]
- Introduce parallelism to improve performance. [...]
[...]
We believe that the right solution to these problems is to allow code to opt in to being “single-threaded” by default, on a module-by-module basis.
I find it hard to follow the premise and the corollary, because I can't see how to program in phases 1 and 2 except in the simplest of all modules.
Consider:
- A GUI program that connects to internet, and wants to decode the downloaded data without blocking the user interface. It looks like this belongs to phase 3.
- Programs that interact with Apple system SDKs such as StoreKit 2, PhotoKit... Many Apple SDKs force developers into parallelism: phase 3.
- Server development: I guess it's phase 3, because one does not handle network requests on a single CPU core.
Our tooling does not make it trivial to modularize a code base. To put it differently: building is not the same task as developing, and it requires specific skills.
Was it considered to enable the “single-threaded” mode on individual files, instead of a whole module? That would help me perceive the vision:
- The GUI program that downloads would only have to provide concurrency annotations in the file that performs the download (and outputs the final result on the main actor)
- Programs that use StoreKit would only have to provide concurrency annotations in the files that interact with the App Store.
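Roughly this, for the download case (a sketch with made-up names, assuming everything else in the module defaults to the main actor):

```swift
import Foundation

// The one file that needs annotations: the download is explicitly
// nonisolated so it runs off the main actor.
nonisolated func download(from url: URL) async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
}

// Everything else stays in the single-threaded dialect: this function is
// main-actor-isolated by the assumed module/file default.
func refresh(from url: URL) async throws {
    let data = try await download(from: url) // hops off the main actor and back
    // decode `data` and update the UI here, back on the main actor
}
```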
Oh! That wasn't my intention. The specific usability issue in that section is that in existing concurrent applications that predate Swift concurrency, there's no async, so adopting async is difficult because the effect is viral. It's also the case that even in concurrent programs, including but not limited to UI apps, sometimes you still really need to do work synchronously. It's also useful for programmers just starting to learn about concurrency in general, but that's definitely not the only use case. I can clarify this in the vision, and include some text in the top-level heading about the overall goals of easing incremental migration to data-race safety.
That's a super helpful clarification!
As it is, writing that "as we see it, there should be three phases on the progressive disclosure path" can be read to imply that this is the sole progressive disclosure path we can see, that we are prescribing a strict progression from (1) to (3), and that "single-threaded" by default, module by module, is the sum total of the solution that we see.
I'd imagine it's fairer to say that we see this as one of the paths (an important, major one, of course) and that the solution is one step.
I'm pretty hesitant to go down the direction of allowing modules to specify an arbitrary global actor as the default, because any global actor other than the main actor does not help with progressive disclosure.
I totally agree that, in the context of a new Swift programmer learning the language, all you need is to isolate stuff to MainActor.
I wanted to clarify though: is the intention that this is basically a "training wheels" mode and that users eventually "graduate" to not needing or wanting this, or that a choice about whether to turn this on is a part of creating any new module or executable target?
Assuming it's the latter: I don't have a concrete argument in favor of this, just a hypothetical. Suppose you're working on an app that handles a huge amount of data, and regularly mutates it: something like Blender, Instruments, Photoshop, or Logic. I assume that the most convenient model to work with the data in Swift Concurrency is to define a "Data" isolation domain with a global actor. That isolation domain might be just as big as your UI, perhaps even bigger. And the engineers working on it might be just as inexperienced at Swift as your beginner app developer; their specialty might be data structures and algorithms, not the Swift language. In that situation, it might make sense to implement the isolation domains at the module level, not in code. (totally open to being told this is a bad concurrency architecture for a "pro" app, I'm quite curious if there are better designs)
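Spelled in code rather than at the module level, I mean something like this (sketch, names made up):

```swift
// A custom global actor forming the "Data" isolation domain.
@globalActor
actor DataActor {
    static let shared = DataActor()
}

// Engine types live in that domain; the compiler enforces that UI code
// reaches them only through await.
@DataActor
final class ProjectDatabase {
    private var layers: [String] = []
    func addLayer(named name: String) { layers.append(name) }
}
```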
Like you said though, y'all can definitely do this after the fact if this turns out to be a real situation that's worth addressing.
One other thing to note is that most iOS apps I'm aware of eventually move all their code out of the executable target and into a module anyway, so they can unit test it without dealing with Xcode starting up their executable. I realize that this isn't the right forum for discussing how Xcode might integrate this mode, but I'm glad to see that the need for IDE authors to carefully consider how to integrate is called out in the proposal.
Yes! One way the per-module isolation inference setting could work is it could implicitly insert a file-level directive in every source file in the module, which you could spell explicitly if you want. I'm personally in favor of this direction, for exactly the reason you describe, and also because a per-file annotation makes it easy to post code snippets that make the isolation inference behavior explicit in source.
Now that I think about it, whenever I had to develop a server app through a server framework (I'm talking HTTP), I have only been executing single-threaded code. My code is called when the framework receives a request, does its job serially, and returns the response to the framework. I've coded in a lot of Ruby on Rails apps.
I'm not sure if this experience matches the way people code with Vapor or Hummingbird, i.e. whether user code is mostly executed in a single thread/isolation domain (phase 2 of the vision document, with a bunch of awaits for database accesses, for example)?
If I'm not wrong, it would be interesting to be able to enable the simplified language dialect in server apps as well (even if the program, as a whole, is intensely parallel).
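Concretely, I imagine handler code like this Vapor-style sketch, where the user code reads sequentially even though the server as a whole is parallel (illustrative; it assumes a Fluent `User` model and I haven't checked it against a specific Vapor version):

```swift
import Vapor
import Fluent

// The handler awaits the database but never manages parallelism itself;
// from the user's point of view this is "phase 2" code.
func routes(_ app: Application) throws {
    app.get("users", ":id") { req async throws -> User in
        let id = try req.parameters.require("id", as: UUID.self)
        guard let user = try await User.find(id, on: req.db) else {
            throw Abort(.notFound)
        }
        return user
    }
}
```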
I'm not so sure! I think you can totally pull this off while remaining in phase 2.
Is it? The code that decodes the raw data is user code that runs in parallel with the rest of the user code.