I've been fretting over this proposal for a while. The tl;dr is that I support it, but I want to talk about why I support it.
We need to smooth out the progressive disclosure curve for Swift 6, and concurrency is definitely a topic that should come "later" in that curve. Prior to Swift 6, we got away with avoiding talking about concurrency early in the developer's journey with Swift by basically ignoring it. Newer Swift programmers are writing mostly single-threaded things, so ignoring concurrency works... until it doesn't. It was always a false sense of security, because from your first `await` you've now introduced the potential for data races. It might not even be your code that does so, if you're using a library that introduces concurrency. Data races are worse than the other ways in which you can get pushed further along the progressive disclosure curve too early: unlike other advanced concepts, there's no scary-looking syntax like `withUnsafeBufferPointer` or `repeat each` or `~Copyable` to hint that you're hitting complexity: a simple `await` can do it. And unlike other complex features that can be part of a library's implementation without bleeding into the interface, data races can bleed through anywhere.
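To make that concrete, here's a rough sketch (the `ImageCache` type is invented for illustration; only the `URLSession` call is real API). Nothing about it looks advanced, and yet the single `await` is the moment it steps into concurrency:

```swift
import Foundation

// An ordinary-looking cache: plain mutable state, no scary syntax anywhere.
final class ImageCache {
    var images: [URL: Data] = [:]

    func load(_ url: URL) async throws -> Data {
        if let cached = images[url] { return cached }
        // The one `await` below is the entire hint that we've entered the
        // concurrent world: if two tasks share this cache and call load(_:),
        // they can interleave around this suspension point and race on
        // `images`. That sharing is exactly what Swift 6 checking flags when
        // the non-Sendable cache crosses an isolation boundary.
        let (data, _) = try await URLSession.shared.data(from: url)
        images[url] = data
        return data
    }
}
```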
The balance we need to strike here is to not expose data-race-safety issues to Swift developers until they are ready to tackle concurrency, but without sweeping the problem under the rug as we did before Swift 6. Part of that is not introducing concurrency unless it's been asked for, and I think the under-review SE-0461 handles the only place in the language where we were doing that (but shouldn't have).
Now, much of my struggle with this proposal is that two things are simultaneously true:
- Most code doesn't care what isolation domain it's running in
- Having even a small amount of code that does care where it's running (say, the main actor) can cause ripples through a huge amount of the code base
The struggle we're seeing with progressive disclosure for Swift 6 is that there are some things---the UI stack, global variables, etc.---that put the seeds of main-actor-ness into a code base. These are things you want to do very early on in your programming journey, and since you are starting single-threaded, it's totally fine to make these assumptions.
The problem is that they ripple, so you end up having to sprinkle in `@MainActor` or `await` or `Sendable` everywhere to try to stop the spread of main-actor-ness through your code that (we assumed) is non-isolated. There's no way to stop this spread: whether a function needs to run on a specific actor has to be part of its interface.
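Here's an invented sketch of how that ripple plays out (`AppState` stands in for any main-actor thing early code touches, like the UI stack or a global):

```swift
// `AppState` stands in for the UI stack, a global variable, etc.
@MainActor
final class AppState {
    static let shared = AppState()
    var username = ""
}

// This helper doesn't think of itself as UI code, but it reads main-actor
// state, so under strict checking it has to become @MainActor (or await)...
@MainActor
func greeting() -> String {
    "Hello, \(AppState.shared.username)"
}

// ...which pushes the same requirement onto its callers, and so on up the
// call chain: each hop is another place to add @MainActor, await, or
// Sendable just to say "this runs where everything else already runs."
@MainActor
func renderHeader() -> String {
    greeting()
}
```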
This proposal is saying that, because we cannot determine what code needs to be on the main actor (i.e., how far the ripples will go) without forcing the user to write annotations, we should assume everything is on the main actor. That grates against my sensibilities, because I still suspect that most Swift code actually doesn't care where it runs. However, it's okay to assume that it needs to run on the main actor when all of the code around it makes that assumption, because it's a restriction that's safe to lift later.
When the user does dip their toes into concurrency, they'll have to unwind some of these assumptions. However, they'll be able to do so on their terms, at those places where they have introduced concurrency. I remain concerned about how we get out of this mode when it's no longer necessary: can we provide a migration to turn this feature off, if the user decides they want to go multi-threaded in general? It's probably going to require whole-module analysis, but it seems plausible.
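As a sketch of what unwinding "on their terms" might look like under a main-actor-by-default mode (the types here are invented for illustration): everything keeps the single-threaded assumption, and concurrency shows up only at the one boundary the developer chooses to introduce.

```swift
import Foundation

// The one piece of the program we deliberately move off the main actor.
actor ThumbnailGenerator {
    func thumbnail(for data: Data) -> Data {
        // ... expensive image work happens off the main actor ...
        return data
    }
}

@MainActor  // under the proposed default, this would be the implied assumption
final class GalleryModel {
    let generator = ThumbnailGenerator()
    var thumbnails: [Data] = []

    func addImage(_ data: Data) async {
        // This await is the single boundary we chose to introduce; the rest
        // of the type keeps the comfortable main-actor assumption.
        let thumb = await generator.thumbnail(for: data)
        thumbnails.append(thumb)
    }
}
```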
I remain concerned about the fact that we'll have two modes forever, but I don't see a way around it. And as long as we provide a smooth, well-lit pathway from the single-threaded assumption to data-race-free use of concurrency, I think we'll be able to meet our goals for progressive disclosure.
Doug