Oh man, do I agree with the diagnosis in this vision document.
However, I'm not sure I agree with (some of) the prescription here. @MainActor-all-the-things by default doesn't work well for image processing and other CPU-intensive work. I fear that this change would set us up for a kind of brick wall at the point where you do want to do something in the background. If you realize late that doing this kind of work on the main actor is a bad idea, your code will just explode with warnings and errors as soon as you switch the build mode of a module from single-threaded to anything else.
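To make the brick wall concrete, here's a sketch assuming the proposed single-threaded mode behaves roughly like hand-writing @MainActor on a type today (hypothetical names, paraphrased diagnostics):

```swift
@MainActor
final class ImagePipeline {
    var cache: [String: [UInt8]] = [:]

    // To actually get this work off the main thread, it has to become
    // nonisolated...
    nonisolated func blur(_ pixels: [UInt8]) -> [UInt8] {
        pixels.map { $0 / 2 }  // pure CPU work: fine
    }

    nonisolated func blurCached(key: String, pixels: [UInt8]) -> [UInt8] {
        // error: main actor-isolated property 'cache' can not be
        // referenced from a nonisolated context
        if let hit = cache[key] { return hit }
        return blur(pixels)
    }
}
```

And that error repeats for every piece of main-actor state the background code touches.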
Essentially I fear a similar experience to what you find when you first switch a module from Swift 5 mode to Swift 6. It's not pleasant. My horror story is using Apollo iOS in Swift 6 mode and trying to bridge its API into async/await with task cancellation, which ultimately led me to read posts like this one. I have no hope that junior members of my team would have been able to make this migration work.
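For anyone who hasn't hit this: the bridge itself isn't Apollo-specific, so here's a sketch with a stand-in callback API (hypothetical names, not Apollo's actual types). The subtle part is sharing the in-flight handle between the continuation body and the onCancel closure, which is exactly where strict checking starts objecting:

```swift
import Foundation

// A hypothetical callback-based client standing in for what I was bridging.
protocol Cancellable: Sendable { func cancel() }
func fetch(_ query: String,
           completion: @escaping @Sendable (Result<String, Error>) -> Void) -> any Cancellable {
    fatalError("stub")
}

// A lock-protected box for the in-flight handle. Without something like
// this, sharing the handle between the two closures trips Sendable checking.
final class CancelBox: @unchecked Sendable {
    private let lock = NSLock()
    private var handle: (any Cancellable)?
    private var cancelled = false

    func store(_ h: any Cancellable) {
        lock.lock(); defer { lock.unlock() }
        // onCancel may have fired before the request even started.
        if cancelled { h.cancel() } else { handle = h }
    }
    func cancel() {
        lock.lock(); defer { lock.unlock() }
        cancelled = true
        handle?.cancel()
    }
}

// The canonical bridge: a continuation wrapped in a cancellation handler.
func fetchAsync(_ query: String) async throws -> String {
    let box = CancelBox()
    return try await withTaskCancellationHandler {
        try await withCheckedThrowingContinuation { continuation in
            box.store(fetch(query) { continuation.resume(with: $0) })
        }
    } onCancel: {
        box.cancel()
    }
}
```

Note the cancelled-before-started race that the box has to handle; that's the kind of detail I'd never expect a junior developer to discover from compiler diagnostics.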
One specific pain point I had that this proposal addresses (I think) is one where refactoring an actor into multiple classes/structs causes an absolute explosion in complexity, unless you figure out that you need to declare explicit isolation parameters to stay within the actor's context. This is highly non-obvious, and is never, ever suggested by compiler diagnostics.
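For the record, here's the trick with hypothetical names: a helper factored out of an actor stays in the actor's context only if it takes an isolated parameter.

```swift
import Foundation

actor Cache {
    var entries: [String: Data] = [:]

    func store(_ data: Data, for key: String) {
        // The refactored helper still runs synchronously inside the actor,
        // because it takes the actor as an isolated parameter. No await,
        // no hop, no Sendable requirements on the arguments.
        Evictor().makeRoom(in: self)
        entries[key] = data
    }
}

// A plain struct factored out of the actor.
struct Evictor {
    // Without `isolated` here, touching `cache.entries` is an error and
    // every call site needs an await.
    func makeRoom(in cache: isolated Cache) {
        if cache.entries.count > 100 {
            cache.entries.removeAll()
        }
    }
}
```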
In my codebase I solved this issue by forcing my actor to be a global actor and then annotating everything around it that way, but this greatly reduces the reusability of these components--now they're stuck on that specific actor, even if they'd be useful elsewhere. Changing the default of async functions to propagate isolation would make this so much easier.
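Concretely, what I did looked roughly like this (hypothetical names). It compiles, but every type annotated this way is now welded to that one actor:

```swift
// Promote the actor to a global actor...
@globalActor
actor MediaActor {
    static let shared = MediaActor()
}

// ...then annotate everything that needs to share its context.
@MediaActor
final class Decoder {
    func decode(_ bytes: [UInt8]) -> [UInt8] { bytes }
}

@MediaActor
final class Muxer {
    let decoder = Decoder()
    // Synchronous access works again, but Muxer and Decoder can now
    // never be reused on any other actor.
    func mux(_ bytes: [UInt8]) -> [UInt8] { decoder.decode(bytes) }
}
```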
What's particularly attractive about this is that you can make a break at a specific point in your code where you know you need to delegate to some background thing--say by changing a class to an actor--but the implementation details of that class-now-actor are otherwise unaffected. This is crucial to making incremental adoption so much easier than it is today.
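Something like this (hypothetical example): flip one keyword, the body stays identical, and only the callers change:

```swift
// Before: a plain class.
final class ThumbnailStore {
    private var thumbs: [String: [UInt8]] = [:]
    func thumb(for key: String) -> [UInt8]? { thumbs[key] }
}

// After: class -> actor. The implementation is untouched.
actor ThumbnailStoreV2 {
    private var thumbs: [String: [UInt8]] = [:]
    func thumb(for key: String) -> [UInt8]? { thumbs[key] }
}

// Callers add await; nothing else about the type had to change.
func show(_ store: ThumbnailStoreV2) async {
    _ = await store.thumb(for: "cover")
}
```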
I'm pretty optimistic that the changes in this document will go really far, and that, combined with continued progress on the SDKs, will make a huge difference. But yeah, I'm not sure if even that would be enough to make changing the default to "complete" justifiable.
However, I have now had a chance to think on this more. I still believe it is important that more be done to help people avoid falling into the traps made possible by using concurrency without the compiler feedback needed to do it successfully. This is, by far, the most common problem I have seen. This mode allows developers to form an incorrect mental model of the system. In turn, this can push them towards feeling like unsafe opt-outs are the only tool they have to make the language work the way they believe it does (or at least should).
But unsafe opt-outs don't change the semantics. This is the worst possible outcome, because now they are actively working to make their code incorrect. And it is incorrect in a way that may never permit a smooth transition to Swift 6 mode. Yet they continue to run into problems, because of course they do. Their mental model of how the language works is wrong. This is a cycle that just further increases frustration, encouraging yet more unsafety.
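Here's the cycle in miniature (hypothetical example): the opt-out makes the diagnostics go away, but the race is still there, and now it's invisible:

```swift
// The "fix" applied out of frustration: claim Sendable without making
// the type actually safe to share.
final class Counter: @unchecked Sendable {
    var value = 0
}

func hammer(_ counter: Counter) async {
    await withTaskGroup(of: Void.self) { group in
        for _ in 0..<1_000 {
            // Compiles cleanly in Swift 6 mode. Still a data race.
            group.addTask { counter.value += 1 }
        }
    }
}
```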
How about something like this?
minimal is expanded to introduce isolation checking within all async function bodies, including closures. I would consider this more true to the spirit of the term "minimal". And I think this would go much further towards progressive disclosure. Your first use of await would introduce you to the checks that are relevant to actually using it successfully.
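For example (hypothetical code), this is the kind of thing minimal mode accepts silently today, even though it's a race the compiler already knows how to catch:

```swift
final class Feed {   // not Sendable, and rightly so
    var items: [String] = []
}

func load(into feed: Feed) async {
    feed.items.append("new item")
}

func refresh(_ feed: Feed) async {
    // minimal: not a peep. complete: error about capturing the
    // non-Sendable 'feed' in an 'async let' binding.
    async let first: Void = load(into: feed)
    async let second: Void = load(into: feed)
    _ = await first    // two child tasks mutating
    _ = await second   // the same array concurrently
}
```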
However, we still have to acknowledge the state of the world today, so I guess we'd also need a strict-checking mode of none. Or perhaps unsafe? Or maybe it isn't possible to change the meaning of minimal, so instead we introduce a new (and default) mode of usage?
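For reference, this is the dial as it exists today in a Package.swift (as far as I know; the exact feature spellings have shifted between toolchains), which is where any new mode would presumably slot in:

```swift
// swift-tools-version: 5.10
import PackageDescription

let package = Package(
    name: "MyModule",
    targets: [
        .target(
            name: "MyModule",
            swiftSettings: [
                // Today's options are minimal (the default), targeted,
                // and complete; this opts the target into complete.
                .enableExperimentalFeature("StrictConcurrency")
            ]
        )
    ]
)
```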
I'm not 100% sure what the right balance would be here, and these ideas are very much off the top of my head. But I feel pretty strongly that we have to try to help. I know a lot of damage has already been done, but I think we have to do something to limit it from spreading further.
Just finished writing some maths code, and can confirm that my mental model is (in some cases, maybe, was) all over the place... I really wanted to try out the Swift concurrency system, and actually ended up with something that works, albeit after quite a few false starts. What I found most difficult was shifting from GCD's culture of "Design And Build A Subsystem", which was hard, but doable, to this hand-wavy "move some work off the main thread" business. Tiling was interesting - I ended up sizing everything for the caches, and just letting TaskGroup get on with it (fine on Apple Silicon, but I wonder how well this approach would work on less cache-abundant systems). Don't know if that's the right mental model to have, but it seemed to work.
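For the curious, my tiling was roughly this shape (hypothetical example with made-up numbers):

```swift
// Size chunks for the cache, hand tiles to a TaskGroup, and let the
// runtime schedule them.
func sumOfSquares(_ values: [Double]) async -> Double {
    let tileSize = 32_768  // picked so a tile sits comfortably in cache
    return await withTaskGroup(of: Double.self) { group in
        var start = 0
        while start < values.count {
            let end = min(start + tileSize, values.count)
            let tile = Array(values[start..<end])  // each task owns its tile
            group.addTask {
                tile.reduce(0) { $0 + $1 * $1 }
            }
            start = end
        }
        // Combine partial sums as child tasks finish.
        return await group.reduce(0, +)
    }
}
```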
Maths problems are easy in this regard, however. No (or very little) state to maintain. An Actor did the job, when necessary, but again, I didn't need to worry about re-entrancy. What was most frustrating was that this project ought to have been made for Swift concurrency, and it was still tricky in places. And still littered with unsafe opt-outs... Can't expect the compiler to work out that all the operations on regions of shared memory are on different regions (or can I? or will Vector & Co. solve this?).
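The opt-out in question looked something like this (hypothetical example): each task writes a disjoint slice of one buffer, which I know is safe but the compiler can't verify:

```swift
// The unsafe wrapper: I promise the regions don't overlap.
struct RawBuffer: @unchecked Sendable {
    let base: UnsafeMutablePointer<Double>
}

func makeTable(count: Int, tileSize: Int) async -> [Double] {
    let buffer = UnsafeMutableBufferPointer<Double>.allocate(capacity: count)
    defer { buffer.deallocate() }
    let raw = RawBuffer(base: buffer.baseAddress!)
    await withTaskGroup(of: Void.self) { group in
        var start = 0
        while start < count {
            let range = start..<min(start + tileSize, count)
            group.addTask {
                // Each task fills only its own disjoint range.
                for i in range { raw.base[i] = Double(i).squareRoot() }
            }
            start = range.upperBound
        }
    }
    return Array(buffer)
}
```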
Anyway, this was very much an experiment, and ought to have been done with Accelerate, but I was curious. And the key newbie question I still have is: when we refer to "structured concurrency", do we mean that it already has a structure, which we should embrace, or that we need to structure it correctly, when we use it?
Also, a good approach to tutorials/documentation might be a "Swift concurrency recipe book" that never ever mentions migrating from GCD, and covers topics like subsystems, parallel execution, ordering, buffering, etc. I quite understand the priority initially given to existing code bases, but it's been a while now, and there must be more to Swift concurrency than wrapping a dispatch queue in an Actor or an AsyncStream...