SE-0466: Control default actor isolation inference

I've been fretting over this proposal for a while. The tl;dr is that I support it, but I want to talk about why I support it.

We need to smooth out the progressive disclosure curve for Swift 6, and concurrency is definitely a topic that should come "later" in that curve. Prior to Swift 6, we got away with avoiding talking about concurrency early in the developer's journey with Swift by basically ignoring it. Newer Swift programmers are writing mostly single-threaded things, so ignoring concurrency works... until it doesn't. It was always a false sense of security, because from your first await you've now introduced the potential for data races. It might not even be your code that does so, if you're using a library that introduces concurrency. Data races are worse than the other ways in which you can get pushed further along the progressive disclosure curve too early: unlike other advanced concepts, there's no scary-looking syntax like withUnsafeBufferPointer or repeat each or ~Copyable to hint that you're hitting complexity: a simple await can do it. And unlike other complex features that can be part of a library's implementation without bleeding into the interface, data races can bleed through anywhere.
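To make the "a simple await can do it" point concrete, here is a minimal sketch (names hypothetical) of straight-line-looking code where a single suspension point opens a window for interleaving:

```swift
// Hypothetical example: a seemingly single-threaded balance check.
final class Ledger {
    var balance = 100
}

// Looks like straight-line code, but the `await` is a suspension point:
// between the guard and the write, other tasks can run and also touch
// `ledger`, so the check-then-act sequence is no longer atomic.
func withdraw(_ amount: Int, from ledger: Ledger) async {
    guard ledger.balance >= amount else { return }
    await logTransaction(amount)   // suspension: state may change under us
    ledger.balance -= amount       // acts on a possibly stale check
}

func logTransaction(_ amount: Int) async { /* e.g. write to disk */ }
```

Nothing in the syntax warns you; the hazard arrives with the first await.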

The balance we need to strike here is to not expose data-race-safety issues to Swift developers until they are ready to tackle concurrency, but without sweeping the problem under the rug as we did before Swift 6. Part of that is not introducing concurrency unless it was asked for, and I think the under-review SE-0461 handles the only place in the language where we were doing that (but shouldn't have).

Now, much of my struggle with this proposal is that two things are simultaneously true:

  1. Most code doesn't care what isolation domain it's running in
  2. Having even a small amount of code that does care where it's running (say, the main actor) can cause ripples through a huge amount of the code base

The struggle we're seeing with progressive disclosure for Swift 6 is that there are some things---the UI stack, global variables, etc.---that put the seeds of main-actor-ness into a code base. These are things you want to do very early on in your programming journey, and since you are starting single-threaded, it's totally fine to make these assumptions.

The problem is that they ripple, so you end up having to sprinkle in @MainActor or await or Sendable everywhere to try to stop the spread of main-actor-ness through your code that (we assumed) is non-isolated. There's no way to stop this spread: whether a function needs to run on a specific actor has to be part of its interface.
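The ripple can be sketched in a few lines (names hypothetical; diagnostics paraphrased from the Swift 6 language mode):

```swift
// A single @MainActor requirement ripples outward to every caller.
@MainActor func updateStatusLabel(_ text: String) { /* touches UI state */ }

// This helper assumed it was "nowhere in particular" (nonisolated)...
func finishSync() {
    // updateStatusLabel("Done")
    // ^ error: call to main actor-isolated global function
    //   'updateStatusLabel' in a synchronous nonisolated context
}

// ...so either it becomes @MainActor too, and its own callers ripple in turn:
@MainActor func finishOnMain() {
    updateStatusLabel("Done") // OK
}

// ...or it goes async and hops with `await`, and now every caller must await:
func finishAsync() async {
    await updateStatusLabel("Done") // OK
}
```

Either escape hatch changes the function's interface, which is exactly why the spread can't be stopped locally.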

This proposal is saying that, because we cannot determine what code needs to be on the main actor (i.e., how far the ripples will go) without forcing the user to do annotation, we should assume everything is on the main actor. That grates against my sensibilities, because I still suspect that most Swift code actually doesn't care where it runs. However, it's okay to assume that it needs to run on the main actor when all of the code around it makes that assumption, because it's a restriction that's safe to lift later.

When the user does dip their toes into concurrency, they'll have to unwind some of these assumptions. However, they'll be able to do so on their terms, at those places where they have introduced concurrency. I remain concerned about how we get out of this mode when it's no longer necessary: can we provide a migration to turn this feature off, if the user decides they want to go multi-threaded in general? It's probably going to require whole-module analysis, but it seems plausible.

I remain concerned about the fact that we'll have two modes forever, but I don't see a way around it. And as long as we provide a smooth, well-lit pathway from the single-threaded assumption to data-race-free use of concurrency, I think we'll be able to meet our goals for progressive disclosure.

Doug

18 Likes

Yes, I think we can provide a migration in both directions. The default nonisolated -> @MainActor direction is pretty straightforward, because the tool should just insert nonisolated on all declarations that use the unspecified isolation. The default @MainActor -> nonisolated direction is trickier, because I think the desired behavior is that the tool only suggests an explicit @MainActor on declarations that would be invalid without it, e.g. on mutable global variables, functions that call @MainActor APIs from dependencies, etc. It seems like the migration could infer whether to suggest @MainActor on a function based on its implementation, which of course requires whole-module analysis because functions called in the implementation that are within the module may also need the same inference. Types might be tricky too, because we might need a rule that keeps @MainActor if the type needed the implicit Sendable conformance provided by the global actor but wouldn't otherwise have an implicit Sendable conformance based on storage. I agree that it seems plausible for us to come up with a strategy that's better than "slap @MainActor on everything that currently uses the default isolation" to migrate to default nonisolated mode.
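A rough sketch of the kinds of declarations such a tool would have to distinguish (hypothetical example; not the actual migration tooling):

```swift
// 1. A mutable global: unprotected shared mutable state is an error in the
//    Swift 6 language mode, so this declaration must keep its isolation.
@MainActor var appSettings: [String: String] = [:]

// 2. A function that touches @MainActor state (here, the global above, or a
//    @MainActor API from a dependency) must stay on the main actor too.
@MainActor func applyTheme() {
    appSettings["theme"] = "dark"
}

// 3. A type that relied on the implicit Sendable conformance that global-actor
//    isolation provides: without @MainActor its reference-typed storage would
//    leave it non-Sendable.
@MainActor final class SessionController {
    var delegate: AnyObject?   // not Sendable on its own
}

// 4. Most other declarations could simply drop the implicit @MainActor
//    and become nonisolated with no change in behavior:
func formatTimestamp(_ seconds: Int) -> String {
    "\(seconds / 60)m \(seconds % 60)s"
}
```

Cases 1-3 are the ones the tool would annotate; case 4 is the (presumably large) remainder that migrates silently.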

I understand the motivation and the fact that this will solve a lot of problems, but the part about making @MainActor the default will create a lot of problems when it comes to isolating methods from the main actor.

@MainActor should be the special case for the UI, not the default for the whole app. With this proposal we are solving one problem by creating another: making it difficult to detach methods from @MainActor, which is the case maybe 70% of the time.

2 Likes

This is a great example of how this proposal will hurt codebases. This app will now have different defaults set per module rather than one for the whole codebase, tucked away in a per-module compiler flag no engineer is going to see.

2 Likes

I think between this proposal and "don't touch anything" there's also a third way: encourage and promote functional thinking and functional code when teaching Swift. Apple already recommends using structs by default; that's one step in that direction. Avoiding global vars would be another. Then avoid state machines as much as possible, or if you do use a state machine, isolate it, i.e. define an actor.
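The "if you do use a state machine, isolate it" suggestion might look like this minimal sketch (names hypothetical):

```swift
// Wrapping mutable state in an actor so every transition is serialized:
// no locks, and the compiler enforces that all access goes through the actor.
actor DownloadStateMachine {
    enum State { case idle, downloading(progress: Double), finished }
    private(set) var state: State = .idle

    func start() {
        guard case .idle = state else { return }
        state = .downloading(progress: 0)
    }

    func report(progress: Double) {
        guard case .downloading = state else { return }
        state = progress >= 1 ? .finished : .downloading(progress: progress)
    }
}

// Callers hop into the actor with `await`:
//   await machine.start()
//   await machine.report(progress: 1.0)
```

The rest of the code can stay functional and nonisolated; only the stateful core needs an isolation domain.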

Otherwise, I join those who fear that this duality (full concurrency vs. main actor by default) might never go away and Swift will be stuck with essentially two dialects.

I also think the biggest obstacle for Swift to become a "pure" concurrent language is SwiftUI. As a developer I would really appreciate it if more effort was put into making it more concurrency-friendly, because today it's not, and maybe that's where most of the complaints are originating from.

In fact, it's not that hard to build a UI app in Swift 6 from scratch today, but legacy stuff can pull it back onto the main actor. It's just a matter of time until that all goes away, kind of like Python 2 did.

1 Like

Unless you start using a framework like Observation (which I don't consider legacy stuff at this point), which requires classes for all Observable types.

Classes still have many useful use cases and are not going away, especially in high-level frameworks, so encouraging people to ignore them will not work.

1 Like

The recommendation is to use structs by default, not always, i.e. try a struct first and see if it works for you. This is a significant shift from OOP's "class by default" formula.

Observables aren't going away, but the reason you usually mark them as @MainActor is, again, SwiftUI.
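The pattern in question, for reference, is a model class that is both observable and main-actor-isolated (a sketch; requires the Observation framework, i.e. iOS 17 / macOS 14 or later):

```swift
import Observation

// SwiftUI reads this model from views, which run on the main actor,
// so the model itself typically ends up @MainActor too.
@MainActor @Observable final class CounterModel {
    var count = 0                 // observed by SwiftUI
    func increment() { count += 1 }
}
```

Once the model is @MainActor, everything that touches it inherits the ripple discussed earlier in the thread.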

UI apps are stateful by their nature, but I think things could be simplified for us by providing some hidden synchronization mechanism behind @State and Observation to make them more concurrency-friendly.