This feels like 'We're warning you now - but if you don't fix it, there's going to be a problem'
As opposed to - some day, we hope you'll choose to opt in to our new shiny - but it will always be optional.
I have working code - I don't like having warnings. I don't know how to make them go away.
But it looks like I have to do work to make the compiler happy.
Note - I have Swift 5 language mode on, minimal checking, and all the optional features turned off.
This doesn't feel very 'opt in'
That's a fantastic article; it actually explains very well that even the apparently simple and most contrived bit of work hides real, essential complexity, which the language, having adopted a sophisticated and safe concurrency model, properly accounts for.
I think that's not the right takeaway from the article. The complexity is real, and it's there, but it's not swept under the rug.
Another recent thread on string subscripting shows a similar pattern: something seems simple, but it's actually complex, and the language correctly accounts for the complexity.
Incidentally, the article shows one key confusing pain point with Swift right now, which has to do with defaults (rather than with actual structural problems in the strict concurrency checking model).
The error
Non-sendable type 'DataModel' returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary
is saying that an instance of DataModel, which is not Sendable, is "crossing actor boundary" (as mentioned in the article, the "implicitly" is caused by a bug that has already been addressed).
But it's not clear which boundary it is crossing. The error mentions that there is a call to a nonisolated function, and this is the key piece one needs to understand: the closure passed to .task runs on the MainActor (being part of a SwiftUI view), but the loadModel function is not isolated to the MainActor, so everything going in and out of it must be Sendable (including, counterintuitively but correctly, the store instance).
The problem is that a call to a nonisolated async function causes an isolation switch, which would not happen if the function were not async (in fact, there would be no error if loadModel were synchronous).
None of this would be a problem if loadModel had the same isolation as the closure passed to .task: this seems to be a more sensible default and, as the article mentions, it's being addressed. I'm not 100% sure about it, but my impression is that most of the confusion about strict concurrency checking in Swift would not exist if nonisolated async functions already inherited the caller's isolation, like sync functions do.
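To make the scenario concrete, here's a minimal sketch (type and member names are illustrative, not taken verbatim from the article):

```swift
import SwiftUI

// Not Sendable: a plain class with mutable state.
class DataModel {
    var text: String = ""
}

class Store {
    // nonisolated async: under current defaults this runs off the
    // caller's actor, so the non-Sendable DataModel it returns must
    // cross an isolation boundary.
    func loadModel() async -> DataModel {
        DataModel()
    }

    // A synchronous version runs in the caller's isolation:
    // no boundary is crossed, no error is produced.
    func loadModelSync() -> DataModel {
        DataModel()
    }
}

struct ContentView: View {
    let store = Store()
    @State private var text = ""

    var body: some View {
        Text(text)
            .task {
                // The closure is MainActor-isolated (it belongs to a View),
                // but the await hops off the main actor and back:
                // error: Non-sendable type 'DataModel' ... cannot cross actor boundary
                let model = await store.loadModel()
                text = model.text
            }
    }
}

// One fix today: give the store the same isolation as the caller.
@MainActor
class MainActorStore {
    func loadModel() async -> DataModel { DataModel() }
}
```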
This is not a problem with the fundamental design of the concurrency model. Like @ExFalsoQuodlibet said, it's specifically a problem with the default execution semantics of async functions, which is addressed by the vision document.
A model contains a string.
A store asynchronously loads that model and returns it.
Swift 5
No problem. Just call an async function, it'll run on a background thread, and you can use the result.
Be aware that there are some subtleties that in specific edge cases might matter to you.
Swift 6
One does not simply 'load things asynchronously'
Buckle up and put your thinking cap on; this is going to be a long and complicated lesson...
Now as long as Swift 6 is forever opt-in, I have no complaints.
It's great for some people/teams.
I'm just not sanguine that it will be...
By adopting concurrency features at all, you've explicitly opted into minimal concurrency checking (and perhaps more, depending on your compiler settings, despite building in Swift 5 mode). In fact, the easiest version of concurrency is the Swift 6 compiler in Swift 5 mode, set to minimal checking, with region-based inference turned on, as well as various other enhancements to concurrency checking from the upcoming features, like key path inference. You can also disable a lot of checking by using @preconcurrency, both in your imports and in the API you vend from other modules.
The compiler can only look at the surface of the types involved (and will, unless some very major updates, which almost no language has, are made to the language). In this case, the surface says that the model and the store do not guarantee data-race safety, and the method being executed (which, as far as the compiler knows, could refer to members of the store) is declared async, so by default it entails an isolation switch, which in this case happens not to be necessary.
The point of a toy example is not to produce an actual case of complex concurrently executing code (which will certainly be found in complex codebases, whether we realize it or not), but to point out in the most basic way the rules the compiler will follow to guarantee safety in a real, complex case. The counterintuitive discovery is that, by defining things a certain way at the surface (even if the underlying implementation is very basic), we open ourselves to potential problems in the future that, when inevitably introduced, will likely cause data races. This is conceptually similar to force unwrapping a received Optional that is certainly not nil at the time of writing the code, but might become nil in the future without anyone realizing it.
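The Optional analogy can be made concrete with a small hypothetical example:

```swift
// Today, lookup("answer") never returns nil, so the force unwrap
// looks "obviously safe" at the call site.
var config: [String: Int] = ["answer": 42]

func lookup(_ key: String) -> Int? {
    config[key]
}

let value = lookup("answer")!  // fine today...

// ...but an unrelated future change can remove the key, and the
// same line then crashes at runtime:
// config.removeValue(forKey: "answer")
// let crash = lookup("answer")!  // fatal error: unexpectedly found nil
```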
Vorlon, I think we understand your perspective, and we'll give it some consideration. Your posts and the responses are kind of dominating the thread now, though, so I'd like to give other people an opportunity to give their own feedback.
This vision document really captures the struggles I'm having with Swift 6.
Write sequential, single-threaded code.
This applies to most of my code, as my focus is on education. Without a simple way to disable concurrency checking for a module or package, I simply cannot adopt Swift 6.
Another feature that would improve approachability would be more granular control for non-Sendable members that are protected by locks/mutexes/..., instead of needing to mark the whole class as @unchecked Sendable, which is not future-proof. Currently you can be sure everything is data-race safe when you add the unchecked annotation, but future changes to the code can introduce data-race issues that go unnoticed, because you have essentially silenced all the warnings in that class.
If the member is protected by a lock or mutex, then you should use the Mutex type (or OSAllocatedUnfairLock which can back-deploy on Apple platforms) to wrap that property and provide checked semantics. Then there is no need for @unchecked Sendable. If that doesn’t work for you (for example, because you need to share a single mutex between multiple objects), I recommend creating an explicit @unchecked Sendable class/non-copyable struct that implements only the specific semantics that you need, and then make that type a property of your bigger type. Organizing your code that way means that the complex, unchecked functionality remains small, testable, and self-contained.
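A minimal sketch of that structure, assuming the Synchronization module's Mutex is available (Swift 6 standard library; names are illustrative):

```swift
import Synchronization

// The mutex-protected state lives behind Mutex, which provides
// checked Sendable semantics: no @unchecked Sendable needed, and
// future stored properties of Counter remain fully checked.
final class Counter: Sendable {
    private let count = Mutex<Int>(0)

    func increment() {
        count.withLock { $0 += 1 }
    }

    var value: Int {
        count.withLock { $0 }
    }
}
```

Because Mutex's withLock hands the protected value to the closure as inout, there is no way to touch the state without holding the lock, which is exactly the static guarantee the post above is describing.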
The problem is similar; it just moves the uncertainty from one point to another. It would be great if such members could be annotated with the mutex/lock that protects them, and if that usage could be validated by the static analyzer.
The tools to get static guarantees about safety of the specific lock-protected state are Mutex or (on Apple platforms) OSAllocatedUnfairLock as @j-f1 notes above. And to avoid silencing all the warnings in a particular class you can get per-property granularity with nonisolated(unsafe). I'm not exactly sure what the middle ground you're searching for on this spectrum looks like, or why the existing tools are insufficient.
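For reference, a hypothetical sketch of the per-property escape hatch, with an NSLock standing in for whatever lock actually protects the state:

```swift
import Foundation

final class Cache: Sendable {
    // We assert, unchecked by the compiler, that this one property is
    // only ever touched while holding `lock`; the rest of the class
    // remains fully checked.
    nonisolated(unsafe) private var storage: [String: Data] = [:]
    private let lock = NSLock()

    func value(for key: String) -> Data? {
        lock.withLock { storage[key] }
    }

    func set(_ data: Data, for key: String) {
        lock.withLock { storage[key] = data }
    }
}
```

Unlike Mutex, nothing here stops a future method from reading storage without taking the lock, which is the residual uncertainty the previous post is pointing at.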
Google has been developing lock-safety annotations in Clang that are meant to allow the compiler to detect when memory is used without holding the lock that protects it. In theory, all you need to do is add attributes to say that e.g. field X is protected by lock Y, method Z is always called while holding lock Y, and so on. But that's a pretty big feature to duplicate something that we feel is already handled pretty well by the design of Mutex, and in practice it does require a lot of annotations and (probably) some restructuring to fit the limits of the analysis; I don't think we're likely to add anything like that to Swift.
Thanks for writing this up. I only now got around to replying, so please excuse me if I am reiterating points that have been made upthread. In general, I agree with the sentiment that data-race safety is currently too hard to approach and requires developers to think about it too early. This goes against the progressive disclosure goal of the language.
One of the largest offenders that leads to developers having to learn concurrency concepts early on is how non-Sendable types interact with async code. I think the [Pitch] Inherit isolation by default for async functions will solve a huge number of these issues. The only other remaining thing that I see people running into very early on is mutable global variables (even of Sendable types). Addressing those two should, in my opinion, already resolve a great deal of data-race errors.
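For illustration, the mutable-global case in its simplest form (names are made up):

```swift
// Under strict checking this is rejected:
// error: var 'appName' is not concurrency-safe because it is
// non-isolated global shared mutable state
var appName = "MyApp"

// Common fixes:
let appTitle = "MyApp"                         // make it immutable, or

@MainActor var mainActorAppName = "MyApp"      // isolate it to an actor, or

nonisolated(unsafe) var legacyAppName = "MyApp" // opt out of checking entirely
```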
Now, the vision doc calls out a new mode that puts a whole module into @MainActor isolation by default. While I understand this quite well from a UI application developer perspective, I am very concerned about this mode for server applications and also for more complicated CLI apps. Specifically, server applications are often expected to be highly concurrent, and we don't want users automatically opted into a single isolation domain. I can already foresee the amount of discussion around slow server performance if we make that the default mode for executableTargets.
If we think having such a mode is good and doesn't create a language dialect problem, then I would at least suggest that we don't enable this mode by default for executable targets, but rather enable it by default for iOS/macOS/... apps. This can be decided by IDEs such as Xcode, but I wouldn't make it a default in SwiftPM.
I was also very concerned about this and agree it would be highly unfortunate, but reading it carefully again, the vision document actually states:
We believe that the right solution to these problems is to allow code to opt in to being “single-threaded” by default, on a module-by-module basis.
Emphasis added by me. Unless SwiftPM or Xcode starts to opt in automatically, it's not so bad. But if they choose to opt in to this new single-threaded mode by default, it would be highly unfortunate, I think, especially for SwiftPM.
As a side note, only tangentially related: the dream would really have been for UI frameworks such as SwiftUI to support a different model, with e.g. a concurrency domain per Window in the application (or chosen by the developer), rather than the serialised sledgehammer that the @MainActor annotation is. I'm probably missing something, but I don't see why so much of the UI layer needs to be serialised at all. Ok, sorry, different soap box.
If you continue reading on in the next section titled Default concurrency rules for executable and library modules the vision document contains the following:
We feel that this amounts to a compelling argument that executable targets should default to inferring main actor isolation.
That's where my concerns are primarily coming from. I don't disagree with inferring @MainActor by default for UI applications but in my opinion this can't be generalised to any executable target.
Thank you for putting this together. It seems well thought out to me. I'm glad those issues are being addressed. I think progressive disclosure for this is a pain point and can be improved.
I'm highly in favour of isolation inheritance being the default.
I'm in favour of a single-threaded build option.
I think module is a good scope. I don't think a file-level option would make that much of a difference for readability, because isolation can already be implicit.
Maybe instead of being implicit for executables, it could be set by default by tools when creating a package/target. I think it’s a good default for all executables, but it would help discoverability, especially when starting to split an executable into several modules.
For information: my experience is with one big iPad App, a couple of scripts, and a toy-project server.