Yes, what I want is basically a wrapper around Dispatch.
In my humble and again most likely wrong opinion, this makes reasoning about your code harder. More so about someone else's code.
I meant that actors themselves require async (there's no choice to stay sync).
import Foundation

var threads: Set<ObjectIdentifier> = []

// Records the current thread and prints whenever a new one is seen.
func registerCurrentThread() {
    let oldCount = threads.count
    threads.insert(ObjectIdentifier(Thread.current))
    let newCount = threads.count
    if newCount != oldCount {
        print("threads used: \(newCount)")
    }
}

// Synchronous version: everything stays on the calling thread.
class C {
    var c1: C1
    var c2: C2

    class C1 {
        var c2: C2

        init(c2: C2) {
            registerCurrentThread()
            self.c2 = c2
            registerCurrentThread()
        }

        func foo() {
            registerCurrentThread()
            c2.bar()
            registerCurrentThread()
        }
    }

    class C2 {
        func bar() {
            registerCurrentThread()
        }
    }

    init() {
        registerCurrentThread()
        c2 = C2()
        c1 = C1(c2: c2)
    }

    func test() {
        registerCurrentThread()
        c1.foo()
    }
}

// Actor version: each await may hop to another executor thread.
class A {
    var a1: A1
    var a2: A2

    actor A1 {
        var a2: A2

        init(a2: A2) {
            registerCurrentThread()
            self.a2 = a2
            registerCurrentThread()
        }

        func foo() async {
            registerCurrentThread()
            await a2.bar()
            registerCurrentThread()
        }
    }

    actor A2 {
        func bar() async {
            registerCurrentThread()
            print("done")
        }
    }

    init() {
        registerCurrentThread()
        a2 = A2()
        registerCurrentThread()
        a1 = A1(a2: a2)
        registerCurrentThread()
    }

    func test() {
        Task {
            registerCurrentThread()
            await a1.foo()
            registerCurrentThread()
        }
    }
}

registerCurrentThread()
let c = A() // change to C to compare the synchronous version
c.test()
RunLoop.current.run(until: .distantFuture)
func main() async {
    print(Thread.isMainThread, Thread.current)
    print("done")
}

print(Thread.isMainThread, Thread.current)
await main()
outputs:
true <_NSMainThread: 0x6000017101c0>{number = 1, name = main}
false <NSThread: 0x6000017103c0>{number = 2, name = (null)}
done
Can you please provide a link to these recommendations, so that I can learn more?
This has always been the biggest weakness of everything Swift and Apple.
First, thanks for putting the work into this document. My TL;DR is that I agree with the direction of the vision document and that it addresses issues I have encountered while enabling strict concurrency for some modules of the main project I work on.
My perspective is that of an iOS (and sometimes macOS) developer working with a small-ish team on a large-ish, 6+ year old, Swift-only code base.
The option to enable MainActor as the default seems very reasonable to me. My personal opinion has always been (heavily influenced by blog posts like this) that even with GCD, many apps profit from being developed with a "main queue first" approach. If needed, batches of work can be moved to a different queue very easily, but applying GCD by default may (MAY!) increase performance, yet it can and often will hurt stability and local reasoning.
When I got pushback from fellow developers on "main queue first", I encouraged them to follow the pattern of NSURLSession and let the user of a class decide on which queue they want to receive callbacks. In practice this always ended up being the main queue, because 1 or 2 steps later the calls are expected to run on the main thread anyway (for GUI-heavy apps).
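For illustration, a minimal sketch of that NSURLSession-style pattern. The names (ProfileLoader, fetchProfile) are invented; the point is that the caller picks the callback queue, and the default keeps things "main queue first":

import Foundation

final class ProfileLoader {
    // The caller decides where the completion handler is delivered; defaulting
    // to .main matches the "main queue first" approach for GUI-heavy apps.
    func fetchProfile(from url: URL,
                      callbackQueue: DispatchQueue = .main,
                      completion: @escaping (Result<Data, Error>) -> Void) {
        URLSession.shared.dataTask(with: url) { data, _, error in
            callbackQueue.async {
                if let data {
                    completion(.success(data))
                } else {
                    completion(.failure(error ?? URLError(.unknown)))
                }
            }
        }.resume()
    }
}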
In the last two years we restructured our apps into modules. Our main goal at the time was to allow us to develop new features via small sample apps and to encapsulate more complicated logic into small modules, strengthening separation of concerns. This enabled us to iterate faster and run tests with less compile time and fewer dependencies. We now have some modules for shared utilities, some for shared services, and a bunch of modules for self-contained UI features. The utilities and services started to adopt async/await, but they also make use of libraries like Combine.
In recent weeks I started to experimentally activate strict concurrency for some of the modules, and I made good progress with the utility modules and some of the services. I focused on the ones that I wanted to make callable from all isolation contexts, but some classes I just put on the main actor. I also converted a few into actors, for example when the API was already asynchronous and there was only a small amount of private state for bookkeeping. We default to using structs for data exchange, and adding Sendable to them was quite straightforward.
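A rough sketch of what those two migrations can look like (all names here are invented; this is an illustration, not our actual code): an already-async API with a little private bookkeeping state becomes an actor, and a plain data-exchange struct gains a Sendable conformance:

// Value type used for data exchange; trivially Sendable.
struct UserProfile: Sendable {
    let id: String
    let displayName: String
}

actor ProfileCache {
    // Small amount of private bookkeeping state, now protected by the actor.
    private var cache: [String: UserProfile] = [:]

    func profile(for id: String) async throws -> UserProfile {
        if let cached = cache[id] { return cached }
        let profile = try await loadProfile(id: id) // the API was already asynchronous
        cache[id] = profile
        return profile
    }

    private func loadProfile(id: String) async throws -> UserProfile {
        // Placeholder for the real network or database call.
        UserProfile(id: id, displayName: "placeholder")
    }
}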
But when I tried to enable strict concurrency for some feature modules, I lost confidence that I am on the right path, because I felt I was just adding MainActor annotations onto everything, and I started wondering if I'm doing something stupid or if this is not something the language can help me with. The vision document shows me that (maybe despite what others are suggesting) you work closely with developers like me who face these issues and try to find pragmatic solutions, which makes me quite optimistic.
I also hope that the changes following this vision will help us with our current test targets, which are based around XCTest. In many test cases each test class member is now annotated with MainActor, because annotating the whole class is not possible due to inheritance.
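A minimal illustration of that situation (the view model and tests are made up for this sketch): because the XCTestCase subclass itself cannot carry the annotation, each member gets its own @MainActor:

import XCTest

// Hypothetical main-actor-bound type under test.
@MainActor
final class FeatureViewModel {
    private(set) var items: [String] = []
    func add(_ item: String) { items.append(item) }
}

final class FeatureTests: XCTestCase {
    // The whole class cannot be @MainActor (it inherits from XCTestCase),
    // so each member that touches main-actor state is annotated individually.
    @MainActor
    func testStartsEmpty() {
        let viewModel = FeatureViewModel()
        XCTAssertTrue(viewModel.items.isEmpty)
    }

    @MainActor
    func testAddingAnItem() {
        let viewModel = FeatureViewModel()
        viewModel.add("first")
        XCTAssertEqual(viewModel.items, ["first"])
    }
}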
I can also see myself using the equivalent of asyncAndWait in rare places. For example, I encountered issues with a function that called UIDevice (which is MainActor-only) where introducing async/await was not really practical. I could tackle this by putting only the initializer of the type hosting this function on the MainActor and getting the value in question early, but it's easy to imagine that this could have caused other code to break.
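For illustration, a sketch of that recipe with hypothetical names (DeviceInfoProvider is invented): the initializer runs on the main actor and reads the MainActor-isolated UIDevice value once, so the rest of the type stays callable from any isolation context:

import UIKit

struct DeviceInfoProvider {
    private let model: String

    @MainActor
    init() {
        // UIDevice is MainActor-isolated; capture the value early, once.
        self.model = UIDevice.current.model
    }

    // Callable from anywhere, no await needed.
    func deviceDescription() -> String {
        "Running on \(model)"
    }
}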
But this example of a recipe I found to tackle the issue at hand brings me to my final point, one that I seemingly share with many in this thread and that was already acknowledged by some language developers: we as a community need more guidance on how to use strict Swift concurrency correctly and how to tackle real-world issues. IMHO, it's also not helping that frameworks like Combine seem to be out of scope in contexts where Swift the language is the focus, because they are Apple frameworks. But usage of Combine is a reality in my migration journey, and I have not been able to formulate a migration strategy that I feel confident in, so I wish that when more documentation is provided, it also addresses developers who have adopted popular Apple frameworks, even though I know there are other audiences.
Nevertheless, thanks for putting this all together. I know I may not have contributed much technical feedback with this post, but I felt there was some pushback against this vision document, so I wanted to state that I firmly believe it's going in the right direction.
Here are some second thoughts about the prospective vision.
First, I have high hopes in what the vision document calls "Single-threaded code", even if I think it should quickly be renamed "single-isolation-domain code".
Next, I'm not sure that at this stage a bottom-up approach will give good results. Building a sound system and hoping for the best once it was put in the hands of developers was useful when concurrency was introduced, because soundness was paramount.
Now a top-down approach can be useful. We could even "design for the worst", i.e. design with the most complex programs in mind, and try hard to be useful to them.
Among those very complex programs are server apps. They are intensely parallel. Still, could we help server frameworks spare their users from bothering much with concurrency? For example, if a framework could specify that app startup is performed on the main thread, and that each request is performed on a specific isolation, could that simplify user code? Those are two facets of the framework where the user can be expected to write "single-isolation-domain code", even though the app as a whole runs in multiple isolation domains.
This has me thinking that libraries should be able to define "concurrency contexts". The stdlib would define the context where all user code runs on the main actor. Libraries can define their own contexts, which tell the compiler what can be assumed in the code that opts in to this context.
User code would look like:
import MyServerFramework
context Startup // defined by MyServerFramework
// <- Here code is assumed to run on the `Startup` global actor
context RequestHandling // defined by MyServerFramework
// <- Here all code is assumed to run on the
// isolation of the caller (as if all methods had an
// `isolation: isolated (any Actor)? = #isolation`
// argument).
You can see above that the concurrency context is set for all code that follows a context directive. Contextualized code is not indented. Being able to use several contexts in a single file helps people write self-contained sample code (e.g. framework showcases, or user code reported by people looking for support).
You can see above that [Pitch] Inherit isolation by default for async functions is addressed with the RequestHandling context.
The stdlib defines the MainActor context:
context MainActor
// <- Here all code is assumed to run on the main actor
Libs can define a default context. For example, SwiftUI would define MainActor as its default context:
import Lib
context import SwiftUI // use the default context of SwiftUI
import OtherLib
// <- Here all code is assumed to run on the main actor
Ambiguity is resolved by specifying the module name:
import Lib1
import Lib2
// context Startup // error: ambiguous
context Lib2.Startup
Package targets can choose a default context:
.target(name: "MyTarget", context: "MainActor")
It looks like these concurrency contexts would give a lot of space for further design.
I wish that one day Swift concurrency becomes mature enough to embrace the concept of Communicating Sequential Processes.
I also look forward to seeing a coherent specification and guide document that explains and teaches this thing to programmers of all levels, beginners and experienced, without discriminating.
I get where you are coming from, but I strongly advise against this.
Also, your post made me understand more clearly why I keep hand-wavingly shaking my head at so many posts here ; ) I'll try to explain why it triggers me so much.
Building stuff that works in any context is really not rocket science if you follow a few basic principles: avoid shared mutable state, and if you need it, make it "thread-safe" (locks or actors). If "parallel" things need to talk to each other, have them talk via messages (i.e. sendable types that are exchanged, e.g. via AsyncSequences). The rest should just be plain old code doing one thing at a time, staying in its lane (async or not, it doesn't make much difference). At least that's how I see it.
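To make that concrete, a toy sketch (all names invented): no shared mutable state, and the two sides talk only through Sendable messages sent over an AsyncStream:

// An immutable message type; trivially Sendable.
struct Measurement: Sendable {
    let value: Double
}

func runPipeline() async {
    let (stream, continuation) = AsyncStream.makeStream(of: Measurement.self)

    // Producer: owns its own state, shares nothing with the consumer.
    let producer = Task {
        for i in 0..<5 {
            continuation.yield(Measurement(value: Double(i)))
        }
        continuation.finish()
    }

    // Consumer: receives immutable messages, one at a time.
    for await measurement in stream {
        print("received \(measurement.value)")
    }
    await producer.value
}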
Why is it so hard then? Because until recently, nobody really told you where you have accidental shared mutable state, and once you do, all bets are off.
Swift 6 isolation checking is a god-send (bugs and glitches aside), because once fully in place, you see exactly where your shared mutable state is (which you should minimize anyway). You can then a) not share it, or b) make it safe -> wonderful.
The problem I have with global actor annotations is that they are a very crude hammer, and they basically say "share away, anything goes, we'll dispatch everything back and force sequenced access". But what that does is that suddenly different bits of independent state are now forced into a single isolation. Every type you curse with this annotation is now forever useless in any other context, and you force all code that uses it to queue up on that isolation.
Sure, you can say "why not, what's the problem?" In my mind the real danger is that it just creates bad code and bad designs that are neither reusable nor composable. Just imagine the ecosystem being filled with types where each comes along with its random global isolation, or worse, the main actor, just so they can access each other in a sloppy way more easily.
To me, this feels a bit like placing a ton of global public static vars in your code so you can easily configure it. Sure, we all do it sometimes, but it's bad design that does not compose well. Just don't share mutable state unless you have to, and don't entangle otherwise independent things.
I am fully aware that the UI basically is (or is driven by) one big pile of shared state that you want to mutate easily from a lot of places, and @MainActor is a nice tool to easily get access to it. But the basic rules still apply in my mind. Every type you curse with @MainActor is no longer "just code", but "UI code". If you find yourself annotating all your unit tests with @MainActor, you should think about why that is.
Whew, sorry for the wall of text, but it seems I had opinions that wanted out ^^
In summary, I think the proposed vision does describe the situation well, and I am generally in favor. I simply want to warn that "@MainActor everywhere" goes against a few tried and true principles in software development (like reusability, composability, and to a degree testability), and I believe it nudges people towards bad design choices.
I don't quite know if you understood my post as suggesting that everything should run on the main actor. It was a little more subtle than that, in order to address, precisely, more needs than the plain and brutal "single-threaded mode" in which you see the risk for some sloppiness, and in which I see nothing but trivial programs that can't address the needs of developers who need to do something useful for getting paid.
In my understanding, the vision document is not trying to push developers into a kindergarten. Instead, it is addressing the usability problems that we have all met, if not fully understood, since concurrency was introduced.
Usability is a concern for language designers, and also for API designers. I tend to think that a language design that is too limited will hinder API designers, and won't solve enough usability problems.
My post was not really a response to your idea directly, but actually I did slightly misunderstand your post, sorry ; ) I thought you were suggesting "let's just have more global actors" - but it was more subtle.
An interesting idea for sure, but my "global actors are bad and you should feel bad" attitude remains (with the narrow exception of code that directly drives the UI and needs a way to dispatch back, for which @MainActor is fantastic).
it gives me no joy to argue this, but this strategy breaks down as soon as you have to use any of the "popular frameworks". much has been said of the various Apple frameworks, but this is also true of the SwiftNIO library, which is ubiquitous on the server. is ClientBootstrap a bag of shared mutable state? absolutely! is it "safe" to share across isolation boundaries? absolutely not! and yet ClientBootstrap exists and you can't just not use it just because it has shared mutable state.
part of the reason i find these debates so frustrating is we often hear "well it's not the language's fault, it's all these popular frameworks that are the problem", and this is correct, but also not helpful to everyone who still needs to use the "concurrency-unfriendly" frameworks. we need to see massive investment from Apple in both its proprietary frameworks and the larger sphere of open source Swift libraries to catch them up to the language's evolution.
I am not sure I follow how my point would not apply here as well, and in particular how @MainActor-all-the-things would improve anything here?
as much as I like to complain about problems in other people's code as the next person, I would politely suggest (for this thread) that we should all dial down the venting and complaining a bit about whichever Apple framework hurt us the most, and try to focus on what the proposed vision means for the future of the language.
where we disagree, i think, is in the separation between the "language" and "the libraries". the future of the language is inextricably linked with the future of the libraries. i don't think we should operate under the mindset of "we can't let the libraries slow down the progress of the language". vision planning for the language needs to lay out concrete commitments for transitioning core components of the ecosystem instead of just stating that the ecosystem will need to change.
I think that old inessential blog post is quite instructive. This might be old thinking, but when I read that post I focus on two things:
In terms of the usability of Swift Concurrency, sometimes I feel like we "jumped the line" and skipped some of the above steps. We have many ways of concisely applying concrete isolations to code, but few ways of joining isolations or helping them interoperate.
In addition (though this might be my inexperience talking), I'm not sure sharing "an isolation" is actually that useful a thing to share. It represents something, it signals something to the compiler? But is it as obvious what I should do with one when I have it in hand as a queue is? I don't think so.
So personally I think some of this discussion yields to "regular" API design principles as we would ordinarily use in "user space". Lots of Concurrency APIs bake in access to special objects or concepts by default without any equally expressive way to override that default. We would criticize a "regular" API for doing this, and I think such a criticism might be valid here as well.
I also think it would help understandability if there were attributes and APIs where all of the applicable parameters are right there in front of one's face, instead of being implicit. If we had those spellings, then we could solve back to the concise APIs we have now by applying sensible defaults. But in this future, all the relevant context would be visible to the programmer if they needed it, for example for overriding or learning.
Thank you for putting this together. I'm excited about a number of ideas being proposed.
Nonetheless, we are tentatively very excited about the potential for this feature to fix problems with how Swift's generics system interacts with isolated types, especially global-actor-isolated types.
In [Review] SF-0011: Concurrency-Safe Notifications, we propose a protocol Message and another called MainActorMessage, the latter existing so that function overloads decorated with @MainActor are used for Message types that are effectively main actor-bound. It would be nice to not have MainActorMessage and instead have the generics system select an overload based on global actor conformance, e.g.:
// General overload
func genericFunc<M: Message>(…)

// MainActor overload
@MainActor func genericFunc<M: Message & @MainActor>(…) // some kind of syntax expressing that the generic type has an isolation conformance requirement
This is a very interesting question. We should define what "main thread" and "main actor" are, and what each of them is used for. In the end, they most likely should be separated.
Swift should be approachable on a wide variety of kinds of programs and wide variety of platforms. It's a language, not a GUI framework.
Hi, I've been working with Swift, and Objective-C before it, since the NeXT era, and did concurrency even then using Listener and Speaker to send data between threads, BUT I have real trouble with Swift Concurrency.
I think I have read almost every freely available guide or blog about this topic and heard the explanation of its building blocks hundreds of times, but I'm still missing the big picture.
PLEASE provide a guide on how to really use this stuff, about the mindset you have to adopt to write code using Swift Concurrency, and on how to work with older code and concepts apart from using @preconcurrency.
This would really help. There is a design guide missing, a concepts cookbook or something like that.
I don't know much about Swift usage patterns in server-side development, but I'd like to understand this criticism better.
I agree with the general point that the ecosystem is very important (to me, not as important as the language itself, but still a lot), and that an evolution plan for the language must take the ecosystem into account. I might be missing something obvious here, but specifically about the strict concurrency checking model, I understand that the compiler will use sendability, and other related concepts, to understand if a piece of code is safe, given its boundaries. About the code at the boundaries, I see 3 options:
Case 3 creates no issue. Case 2 can be worked around by using @unchecked and other tools to tell the compiler to trust the code (until it's restructured to play better with the compiler). Case 1 tells us that we are doing something wrong, and we were also doing it before; now it's only more evident, but we have clear tools to wrap that code in a concurrency-safe box.
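For case 2, a sketch of the kind of workaround I mean (the type is hypothetical): access is guarded by a lock, so the code is safe in practice, and @unchecked Sendable tells the compiler to trust it until it can be restructured:

import Foundation

final class Counter: @unchecked Sendable {
    private let lock = NSLock()
    private var value = 0

    func increment() -> Int {
        // All access to the mutable state goes through the lock.
        lock.lock()
        defer { lock.unlock() }
        value += 1
        return value
    }
}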
Is the case with, for example, SwiftNIO one of the 3 above?
The migration guide published on swift.org doesn't have all of the things that you are asking for here. However, it is available right now, and its repo on GitHub also includes a whole bunch of code with examples in executable form. I just wanted to make sure you were aware.
https://www.swift.org/migration/documentation/migrationguide
This is a pet peeve of mine: every time a new language feature comes along and somebody suggests using an attribute for it, it then turns out that there's a need to metaprogram over it, and the attribute doesn't support that. There's a long discussion of it here: Algebraic Effects. Notably, that discussion predates typed throws, and although the eventual solution for typed throws was slightly different, we can see how important moving throws/non-throws into the type system was for metaprogramming over AsyncSequence.