Is @concurrent now the standard tool for shifting expensive synchronous work off the main actor?

I reached for the new @concurrent attribute for the first time just now, and I want to verify that my understanding is correct and that I'm not missing a different recommended way to handle my situation.

I'm writing a method in a SwiftUI view type (MainActor-isolated by default), and it will perform some potentially lengthy synchronous work. I want the method to first update some UI state indicating that the work is in progress, so I need to start on the main actor; then I want to offload the expensive work so it doesn't tie up the main actor; and lastly I want to return to the main actor to reset the UI state.

My crude instinct was to wrap the synchronous work in a Task so as to free up the main actor, but then I wondered what the standard way would be to remain within structured concurrency for such a normal situation. Here's what I did (simplified a bit):

struct MyView: View {
    @State var exportState: ExportState? = nil

    enum ExportState {
        case exporting
        case exported(URL)
    }

    var body: some View { ... }

    // @MainActor <- implicit, due to conformance to View
    func exportData() async {
        exportState = .exporting
        let urlOfZip = await zipUserData()
        exportState = .exported(urlOfZip)
    }

    @concurrent func zipUserData() async -> URL {
        // ... expensive synchronous work, no suspension points;
        // returns the URL of the created zip archive
    }
}

The main thing that feels a little weird is marking zipUserData as async when in reality it has no suspension points.

My two questions are:

  1. Does this in fact achieve my goals? Have I missed something about how @concurrent works?
  2. Is there a different way to handle this situation that is better for some reason?
1 Like

Yes and no.

The @concurrent annotation will schedule your nonisolated functions to switch off of the main actor, yes. For work that you know must run off the main thread, go ahead and do that. In your particular case, where zipUserData is implicitly isolated to the main actor, you'll also need to explicitly mark it as nonisolated to get it to build.
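That is, a sketch of what the declaration would look like (the body is still the OP's placeholder comment):

// Explicitly nonisolated so @concurrent can move it off the main actor.
@concurrent nonisolated func zipUserData() async -> URL {
    // ... expensive synchronous work, no suspension points
}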

However, for really expensive synchronous work, you should consider using dispatch queues instead. Swift Concurrency's cooperative thread pool is built on the assumption that a thread of work will never fully block, where "block" includes "very long-running work that never suspends". So, if you have several long-running, non-suspending tasks running in Swift Concurrency, you can eventually reach the point where new tasks end up scheduled on the main actor (or you end up seemingly deadlocked). I've seen this happen on both Swift 6.1 and 6.2.

Dispatch queues can easily spin up new threads to handle more and more concurrent work. This can eventually lead to a thread explosion problem (too many threads competing for compute time, slowing progress to basically a halt), but that is mitigable: you could use serial dispatch queues (the default) so that each queue runs only one dispatch task at a time, and create separate queues for work that should run concurrently. You can also use higher-level abstractions such as the OperationQueue API to control the total number of concurrent operations.
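For instance, a rough sketch using OperationQueue (the queue name and the cap of 2 are arbitrary placeholders):

import Foundation

// Hypothetical example: cap how many expensive operations run at once
// instead of letting each one claim its own thread.
let exportQueue = OperationQueue()
exportQueue.name = "com.example.export-work"   // placeholder label
exportQueue.maxConcurrentOperationCount = 2    // at most two at a time

exportQueue.addOperation {
    // expensive synchronous work runs here, off the cooperative pool
}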

In your case, if zipUserData is expected to take multiple seconds to run, then you should run it using plain old dispatch queues. Profile it with an absurdly large amount of user data to get a real number. You can easily bridge from dispatch queues to Swift Concurrency using the Continuation API. If you're feeling really fancy, you could try implementing a global actor with a custom executor backed by a concurrent dispatch queue, but the only real reason to do that is if your blocking work does have suspension points and you want to call Swift Concurrency APIs from it; otherwise it's just way too much effort.
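A minimal sketch of that bridge might look like this (the queue label and the zip body are placeholders):

import Foundation

// Hypothetical serial queue dedicated to the zip work.
let zipQueue = DispatchQueue(label: "com.example.zip")

func zipUserData() async -> URL {
    await withCheckedContinuation { (continuation: CheckedContinuation<URL, Never>) in
        zipQueue.async {
            // Expensive synchronous work runs here, off the cooperative pool.
            let url = FileManager.default.temporaryDirectory
                .appendingPathComponent("export.zip")
            continuation.resume(returning: url)
        }
    }
}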

3 Likes

I believe the second review made nonisolated optional.

Can you elaborate a bit on this? I haven't learned much about the behind-the-scenes of Swift Concurrency, like the cooperative thread pool.

This is the first I've heard DispatchQueue being recommended as superior to Swift Concurrency in a permanent way (rather than just due to temporary limitations of Swift Concurrency). Have you heard any talk of future directions for Swift Concurrency that would prevent the need to use DispatchQueue?

3 Likes

Swift Concurrency runs on top of a default executor that is responsible for actually executing the underlying jobs created when you await something. On Apple platforms that default executor is backed by Dispatch, with a pool whose maximum width matches the processor count of the host machine (a ten-core system gets up to ten threads). So all the work you create within Swift Concurrency, everything you await, runs on those threads, unless it has its own custom executor (like MainActor) or runs outside of concurrency altogether (like DispatchQueue). The longer your work takes to finish, the longer one of those threads is tied up doing one thing. Technically, this is fine, as long as you can make forward progress (i.e. you aren't waiting on some async construct like a semaphore), but if you have a large amount of work that never yields the threads back to the concurrency runtime, like long-running parsing, your app will eventually fill all the threads and hang. It won't deadlock, it will just feel that way until the long-running work clears out so other suspended jobs can resume.

If, instead, your long-running work is performed on outside queues or another concurrency primitive, you keep the concurrency executor clear for other work. The easiest way to do this is by wrapping a DispatchQueue in a continuation. Really, this shouldn't be the first thing you reach for unless you know you have a lot of work that will always run for a long time and you can't insert Task.yield() to break it up and allow other concurrency work to proceed. For something like a zip operation, which can take an arbitrary amount of time and is entirely synchronous, it might be a good idea.

4 Likes

It feels like there should be a way to handle this within Swift Concurrency, maybe even without the programmer having to do anything. For example, couldn’t the global executor detect when no thread has yielded for more than half a second or something and if so spawn a new temporary thread which will be destroyed once things clear up?

1 Like

No, the concurrency runtime is specifically engineered to avoid the thread explosion issue that could cause, unlike DispatchQueue.

Swift should offer an easy way to throw work off into a queue, but you can get most of the way there on your own with a continuation.

3 Likes

Sweet! That makes sense, as @concurrent should also imply nonisolated. My confusion arose because the proposal still hadn't been updated as of when I wrote my earlier response.

Swift Concurrency uses in-process cooperative multitasking to schedule tasks. Because this is cooperative, if an async function never yields (uses await), it will not be preempted by another async function. Sure, the thread can be preempted, but not the actual work itself. A function that is just an infinite loop that never yields will tie up the thread forever.

Swift Concurrency spins up multiple threads for its thread pool. On some platforms (like Wasm), this pool has only one thread because Wasm doesn't run in a multithreaded environment. On other platforms, the thread pool size is generally related to the number of CPU cores on the system, but that's not a guarantee.

As an example, if the system determines that the correct number of threads to use is 2, then once a long running task that doesn't even yield gets scheduled, every other task will run on the first (main) thread. If a second non-yielding long task gets scheduled, then even if it's annotated @concurrent, it'll still run on the main thread. On iOS and similar platforms, this hang can cause the system to kill your app due to being unresponsive.

Edit: Per @John_McCall's reply here, the above example is incorrect and my mental model was wrong.

It's not that Grand Central Dispatch (dispatch queues) is superior to Swift Concurrency; they're different models for handling concurrency (and, in fact, Swift Concurrency is implemented using GCD on platforms where GCD is available), each with its own positives and negatives.

Swift Concurrency has several benefits (far nicer syntax, no thread explosion problem, etc.), with one of the main drawbacks being the potential for hanging when you have many long-running tasks. For the vast majority of the work you do, the drawbacks are worth dealing with. It's just that for certain specific problems and edge cases you need something else, and the Continuation API exists for bridging those "something else"s back to Swift Concurrency.

I think GCD is a bit of a red herring here. What you need is forward progress plus cooperative suspension and cancellation. If it’s at all possible to allow your work to be preempted, it’s as easy as sprinkling in some await Task.yield() and try Task.checkCancellation() lines wherever you want your process to pause and take a breath (and maybe let some other kids have a turn on the thread) before you press on.
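For instance, a minimal sketch (the chunk type and the byte-summing are just stand-ins for real work):

// Break a long computation into slices that yield and honor cancellation,
// so the cooperative pool keeps making forward progress.
func processAll(_ chunks: [[UInt8]]) async throws {
    for chunk in chunks {
        try Task.checkCancellation()             // stop promptly if cancelled
        _ = chunk.reduce(0) { $0 &+ Int($1) }    // stand-in for real work
        await Task.yield()                       // let other tasks have a turn
    }
}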

7 Likes

@jeremyabannister – It’s not “superior”, per se. But as others have noted, Swift Concurrency imposes a contract that we must never impede forward progress on the cooperative thread pool; that contract is spelled out in SE-0296.

Now, SE-0296 doesn’t define what the “separate context” for blocking work should be, but WWDC 2022’s Visualize and optimize Swift concurrency does; it explicitly suggests DispatchQueue.

If we were writing our own long-running computational algorithm, we theoretically could remain within the cooperative thread pool and periodically await Task.yield() within our loop, but if you’re calling a slow and synchronous function, that may not be an option. (And I must note that you pay a significant performance penalty for this Task.yield approach.)

All of this having been said, if it is a short-lived blocking routine (i.e., you are trying to resolve a fleeting hitch in the UI), I personally just run it in a @concurrent function (or an actor, or what have you) and call it a day. This “move it out of the cooperative thread pool” pattern generally only becomes critical when the task is extremely slow and/or you are running a lot of them in parallel. As SE-0296 said, you really reach for this “separate context” pattern when faced with a “scalability problem”. Bottom line, you have to use your own good judgment.

I haven’t.

5 Likes

I would advise against usage of DispatchQueue if you'd like your code to be portable. import Dispatch is only available on a subset of platforms that Swift supports.

2 Likes

It appears that the concurrency overview in the Swift book does not suffice as a practical guide, and no such guide is listed in the documentation overview.

I think there really is a need for it on swift.org. Maybe some of you could somehow put together your knowledge for that? (Sorry I cannot help because I am also just finding my way into this topic.) Would be great! :+1:

@sspringer As others have pointed out, the knowledge to approach this topic is scattered across various resources like the Swift Evolution proposals and the WWDC videos related to the topic of Swift concurrency. One has but to look to find them. I can provide some resources if you’d like.

It is not only about me, it is that I think that a good coherent documentation on how to use Swift concurrency in practice, including what has been written e.g. in this topic, would be a great thing to have on swift.org.

This is not true; Swift concurrency does not schedule other work onto the main thread just because it’s currently available. If you have a long-running operation that’s blocking the main thread, it’s because you ran it there.

4 Likes

Technically, isn't that a platform decision? If a platform implements its executors on a fixed width pool, and also uses one of those threads as the executor for MainActor, then they aren't violating any language requirement, are they?

And even in the general case, that's not really true anymore, is it? With @MainActor as the default, what was once run on the default executor will now be run on MainActor, automatically. In that case it's not so much, "because you ran it there", but because "you forgot to tell the system not to run it there". With the new project settings in Xcode 26, using existing patterns (such as awaiting data from the network and then parsing it), will now run on MainActor, unbeknownst to the user.

The original claim was specifically about Apple UI programming, in the context of a discussion about needing to use dispatch queues rather than @concurrent. Using @concurrent appropriately in an iOS app to offload work from the main actor will prevent your app from being killed for lack of responsiveness because the main thread will still be able to process events.

2 Likes

I think for the vast majority of iOS use-cases here people are overcomplicating things.

  1. Modern computing devices are unbelievably powerful. Have you verified that there’s actually a problem? “Just do it synchronously” works remarkably often, and “async but on the main actor” covers a surprisingly large chunk of the rest
  2. If there is a problem, have you used Instruments and related tools to profile and optimize the code? Fast is almost always superior to slow-but-async, for battery reasons among others
  3. Having verified that you really do need to be async: do you need to be parallel? Phones have had 6 cores for quite a while; occupying one of them for a bit is no big deal
  4. If you really do need to go 6-wide with long-running tasks, stick some yields in
  5. If you really need >6 long running uninterruptible tasks, is it actually better for your user experience to try to make partial progress on all of them rather than getting some finished?
  6. If you have >6 long-running uninterruptible tasks that are also latency-sensitive and can usefully make partial progress, a) your poor user’s battery, b) now you can finally start considering workarounds like using libdispatch to switch from cooperative to preemptive multitasking
  7. …but if you have unbounded N of them, you can’t use libdispatch to go wide either because you’ll hit the thread cap and things will start falling over. So you need specifically 7-63 tasks of this nature for this to be a viable approach.

Is this actually a situation a lot of y’all are in?

5 Likes

I'll throw it out there for consideration: Task.detached {} should perhaps spin up a dedicated thread.

1 Like

There's already a mechanism for task executors, and it should be fairly straightforward to create an executor owning a dedicated OS thread:


import Foundation

// A minimal executor that owns one dedicated OS thread. An NSCondition
// guards the job queue and puts the thread to sleep while it's empty.
final class ThreadExecutor: TaskExecutor, @unchecked Sendable {

    private let condition = NSCondition()
    private var jobQueue: [UnownedJob] = []

    init() {
        Thread { [self] in
            while true {
                condition.lock()
                while jobQueue.isEmpty {
                    condition.wait()          // sleep until work arrives
                }
                let job = jobQueue.removeFirst()
                condition.unlock()

                job.runSynchronously(on: asUnownedTaskExecutor())
            }
        }
        .start()
    }

    func enqueue(_ job: UnownedJob) {
        condition.lock()
        jobQueue.append(job)
        condition.signal()                    // wake the worker thread
        condition.unlock()
    }
}
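
You'd then opt a task into it with an executor preference, roughly like this (this needs the Swift 6 / SE-0417 task executor APIs; crunchLotsOfData is a placeholder):

let executor = ThreadExecutor()

// The task's non-isolated async work now runs on the dedicated thread.
Task(executorPreference: executor) {
    crunchLotsOfData()   // placeholder for the long-running blocking work
}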