Do long-running Tasks have to yield?

In my macOS app written in Swift & SwiftUI I'm seeing SPODs in

#0 __psynch_cvwait ()
#1 _pthread_cond_wait ()
#2 -[NSCondition waitUntilDate:] ()
#3 -[NSConditionLock lockWhenCondition:beforeDate:] ()
#4 -[NSDocument(NSDocumentSaving) _waitForUserInteractionUnblocking] ()

when auto-save triggers while the app is doing computation. The work is divided into ProcessInfo.processInfo.processorCount batches which run in parallel. The calculations are done in async functions dispatched with Task.detached(priority: .high) and TaskGroup.addTask.

The old C++ code used pthreads and maintained a separate thread pool and work queue for these functions, which let them run on dedicated threads off the main thread until completion or cancellation.

I can avoid the SPOD by calling Task.yield during the computation, or by running fewer parallel Tasks. (Edit: or by removing priority: .high)

Since the ported Swift code relies on the Swift runtime to manage threads and task scheduling, is it necessary, as part of the design, that these functions call Task.yield periodically in order to maintain UI responsiveness? Doing so avoids the SPOD on auto-save, but I am worried about the task-switching overhead that introduces.
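For illustration, a periodically yielding version of such a compute kernel might look like the sketch below. The function name and the yield interval of 1,000,000 iterations are my own choices, not from the app; the interval is a tuning knob trading throughput against scheduling latency.

```swift
// Hypothetical compute kernel that cooperatively yields so the
// cooperative pool can interleave other work between chunks.
func computeSum(upTo n: Int) async -> Int {
    var sum = 0
    for i in 0...n {
        sum += i
        // Yield every million iterations (arbitrary interval) so a
        // suspension point exists and other tasks can be scheduled.
        if i % 1_000_000 == 0 { await Task.yield() }
    }
    return sum
}
```

Each `group.addTask { await computeSum(upTo: 1_000_000_000) }` would then give the scheduler regular opportunities to run other work on that thread.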

I'm new to Swift Concurrency after decades of writing to pthreads. Some questions:

  1. Is it the responsibility of my async functions to yield in order to maintain UI responsiveness?
  2. Is the behavior I'm observing to be expected?
  3. Might I be misinterpreting what I'm seeing and should look for another cause for the SPOD?
  4. Is there another way to create Tasks in Swift so that they do not block the main thread?
  5. Might this be a runtime/os bug in what autosave is doing?
  6. On further investigation, the problem only occurs with .high priority. Does that indicate a scheduling issue?

You'll often be able to get away with not doing this if you only have one long-running thing going, but in general I think that's a reasonable statement, yes. Task switching is, thankfully, cheaper than thread or queue switching.

Mostly (see answer to 3). Unlike concurrent dispatch queues, the cooperative queues backing Swift Concurrency will not spawn additional threads to maintain execution width when blocked. This avoids thread explosions and is generally great for performance, but does make them vulnerable to being blocked like this.

It's quite possible. In particular, I wouldn't expect the main actor/thread to be subject to this, since it's not a member of the pool that you're blocking. Similarly, I wouldn't expect the priority to matter here, since the main thread is automatically very high priority, so that's puzzling.

As mentioned in my response to 3, I wouldn't expect the main thread to be subject to this, so you'd want to look for places where the main thread blocks on work that's on a Swift Concurrency thread. If you can't find any of those, something else may be going on. If I were to speculate, possibly we have a missing hop_to_executor() somewhere that's causing work to run on the main thread; that should be easy to see in a spindump or lldb if so.

It's very possible there are improvements to be made here in the system frameworks, especially given how new Swift Concurrency is. Please don't hesitate to file a feedback on this.


Might this be the inappropriate actor stickiness in action, where the work hasn't properly detached from the main actor? Lowering the priority lets it work because the main actor's native items are processed first?

@RonAvitzur If you can post an abstracted version of how and where you're enqueuing the work, perhaps we can see something.


Yeah seems like a reasonable hypothesis. If we can get a full spindump or all threads backtrace it should show where the work is running.


In a new Xcode macOS Document App project, the following exhibits the same behavior:

struct ContentView: View {
    @Binding var document: AutosaveSPODDocument

    var body: some View {
        Button("Start") {
            document.text.append("+") // trigger autosave
            startLongRunningTasks()
        }
    }
}

func startLongRunningTasks() {
    Task.detached(priority: .high) {
        await withTaskGroup(of: Int.self) { group in
            for _ in 1...ProcessInfo.processInfo.processorCount {
                group.addTask {
                    var sum = 0
                    for i in 0 ... 1_000_000_000 { sum += i }
                    return sum
                }
            }
        }
    }
}
Submitted to Feedback Assistant at FB9852370.


What does SPOD mean?

Spinning Pointer of Death?

I understand that there are a limited number of threads used for Swift Tasks. As a result, if my compute Tasks do not yield, they will be serialized if their number exceeds available threads. That much is the desired behavior for my application.

The important question for me is: what else shares those threads?

If it is just a framework bug that Autosave depends on those threads, I can await a bug fix. If not, do I need to understand the implementation details behind Task Executors in order to ensure application responsiveness?

Potentially anything. They’re a shared resource just like global dispatch queues.

In the sense that anything in the system could potentially adopt Swift Concurrency, yes. As far as I know the cooperative pool threads aren't shared with non-cooperative-pool work though.


Right. It’s the same issue that affects the global dispatch queues: any code running in your process can submit work to them. Which is why they were made overcommit queues, but that just elevates the tragedy of the commons to the pthread level.

The default global queues are actually non-overcommit. Default serial queues are overcommit.


Will there be a way for my application to run these Tasks in a separate something so that they don't block anything else using Swift concurrency? Will custom executors do that?

All of the work done by tasks off of the main actor should be concurrent with the main actor. I don't know what's going on in your example; the UI thread really should not be blocked from updating. Even with a lot of high-priority tasks churning away, the UI thread will have higher priority and so should be getting an opportunity to update.

With that said, if your program has any latent priority inversions, doing a ton of computation in high-priority threads might exacerbate that quite badly. The way to investigate this is to figure out what the UI thread is doing instead of updating. But in your reduced example, I don't see what that could be.
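In the meantime, one way to keep heavy compute off the cooperative pool entirely (short of custom executors) is to run it on a dedicated dispatch queue and bridge back into async/await with a continuation. This is only a sketch under my own naming (`computeQueue`, `computeOffPool` are illustrative, not from the thread); the awaiting task suspends while the work runs, so no cooperative-pool thread is blocked.

```swift
import Dispatch

// Dedicated concurrent queue for compute work, separate from the
// cooperative pool backing Swift Concurrency. (Illustrative names.)
let computeQueue = DispatchQueue(label: "compute",
                                 qos: .userInitiated,
                                 attributes: .concurrent)

// Run `work` on the dedicated queue and resume the awaiting task
// with its result. The caller suspends rather than blocking a
// cooperative-pool thread.
func computeOffPool(_ work: @escaping () -> Int) async -> Int {
    await withCheckedContinuation { continuation in
        computeQueue.async {
            continuation.resume(returning: work())
        }
    }
}
```

Usage would look like `let sum = await computeOffPool { heavyBatch() }` inside each task. The trade-off is that you are back to managing thread width yourself, which is exactly what the cooperative pool normally handles for you.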