Concurrency and CPU-bound tasks

Over the past two weeks, I've been reading through the concurrency proposals and catching up on concurrency-related WWDC videos. One thing I didn't see discussed much is this: how does Swift Concurrency work with CPU-bound tasks, as opposed to I/O-bound tasks? Many of the examples describe asynchronous I/O, and that seems like it will work very well with the cooperative threading model.

But for CPU-bound tasks (let's say a parallel sort we want to distribute over multiple CPUs), is async/await a good match? Is there a way for tasks to yield to other tasks after finishing a chunk of work? Will they starve other tasks of CPU time?
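For concreteness, the kind of thing I have in mind looks roughly like this (a minimal sketch using a chunked reduction rather than a full sort; `parallelSum` and the chunking strategy are just illustrative):

```swift
import Foundation

// Rough sketch of CPU-bound work spread across a task group; a chunked
// reduction stands in for the parallel sort. Names here are illustrative.
func parallelSum(_ values: [Int]) async -> Int {
    let chunkSize = max(1, values.count / ProcessInfo.processInfo.activeProcessorCount)
    return await withTaskGroup(of: Int.self) { group in
        var start = values.startIndex
        while start < values.endIndex {
            let end = min(start + chunkSize, values.endIndex)
            let chunk = Array(values[start..<end])
            group.addTask { chunk.reduce(0, +) }   // CPU-bound child task
            start = end
        }
        return await group.reduce(0, +)            // combine partial results
    }
}
```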

Or are we better off sticking to threads and GCD queues for this kind of work?

A task that does a long, uninterrupted period of computation can starve other tasks, yeah. There's a function in the Structured Concurrency proposal, currently called Task.yield() but likely to be renamed, which allows other work to interleave.
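For illustration, a long computation might hand control back every so often like this (a rough sketch; `Item`, `Output`, and `process(_:)` are placeholders):

```swift
// Rough sketch: yield periodically during CPU-bound work so other tasks
// get a turn on the cooperative pool. Item, Output, and process(_:) are placeholders.
func crunch(_ items: [Item]) async -> [Output] {
    var results: [Output] = []
    for (index, item) in items.enumerated() {
        results.append(process(item))        // CPU-bound work on one chunk
        if index % 1_000 == 0 {
            await Task.yield()               // let other tasks interleave
        }
    }
    return results
}
```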


Is giving a CPU-bound task its own actor sufficient to ensure that it won’t starve other tasks?

What about wrapping it in its own Task { … } or Task.detached { … }?
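I.e., something along these lines (a rough sketch; `expensiveWork()` is a placeholder):

```swift
// Rough sketch; expensiveWork() is a placeholder for a long-running
// CPU-bound computation.
let handle = Task.detached(priority: .userInitiated) {
    expensiveWork()
}
let result = await handle.value
```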

Oh, nice - I hadn't seen Task.yield. Thanks!

Overall, would you say doing CPU-bound parallelism is stretching the intended purpose of Swift Concurrency? Or is it okay as long as we include some well-placed yields?


By default, actors use the cooperative thread pool. CPU-bound actors might be a good use case for a custom executor that manages threads for compute tasks.
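To make that concrete: custom executors were not part of Swift 5.5, but as a rough sketch assuming the shape of the API that eventually shipped (the SerialExecutor protocol), a compute actor could be pinned to its own dispatch queue roughly like this (`ComputeExecutor` and `Cruncher` are illustrative names):

```swift
import Dispatch

// Sketch of a serial executor backed by a dedicated dispatch queue, so a
// CPU-bound actor's work stays off the shared cooperative pool.
final class ComputeExecutor: SerialExecutor {
    private let queue = DispatchQueue(label: "compute", qos: .userInitiated)

    func enqueue(_ job: UnownedJob) {
        queue.async {
            job.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }
}

actor Cruncher {
    private let executor = ComputeExecutor()

    // Route all of this actor's jobs to the dedicated compute queue.
    nonisolated var unownedExecutor: UnownedSerialExecutor {
        executor.asUnownedSerialExecutor()
    }

    func crunch(_ input: [Double]) -> Double {
        input.reduce(0, +)   // stand-in for heavy computation
    }
}
```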


@Joe_Groff It does seem like custom executors could help. Is it possible they will make it into Swift 5.5, or is that on the Swift 6 track?

Thanks, that’s helpful. Checking that I understand correctly: merely creating separate actors will allow long-running partial tasks (i.e., code that runs for a long time without awaiting or yielding) to execute in parallel, but will not introduce extra threads even if all available threads fill up with such long-running partial tasks? (And IIRC there is roughly one thread per CPU core in the default cooperative pool?)

And in principle, the runtime could observe that the thread pool is clogged with such tasks and automatically spin up new threads so that preemptive multitasking lets the blocked tasks continue, but that runs against the goal of avoiding thread explosions?

Right. Adding new threads to maintain forward progress is how GCD works, for instance, and it has long been a source of unfixable performance problems because it is easy to accidentally trigger unbounded thread explosion.
