This is a general idea I'm posting to gather feedback from various experts. It's more a need than a solution at this point, and the solutions discussed may not be possible or correct.
Problem
Today, Swift's concurrency runtime operates on an execution pool (queues, threads, etc.) whose width is fixed to the number of logical processors, plus perhaps an additional executor for the main actor. While this protects against thread explosion, it makes intensive work rather fraught once the desired parallelism exceeds the available thread count. Once that occurs, awaits may not be resumed in a timely manner, eventually affecting the main actor and causing app hiccups, effectively hanging the app until the backlog of work finishes.
To deal with this issue directly, Swift offers `Task.yield()`, which allows the runtime to execute other work before resuming the long-running task. However, this solution is entirely manual, requiring developers to sprinkle yields throughout their code in order to maintain their app's responsiveness. Even where this is possible, it requires significant expertise to do well, and may not even be compatible with work performed by other dependencies.
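To make the manual nature concrete, here's a sketch of what yielding typically looks like in practice (the function and the yield interval are my own illustration; the right interval is entirely workload-dependent):

```swift
// A long-running computation that periodically yields so other tasks,
// including main-actor work, get a chance to run on the shared pool.
func checksum(of data: [UInt8]) async -> UInt64 {
    var hash: UInt64 = 0
    for (index, byte) in data.enumerated() {
        hash = hash &* 31 &+ UInt64(byte)
        // Arbitrary interval: yield too often and you waste time,
        // too rarely and you starve other tasks anyway.
        if index % 10_000 == 0 {
            await Task.yield()
        }
    }
    return hash
}
```

Nothing checks that these yields are placed well, or placed at all, which is exactly why the approach doesn't scale past expert hands.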
Current Solution
The one partial solution currently available for execution outside the standard thread pool is a custom actor executor. Unfortunately, Swift exposes no equivalent of a fixed-width executor that can live under such an actor, forcing developers to build one manually. Additionally, an actor encapsulates only a single context, so it doesn't help control the execution of other tasks created during any async work. This can lead to surprising behavior, where work thought to be performed on an actor is actually behind an await that executes back on the default executor. Custom executors are also rather low level and unsafe, making them inappropriate as a general solution. A more general, flexible solution is desired.
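For context, the workaround looks roughly like this on recent Apple platforms, where `DispatchSerialQueue` conforms to `SerialExecutor` (the actor and function names are illustrative, not a real API):

```swift
import Dispatch

// An actor pinned to its own serial queue rather than the default executor.
actor RenderPipeline {
    private let queue = DispatchSerialQueue(label: "render-pipeline")

    nonisolated var unownedExecutor: UnownedSerialExecutor {
        queue.asUnownedSerialExecutor()
    }

    func render() async {
        // Actor-isolated code runs on `queue`…
        let frame = await decodeFrame()
        // …but `decodeFrame`, a nonisolated async function, ran on the
        // default global executor: the surprise described above.
        display(frame)
    }

    private func display(_ frame: Frame) { /* … */ }
}

struct Frame {}
func decodeFrame() async -> Frame { /* intensive work */ Frame() }
```

Even done correctly, this only pins the actor's own isolated code; everything awaited from it still competes for the shared pool.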
Future Solutions?
First, Swift should expose the various fixed-width executor creation methods that are used under the hood, as something compatible with custom executors but perhaps more generally useful as well. We have `DispatchQueue.concurrentPerform`, but that allows arbitrary width and leaves the system to dynamically limit execution regardless of the work performed. For `DispatchQueue`, it would be nice to offer a third queue type, in addition to `.serial` and `.concurrent`, that is automatically limited to the processor count.
Second, Swift should add syntax to throw work onto such a custom context. I'm not sure it's possible, but the core feature here would be scoped structured concurrency, where not just the top-level API or a single actor runs in the custom context, but so does every child and unstructured task created within it. I have no idea whether this is possible or what issues it may otherwise have, but here's what I imagine it would look like.
```swift
// Or .logicalCoreCount, .performanceCoreCount, .efficiencyCoreCount, whatever may be appropriate
let executor = FixedWidthExecutor(.explicitWidth(4)) // Runtime limited to core count?

// In a regular context.
await withExecutorContext(executor) {
    await intensiveTaskOnOtherExecutor()
}
```
These executors could be used for long running computations, disk or other blocking hardware access, or could perhaps become the basis of a monitoring service needed to ensure cleanup.
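Absent such an executor, the width limit itself can be approximated today with a task group that keeps at most N children in flight. The helper below is my own sketch, and it illustrates the gap: it bounds only its direct children, not any tasks they spawn, which is exactly what the scoped behavior above would fix:

```swift
// Run `work` over `items` with at most `width` children in flight at once.
func mapLimited<T: Sendable, R: Sendable>(
    _ items: [T],
    width: Int,
    _ work: @escaping @Sendable (T) async -> R
) async -> [R] {
    await withTaskGroup(of: (Int, R).self) { group in
        var results = [R?](repeating: nil, count: items.count)
        var next = 0
        // Seed the group with up to `width` children.
        while next < min(width, items.count) {
            let i = next
            group.addTask { (i, await work(items[i])) }
            next += 1
        }
        // Each completion admits one more child, preserving the cap.
        while let (i, value) = await group.next() {
            results[i] = value
            if next < items.count {
                let j = next
                group.addTask { (j, await work(items[j])) }
                next += 1
            }
        }
        return results.map { $0! }
    }
}
```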
So, is this at all feasible or desirable?