I'm currently delving deeper into Swift and have a question regarding the execution of tasks on separate threads. I'm curious to know if Swift itself determines whether a task is automatically executed on a separate thread and, if so, under what circumstances this occurs.
Can someone explain how Swift handles the distribution of tasks across threads? Are these decisions made automatically by the language, or are there specific guidelines or mechanisms that developers should consider to optimize execution efficiency? Should I, as the developer, even care which thread a task is executed on, given that Task.detached should be avoided if possible?
And what if the task runs in the context of a MainActor? Does Swift use separate threads for tasks if needed, or is there no way to run a Task on a separate thread without using Task.detached?
It's a hard question to answer in its current form, because threads are essentially an implementation detail of Swift concurrency.
Looking at Swift concurrency separately from actor semantics (which are mostly a layer built on top of concurrency), it breaks down this way:
Each task is divided into pieces commonly called "jobs", consisting of sections of synchronous code bounded by statements marked with the await keyword. In Swift, tasks are not really the units of asynchronous execution; jobs are.
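For illustration, here's a hypothetical async function (prepareRequest, fetch, and decode are made-up stand-ins, stubbed out so the sketch compiles): each await marks a suspension point, and the stretches of synchronous code between them are the jobs.

```swift
// Hypothetical helpers, stubbed out so the sketch stands on its own.
func prepareRequest() -> String { "GET /items" }
func fetch(_ request: String) async -> String { request + " -> 200 OK" }
func decode(_ response: String) -> Int { response.count }

func loadAndShow() async {
    let request = prepareRequest()        // job 1: synchronous code up to the first await
    let response = await fetch(request)   // suspension point: job 1 ends here
    let length = decode(response)         // job 2: synchronous code after resuming
    await Task.yield()                    // another suspension point: job 2 ends
    print("decoded \(length) characters") // job 3
}
```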
Jobs can be generated sequentially, in the order you write code within a task, or concurrently by APIs designed to create new tasks (and hence multiple new jobs).
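As a sketch (fetch(_:) is again a hypothetical stand-in): awaiting calls one after another keeps the jobs sequential within the task, while async let spawns child tasks whose jobs can run concurrently.

```swift
// fetch(_:) is a hypothetical stand-in for some async work.
func fetch(_ name: String) async -> String { "loaded \(name)" }

func loadAll() async {
    // Sequential: these jobs belong to the same task and run one after another.
    let a = await fetch("a")
    let b = await fetch("b")

    // Concurrent: async let creates child tasks, so their jobs can be
    // scheduled at the same time.
    async let c = fetch("c")
    async let d = fetch("d")

    print(a, b, await c, await d)
}
```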
Each job is associated with a specific "executor" that controls its execution context. Originally, there were just two executors: one for the "main" context (which causes its jobs to be executed sequentially on the main thread) and one for the "global concurrent" context (which causes its jobs to execute concurrently on a pool of threads, not including the main thread).
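A rough, hypothetical sketch of those two contexts (the function names are invented for illustration):

```swift
// Hypothetical functions illustrating the two original contexts.
@MainActor
func updateUI(with total: Int) {
    // Main-actor-isolated: this job runs on the main executor,
    // i.e. sequentially on the main thread.
    print("showing \(total)")
}

// Nonisolated async: its jobs run on the global concurrent
// executor's thread pool, not on the main thread.
func crunchNumbers() async -> Int {
    (1...1_000).reduce(0, +)
}

func refresh() async {
    let total = await crunchNumbers() // runs on the pool
    await updateUI(with: total)       // hops to the main executor
}
```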
Now there are other possible executors (a non-actor executor that runs jobs not associated with any actor, and custom executors that run jobs in a custom way). These map onto their own execution contexts.
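For completeness, here's a minimal sketch of the custom-executor route, assuming the Swift 5.9 SerialExecutor/ExecutorJob API (QueueExecutor and Downloader are made-up names): an actor can route its isolated jobs onto a dispatch queue of your choosing.

```swift
import Dispatch

// A custom serial executor that runs its jobs on a plain dispatch queue.
final class QueueExecutor: SerialExecutor {
    private let queue = DispatchQueue(label: "custom.executor")

    func enqueue(_ job: consuming ExecutorJob) {
        let unownedJob = UnownedJob(job)
        queue.async {
            unownedJob.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }
}

// An actor that opts into the custom execution context.
actor Downloader {
    private static let executor = QueueExecutor()

    nonisolated var unownedExecutor: UnownedSerialExecutor {
        Self.executor.asUnownedSerialExecutor()
    }

    func download() { /* this job runs on QueueExecutor's queue */ }
}
```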
Custom executors aside, you therefore know where jobs are going to execute: either on the main thread, or on a pool of non-main threads.
There is no contractual behavior established for the concurrent thread pool, AFAIK, except that on Apple platforms there are about as many threads as CPUs. That's because Swift jobs are always expected to make progress (i.e. not block significantly), even if they take a while to finish. Under that expectation, having more threads would not help.
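That's also why blocking a pool thread is the thing to avoid. A sketch of the anti-pattern versus the cooperative alternative (the function names are hypothetical):

```swift
import Foundation

func badWait() async {
    // Blocks one of the pool's few threads for a whole second; no other
    // job can run on that thread, which breaks the "always make progress"
    // expectation.
    Thread.sleep(forTimeInterval: 1)
}

func goodWait() async throws {
    // Suspends the current job instead: the thread is immediately free
    // to run other jobs until the sleep's continuation resumes this task.
    try await Task.sleep(nanoseconds: 1_000_000_000)
}
```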
There's no particular ordering of jobs generated from different tasks. Even on the main actor, when your code generates additional jobs (such as at a Task { … } construct that inherits the main actor's executor), there's no guarantee of when those additional jobs execute relative to the sequence of jobs in the code they were generated from.
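A hypothetical illustration of that lack of ordering:

```swift
@MainActor
func kickOff() {
    Task { print("A") } // inherits the main actor's executor
    Task { print("B") }
    print("C")
    // "C" always prints first, because it's part of the job that's already
    // running. When A and B execute relative to any later main-actor jobs
    // isn't guaranteed; only the ordering of jobs within each individual
    // task is.
}
```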
Actors introduce an additional constraint: actor-isolated jobs must be queued and executed sequentially per actor, regardless of which thread they end up on. This means that some jobs in the global concurrent context may wait on other jobs, even if threads are available.
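For example, with a made-up actor like the one below, the isolated jobs are serialized even when they're submitted from many child tasks running on the pool:

```swift
actor Counter {
    private var value = 0

    func increment() -> Int {
        value += 1
        return value
    }
}

func hammer(_ counter: Counter) async -> Int {
    return await withTaskGroup(of: Int.self) { group in
        for _ in 0..<100 {
            // The child tasks' jobs may start on any pool thread, but the
            // actor's executor runs the isolated increment() jobs one at a
            // time, so some of them wait even when threads are free.
            group.addTask { await counter.increment() }
        }
        var highest = 0
        for await result in group { highest = max(highest, result) }
        return highest // always 100
    }
}
```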
Someone else can jump in if I've forgotten anything important, but I think that's basically it. In terms of optimizing, about the only thing you really need to think about is inter-job "hops" between different executors. Those involve a context switch, and executor hops on a hot code path could add up to a performance problem, I suppose.
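As a hypothetical example of the kind of hot path where such hops add up (ListModel, format, and formatAll are invented for illustration):

```swift
@MainActor
final class ListModel {
    var titles: [String] = []

    func refreshOneAtATime() async {
        for index in 0..<1_000 {
            // Each iteration hops from the main executor to the pool and
            // back: two context switches per item on a hot path.
            titles.append(await format(index))
        }
    }

    func refreshBatched() async {
        // One hop out and one hop back for the whole batch.
        titles = await formatAll(0..<1_000)
    }
}

// Nonisolated async: these run on the global concurrent executor.
func format(_ value: Int) async -> String { "item \(value)" }
func formatAll(_ values: Range<Int>) async -> [String] { values.map { "item \($0)" } }
```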