I don't think it's accurate to describe cooperative multitasking as a mechanism for improving raw execution speed at all. In Swift's concurrency model, it is part of a means to enable structured programming, with the result that you, as a developer, are more efficient at writing bug-free, well-organized (and thus well-running) code. A performance gain is a potential secondary benefit of this (when you can easily parallelize things).
Perhaps it helps a little to look at the history of the term. While I don't know when it was first academically conceptualized, the earliest systems used it to create the illusion of programs running in parallel (which is, in part, why we still argue about the difference between concurrency and parallelism today).
A good example is the old classic MacOS: processes (i.e. applications) ran "in parallel" by using cooperative multitasking. Every time an application executed a "system call", this could introduce a yielding point (the equivalent, in Swift's model today, of an `await`).
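For comparison, here is a minimal sketch of the modern analogue (the names are hypothetical, and the per-chunk work is just a stand-in): in Swift, every `await` is a potential suspension point, and `Task.yield()` lets you offer such a point explicitly:

```swift
// A minimal sketch, assuming some CPU-bound per-chunk work:
// each `await` is a potential suspension point where the
// cooperative scheduler may run other tasks, much like the
// old "yield on system call" behavior.
func processChunks(_ chunks: [[UInt8]]) async -> Int {
    var total = 0
    for chunk in chunks {
        total += chunk.count // stand-in for real work
        await Task.yield()   // cooperatively let other tasks run
    }
    return total
}
```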
The obvious downsides are that a crashed process could take down the entire system and that a "greedy" app could freeze everything up.
The system worked well (crashes aside) as long as every process behaved and did its best to yield whenever it could: everyone was supposed to cooperate.
To the rescue came preemptive multitasking: basically, the underlying OS (pretty much the kernel) got the power to "enforce yielding". It can simply "freeze" a process, let another one run for a while, and then "unfreeze" the first one. To each process, it looks like it is running all alone; it never needs to actively "cooperate" with another process or with the system.
Note that for this to work, a lot of infrastructure is needed; AFAIK the support goes down to the hardware itself (or to put it differently: on older chip architectures preemptive multitasking was not feasible, perhaps even downright impossible). There were systems that supported it, but I believe at least "regular" home computers, e.g. the Mac, did not have preemptive multitasking.
Now, Swift concurrency (and other concurrency concepts in other languages, btw) "revives" the cooperative multitasking concept, just for threads (or abstractions above them).
The big benefit is that by introducing explicit yielding points into your code, you can write top-to-bottom code flows that include long-running "side tasks" without blocking execution.
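As a minimal sketch (the names `fetchReport` and `showReport` are made up, and `Task.sleep` stands in for real work), such a flow could look like this:

```swift
// A hypothetical long-running "side task".
func fetchReport() async throws -> String {
    // The `await` is an explicit yielding point: the task suspends
    // here without blocking its underlying thread.
    try await Task.sleep(nanoseconds: 1_000_000_000)
    return "report"
}

func showReport() async throws {
    print("start")
    let report = try await fetchReport() // suspends, does not block
    print("got \(report)") // resumes here; still reads top to bottom
}
```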
The way I see it, that is the main benefit of the whole shebang.
Yes, to achieve this, behind the scenes the runtime parallelizes tasks, but that is more a means to an end than the end itself. Sure, if you have work that can greatly benefit from parallelization, it is now also easier to write this in a readable, efficient way, but that's a secondary win in my book.
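For instance, a minimal sketch of that secondary win using `async let` (with hypothetical helpers `loadImage` and `loadMetadata`):

```swift
// `async let` starts both child tasks immediately, so independent
// work may run in parallel; the `await` is the structured join point.
func loadImage() async -> String { "image data" }  // hypothetical
func loadMetadata() async -> String { "metadata" } // hypothetical

func loadPage() async -> (String, String) {
    async let image = loadImage()
    async let metadata = loadMetadata()
    return await (image, metadata)
}
```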
So to sum this up, I think the "performance" gain of Swift concurrency is a human one: it is "more performant" to write and, especially, to read code that includes asynchronous operations. It is more structured (top to bottom) and has less boilerplate devoted to avoiding blocking one flow with another.
Even the unstructured `Task` helps here, as it makes the points where you spawn off a "side piece of code" a little more distinct than a simple scope change, like an `if` (this is, ofc, debatable, but the fact that we have a specific type to hold code started "in parallel" makes it easier, imo).
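To illustrate (a sketch with a hypothetical `refreshCache` function): the `Task` initializer marks exactly where the concurrent work starts:

```swift
func refreshCache() async { /* ... */ } // hypothetical async work

func onButtonTap() {
    Task {
        // Runs concurrently; `onButtonTap` does not wait for this.
        await refreshCache()
    }
    // Execution continues here immediately.
}
```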