Benefits of cooperative multitasking

Hey everyone - I have been thinking about the theory behind cooperative multitasking, where we gain program performance by minimizing the time that threads spend doing nothing. In theory, the thread pool should constantly be filled up with work, making everything more efficient and fast.

Question: can somebody please provide a piece of code, or a project, in which we can clearly see the benefit of cooperative multitasking in Swift? Or maybe some reproducible comparison demonstrating the superiority of cooperative multitasking?

I'm all for it. But there is a contradiction here. You suggest doing much of the work in a thread pool, that is, in parallel. But cooperative multitasking is more about single-threaded concurrency without parallelism.

IMHO the benefit of cooperative multitasking shows itself on hardware with limited resources (a small number of processor cores, a tiny battery in a wearable device). Intense parallelism, on the other hand, is more in demand in other types of applications, like servers and some desktop apps, and to some extent mobile apps.

1 Like

To be honest, Swift concurrency is a strange beast to me. On one hand it uses cooperative multitasking, and on the other it uses a thread pool. Maybe that makes it scale better across different types of hardware.

1 Like

This is a significant distinction, but it's not the distinction that defines "cooperative" multitasking.

At least in the Apple world, cooperative multitasking means non-preemptive multitasking. Preemptive multitasking is about ejecting the current execution from a thread on a schedule, while cooperative multitasking is about ejecting the current execution from a thread at a point chosen by the current execution. Both deal with the behavior of single threads.

I dunno if this is the important takeaway on this axis, either. The only time that threads spend time doing nothing is when they're inside a blocking call of some kind, including stuff like synchronous locks.

Preemptive vs. cooperative affects performance more in the arena of thread-switching overheads vs. progress of each task assigned to a thread.

Swift concurrency uses cooperative kinds of behaviors to share a small pool of threads across an arbitrary number of tasks. That makes blocking (which is a very uncooperative behavior) a Bad Thing™ for Swift concurrency. But cooperation without any blocking is probably more efficient than preempting with or without blocking.
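To make the "blocking is uncooperative" point concrete, here is a minimal sketch (the function name and iteration counts are my own invention, not from the thread): a CPU-bound loop that periodically calls `Task.yield()` so it doesn't monopolize one of the pool's few threads.

```swift
import Foundation

// A CPU-bound task that cooperates by yielding periodically.
// Without the `await Task.yield()`, this loop would occupy one of the
// cooperative pool's threads for its entire duration, starving other tasks.
func cooperativeSum(upTo n: Int) async -> Int {
    var total = 0
    for i in 1...n {
        total += i
        // Every 10_000 iterations, give other tasks a chance to run.
        if i % 10_000 == 0 {
            await Task.yield()
        }
    }
    return total
}

let result = await cooperativeSum(upTo: 100_000)
print(result)  // 5000050000
```

A synchronous lock or `sleep()` inside such a loop would, by contrast, hold the thread without letting any other task make progress on it.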

Preempting OTOH provides better progress on tasks overall, except when it doesn't. :slight_smile:

5 Likes

It's either cooperative, or preemptive, regardless of platform. Confusingly, Swift Concurrency uses both.

The only benefit of Swift Concurrency I see is "write once, use anywhere": if my app runs on a 1-core device, it will use just cooperative multitasking, and if it runs on a 256-core device, it will use multiple threads very efficiently. I don't have control over the cooperative/preemptive details, but the code is more high-level and will scale as needed.

1 Like

I'm not sure the question of cooperative multitasking being necessarily superior in performance terms is valid [1]. The cooperative part just allows you, as a developer, to gain more control over when something runs (roughly speaking), but it also puts the responsibility on you to handle long-running/blocking work carefully, so that your tasks cooperate and give others time to run (regardless of the number of threads available on the device). The thread pool isn't necessarily fully busy either, because your code can actually end up running concurrently on a single thread, which is beneficial as it can reduce context switches.


  1. In the context of Swift Concurrency vs GCD ↩︎

1 Like

ughhhh okay, sooo... Is there any way to prove that there are cases where we would see a performance increase from migrating from GCD to Swift concurrency?

What do you mean? Can you please elaborate? This is the first time I'm hearing this

Cooperative vs. preemptive applies if we are talking about OS-managed threads (or, worse, processes), sure. But I think when people think "cooperative" these days, what they picture is allocating a fixed number of threads based on the available CPU cores (virtual or physical, where it matters), with each thread managing a series of jobs cooperatively (software-managed scheduling).

In this case you avoid a lot of the cost of regular context switching between system threads (saving and restoring thread state, and trapping into the OS and back). I believe this is the reason the approach grew popular in console development (and you can see it in recent Java releases, which added virtual-thread support: Virtual Threads).

Examples:
https://m.youtube.com/watch?v=HIVBhKj7gQU (Fibers in Naughty Dog engine)
https://www.createursdemondes.fr/wp-content/uploads/2015/03/parallelizing_the_naughty_dog_engine_using_fibers.pdf

1 Like

I don't think it is valid to say that cooperative multitasking is a mechanism to improve execution speed at all. In Swift's concurrency model, it is part of a means to enable structured programming, with the result that you, as a developer, are more efficient at writing bug-free, well-organized (and thus well-running) code. A performance gain is a potential secondary benefit of this (when you can easily parallelize things).

Perhaps it helps a little to look at the history of the term. While I don't know when it was academically conceptualized, the first systems used it to create the illusion of programs running in parallel (which is, in part, the reason why we sometimes argue about the differences between concurrency and parallelism today).

A good example is the old classic MacOS: processes (i.e. applications) ran "in parallel" by using cooperative multitasking. Every time an application program executed a "system call"[1], this could introduce a yielding point (in Swift's model today, the equivalent of an await).
The obvious downsides are that a crashed process could take down the entire system and that a "greedy" app could freeze everything up.
The system worked well (excluding crashes), if every process behaved well and did its best to yield when it could: Everyone was supposed to cooperate.

To the rescue came preemptive multitasking: basically, the underlying OS (pretty much the kernel) got the power to "enforce yielding". It can simply "freeze" a process, let another one run for a while, and then "unfreeze" the first one. To each process it looks like it is running all alone; it never needs to actively "cooperate" with another process or the system.

Note that for this to work, we needed a lot of infrastructure; the support goes down to the hardware itself AFAIK (or to put it differently: on older chip architectures preemptive multitasking was not feasible, perhaps even downright impossible. There were systems that allowed it, but I believe at least "regular" home computers, e.g. the Mac, did not have preemptive multitasking[2]).


Now, Swift concurrency (and other concurrency concepts in other languages, btw) "revives" the cooperative multitasking concept, just for threads (or abstractions above them).
The big benefit is that by introducing explicit yielding points into your code, you can write top-to-bottom code flows that include long-running "side tasks" without blocking execution.

The way I see it, that is the main benefit of the whole shebang.

Yes, to achieve this, behind the scenes the runtime parallelizes tasks[3], but that is more a means to an end than the end itself. Sure, if you have work that can greatly benefit from parallelization, it is now also easier to write it in a readable, efficient way, but that's a secondary win in my book.

So to sum this up, I think the "performance" gain about Swift concurrency is a human one: It is "more performant" to write and, especially, read code that includes asynchronous operations. It is more structured (top to bottom) and has less boilerplate dealing with "avoiding blocking one flow with another".
Even the unstructured Task helps here as it makes the points where you spawn off a "side piece of code" a little more distinct from a simple scope change, like an if (this is, ofc, debatable, but the fact we have a specific type to hold code started "in parallel" makes it easier, imo).


  1. things like fetching new input events, etc., but also drawing certain UI elements, IIRC ↩︎

  2. thanks to @ibex10 for pointing that out, btw! :smiley: ↩︎

  3. if it can, i.e. has multiple cores and threads at hand, which is probably the norm these days except on embedded systems ↩︎

8 Likes

If you have carefully designed GCD code, I wouldn't expect migrating to Swift Concurrency to increase performance just by that fact. The major benefits of migration I see are

  1. Cleaner code.
  2. Static compiler checks.

These can unveil some issues you might not be aware of and make it simpler to reason about the code. That can end up yielding some performance gains, especially if during migration you review design aspects of the solution, maybe get rid of some blocking code, etc. But, at least to my knowledge, there is no guarantee that you will actually see a measurable performance impact.

Yeah, it's a question of what we compare to what. I just understood the initial question in terms of Swift Concurrency vs. (largely) GCD. Yet on a larger scale there are also other questions to consider, like the one discussed in this thread a while ago.

2 Likes

And yet, it will become obligatory to write Swift code in this manner once Swift 5 becomes deprecated...

I honestly don’t understand the concerns about Swift 5 deprecation, as Xcode (probably the most representative case) still supports Swift 4, released some 7 years ago. Swift 6 mode isn’t mandatory, and there are no discussions of making it so. And even in Swift 6 mode you aren’t required to use only the new concurrency; you can still write GCD code, just with more compiler checks in some places.

4 Likes

well, it's more that there will probably be some sweet new language features that are supported only in Swift 6 mode, not Swift 5

Disagree. It's entirely possible to have co-operative multitasking with multiple threads. As @QuinceyMorris said, the difference between co-operative and pre-emptive multitasking is that the pre-emptive variety will actively take control from your thread, whereas in co-operative multitasking it is up to you to yield control explicitly.

This is not at all unusual. Most modern GUI systems — indeed, most modern event-driven systems — use a combined approach; they tend to have a co-operative event loop (which blocks, waiting for an event, thus yielding execution), but they run in a pre-emptive environment so if some thread doesn't yield the CPU by the end of its allotted time slice, the scheduler will forcibly switch to some other thread.

The big difference with systems like Swift concurrency is the use of async/await rather than explicit event loops, which facilitates co-operation with the compiler and allows for stackless co-operative behaviour. In event-loop driven co-operative programming, running the event loop will cause events to be dispatched from the current stack frame, and if those events also trigger a nested event loop, the stack will grow. The equivalent to running the event loop in the async/await model is awaiting a result, but because the compiler is able to transform the compiled code, this does not cause stack growth.

Of course, you don't have to use nested event loops, but to avoid them you end up having to transform your code manually into state machines.

The really big win is being stackless. If you try to create a million threads on almost any hardware you're ever likely to be playing with, you'll grind to a halt — you'll probably run out of memory for the thread stacks, and long before that you'll be swapping (assuming swap is enabled). Using lightweight Task objects instead of threads, and scheduling them into a thread pool sized appropriately for the machine on which they are running, you can create huge numbers of tasks without ever hitting this problem.
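The "huge numbers of tasks" claim is easy to demonstrate in miniature. This sketch (the task count and trivial workload are mine, chosen only for illustration) spawns tens of thousands of child tasks in a task group; each `Task` is a small heap allocation with no dedicated stack, so this is cheap where the same number of threads would exhaust memory:

```swift
// Spawn 50,000 lightweight tasks, all multiplexed onto a fixed-size
// thread pool. Creating 50,000 OS threads, each with its own stack,
// would be prohibitively expensive; creating 50,000 Tasks is not.
let total = await withTaskGroup(of: Int.self) { group in
    for i in 1...50_000 {
        group.addTask { i }   // trivial work; a real server would await I/O here
    }
    var sum = 0
    for await value in group {
        sum += value
    }
    return sum
}
print(total)  // 1250025000
```

With kernel threads at, say, a 512 KiB stack each, 50,000 of them would reserve roughly 25 GiB of address space before doing any work, which is exactly the scaling wall the task-based model avoids.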

The upshot is that servers built with Swift NIO, for example, can handle very large numbers of connections with a single server compared to the traditional process-per-connection or thread-per-connection designs. And yes, it's possible with more traditional servers to get fancy and run multiple connections per thread — but doing so is harder, and you typically end up running event loops to do so, which is in fact a form of co-operative multitasking, but essentially requires you to restructure your code as a state machine. When you go down the event loop route, you also have to deal with the fact that sometimes you have a long-running task that is triggered by an event, and if that task is CPU intensive you may decide to run a nested event loop, which then adds in re-entrancy and stack growth problems again.

async/await doesn't save you from re-entrancy (any await can result in other code running), but it does avoid thread explosion and stack growth as well as letting you write straight-line code and handing off the complexity of turning that into a state machine to the compiler.

6 Likes

I wasn't sure, so I just checked. You can use typed throws, a feature introduced in Swift 6, with the Swift 4 language mode.

Given that, I'm not saying your concern is completely invalid. But I think using this as a motivator for moving to Swift 6 mode is pretty hard to justify.

6 Likes

oh wow! Maybe this is our future - that Swift 5 will include all the features, and Swift 6 is just to enable the strict concurrency checks

Yeah, as I tried to convey in the other thread, language modes are intended only to gate source-breaking aspects of proposals, while the purely additive portions are implemented without being gated behind a language mode.

10 Likes

When you call an async function from some isolation domain, e.g. main actor, and this async function is isolated to the same domain, you effectively use cooperative multitasking. The function will run in the same thread as its caller. But if the async function is isolated to a different domain (actor), or is non-isolated, it will run in a different thread in a pre-allocated thread pool, that is in parallel to the caller, using preemptive multitasking.
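A small sketch of those two cases (the `Counter` actor and `onMain` function are hypothetical examples, not code from this thread): a call into an actor's isolation domain hops to that actor's executor, while a main-actor call from main-actor code stays put.

```swift
import Foundation

// An actor is its own isolation domain: calls from outside hop to the
// actor's executor, which may be a different thread in the cooperative pool.
actor Counter {
    private var value = 0
    func increment() -> Int {
        value += 1
        return value
    }
}

// A @MainActor-isolated async function called from main-actor code stays
// on the main actor's executor: same thread, purely cooperative.
@MainActor
func onMain() async -> String { "still on the main actor" }

let counter = Counter()
let first = await counter.increment()   // crosses into Counter's domain
let second = await counter.increment()
print(first, second)  // 1 2
```

Note that whether a cross-actor call truly runs "in parallel" depends on the runtime's scheduling; what the isolation rules guarantee is which executor the code runs on, not which physical thread.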

Quote from Wikipedia, see link above:

The term preemptive multitasking is used to distinguish a multitasking operating system, which permits preemption of tasks, from a cooperative multitasking system wherein processes or tasks must be explicitly programmed to yield when they do not need system resources.