Emphasis on "presumably" there, though?
Possibly what you mean is that practically these are just rare edge cases?
By that token, I suppose if you replaced 'mutex' with 'spinlock' I'd be more convinced. But even then, nothing technically stops deadlock from happening when you have finite concurrency and block a thread. Once you have third-party code and multiple libraries layered atop each other, things devolve into unknown territory quickly.
With GCD, deadlock is also possible in much the same ways, because GCD also has a hard cap on thread pool size. But there are at least two differences:
- The GCD documentation is very clear that blocking inside a dispatch queue thread is dangerous:

  > When designing tasks for concurrent execution, do not call methods that block the current thread of execution. When a task scheduled by a concurrent dispatch queue blocks a thread, the system creates additional threads to run other queued concurrent tasks. If too many tasks block, the system may run out of threads for your app.
- GCD concurrent queues support over-committing the CPU cores, by an order of magnitude or more. I believe it used to be 64 threads per root queue (the four QoS classes), but it looks like it's now 255 per root queue.
So GCD is way more tolerant of "bad" code.
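To make that tolerance concrete, here's a hypothetical sketch (not from any real codebase; the count of 50 is chosen to stay under even the old 64-threads-per-root-queue cap): a batch of work items that all block a GCD thread, plus one more item that unblocks them. GCD over-commits past the core count, so the final item still gets a thread and everything completes.

```swift
import Dispatch

// Sketch: submit 50 work items to a global concurrent queue, each of
// which blocks its thread on a semaphore. GCD responds to blocked
// threads by spinning up more (up to its per-root-queue cap), so the
// final, signaling item still gets a thread -- at the cost of thread
// explosion rather than deadlock.
let group = DispatchGroup()
let gate = DispatchSemaphore(value: 0)
let blockedCount = 50  // safely under the historical 64-thread cap

for _ in 0..<blockedCount {
    DispatchQueue.global().async(group: group) {
        gate.wait()  // blocks this GCD worker thread
    }
}

DispatchQueue.global().async(group: group) {
    // GCD has over-committed well past the core count, so this item
    // runs even though 50 threads are already blocked.
    for _ in 0..<blockedCount { gate.signal() }
}

group.wait()  // completes on GCD; the same pattern wedges the cooperative pool
```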
Deadlocking due to thread exhaustion is rare (in my experience) with GCD, but all too easy with Swift Concurrency.
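For contrast, here's the same blocking pattern on the cooperative pool (again a deliberate, hypothetical sketch, and one you should never run in real code): Swift Concurrency's pool is sized to the core count and never grows, so once every pool thread is blocked, the task that would unblock them can never be scheduled.

```swift
import Dispatch
import Foundation

// DANGER: intentional deadlock demo. On an N-core machine, the
// cooperative thread pool has roughly N threads and does not grow.
let gate = DispatchSemaphore(value: 0)
let coreCount = ProcessInfo.processInfo.activeProcessorCount

for _ in 0..<coreCount {
    Task.detached {
        // Blocks a cooperative-pool thread. With the pool capped at the
        // core count, these tasks collectively exhaust it.
        gate.wait()
    }
}

Task.detached {
    // This task would unblock everyone, but there is no free pool
    // thread left to run it: thread-exhaustion deadlock.
    for _ in 0..<coreCount { gate.signal() }
}

dispatchMain()  // park the main thread; the program hangs forever
```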
I don't understand why Swift Concurrency won't over-commit the CPU cores… a little too ideologically zealous, perhaps? I get that it all works fine if everything is Concurrency-savvy, but alas we don't live in that world. Yet, at least.