Isolating "blocking calls" (be they long-running, non-interruptible IO, etc.) onto separate threads primarily serves not to make anything faster, but to isolate the rest of the system from the unresponsiveness during those operations.
Having to call a "bad, long running / blocking API which I don't control" is absolutely a thing, and putting those calls on a different thread primarily serves to keep the existing threads able to respond, rather than the system becoming unable to respond at all once n (cpu count) of those bad calls are happening.
It may be better to have a dedicated thread for those "bad calls" and just execute them serially, rather than let them prevent the entire system from reacting to anything until the calls complete.
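As a rough sketch of that pattern in today's Swift (the queue label and function names here are made up for illustration): hop onto a dedicated dispatch queue for the blocking call and bridge back with a continuation, so the cooperative pool's threads stay free to serve other work:

```swift
import Dispatch
import Foundation

// A single dedicated queue for the "bad calls"; executing them serially
// here means they can never occupy the cooperative pool's threads.
let blockingQueue = DispatchQueue(label: "my-blocking-queue")

// Stand-in for a long-running / blocking API we don't control.
func legacyBlockingCall() -> Int {
    Thread.sleep(forTimeInterval: 0.1) // simulate blocking I/O
    return 42
}

// Wrap the blocking call so async callers suspend instead of blocking:
// the actual blocking happens on `blockingQueue`, not on a pool thread.
func callOffTheSharedPool() async -> Int {
    await withCheckedContinuation { continuation in
        blockingQueue.async {
            continuation.resume(returning: legacyBlockingCall())
        }
    }
}
```

The async wrapper suspends its caller rather than blocking it, which is exactly the "keep the default threads runnable" behavior from the diagrams.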
I've illustrated this issue way back with an akka http web server and the story is really the same for any shared concurrency pool, such as Swift's default global concurrent executor. Notice the difference between:
"just use the default pool":
and "isolate the blocking to a dedicated threadpool" ("my-blocking-dispatcher" in this diagram):
( Turquoise - Sleeping state | Orange - Waiting state | Green - Runnable state)
You'll notice that by isolating the blocking, we kept all the default threads waiting or runnable, so the system is able to respond to other jobs/tasks. In the first example, meanwhile, everything is blocked (sleeping): we'll miss timeouts and health-checks and are basically entirely unresponsive until the work completes. This is why isolating blocking operations matters -- not really for speeding anything up.
I know that on Apple platforms and devices opinions differ on whether it's OK to do IO on the shared pool -- some would say it's okay, and perhaps they're right on a device.
But on a server, where you have many more concurrent requests and clients, it really isn't an option -- if you do so, you're going to time out requests, health-checks and other such time-sensitive jobs.
So as always it boils down to understanding your application and doing what's right for it.
If you want to customize where execution happens, Swift provides two tools for this: custom actor executors, available since Swift 5.9, and the upcoming custom task executors feature, which will undergo Swift Evolution discussion soon.
To replicate the above diagrams, it basically comes down to creating additional executors dedicated to the IO tasks and using them as task and serial executors.
You can also use a dispatch queue as a custom executor, since Swift 5.9. So we're slowly providing the necessary control to isolate such work onto dedicated threads/queues. Please give the task executors proposal a look as well -- it is aimed to help with exactly this.
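For the actor-executor side, a minimal sketch (assuming Swift 5.9 and a runtime where `DispatchSerialQueue` conforms to `SerialExecutor`; the actor and queue names are mine, and the blocking work is stubbed out):

```swift
import Dispatch

// An actor whose isolated code all runs on a dedicated serial dispatch
// queue rather than on the default global concurrent executor.
actor BlockingIOWorker {
    private let queue = DispatchSerialQueue(label: "my-blocking-queue")

    // Point the actor's serial executor at the dispatch queue.
    nonisolated var unownedExecutor: UnownedSerialExecutor {
        queue.asUnownedSerialExecutor()
    }

    // Blocking work invoked here occupies only `queue`'s thread,
    // never the shared cooperative pool.
    func performBlockingCall() -> Int {
        // e.g. a blocking read; stubbed out for this sketch
        return 42
    }
}
```

Calls like `await worker.performBlockingCall()` then execute on the dedicated queue, mirroring the dedicated-dispatcher setup in the diagrams above.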