Apologies for the late reply here, @John_McCall!
Wanted to prepare a detailed writeup but then got pulled into some work stuff...
Right, worth being more specific here. You got the use-case right: "run specific work off the default pool, and instead on some other specified pool" is the gist of it.
While this is perhaps worth a separate thread (maybe we can split this out if it's worth discussing right now? or wait until we're ready to deep dive into those topics), I wanted to write up some of what would help address this problem, even if just as food for thought:
1. First, one approach here is to make a specific actor run on a specific thread pool, by overriding its unownedExecutor.
Now, I'll admit that this is a bit unusual to me (my brain is still very used to how Akka does this): in Swift actors' current shape, the actor gets its serial execution semantics from its executor, i.e. the executor it has must be a SerialExecutor, and that's where the serial actor execution semantics come from.
I'm a bit more used to treating the "executor" (or "dispatcher") as a "dumb" thread pool, with the actor's implementation making sure it only submits "run X now" to that dumb pool/dispatcher/executor, which just runs whatever it is given. But I'll do my best to stick to Swift wording for consistency!
One approach that helps here is to allow specific actors to declare where they should run all their tasks. This is not unlike Swift's MainActor, which happens to use the main thread.
// Let's say, in my system I want to dedicate 4 threads (made up number)
// to blocking tasks, and I'll want to make sure all blocking work is done
// on those, rather than on the global pool:
let dedicatedToBlockingTasksPool = ThreadPoolExecutor(threads: 4)
// This is NOT a SerialExecutor, it is an Executor though:
//
// public protocol Executor: AnyObject, Sendable {
//   func enqueue(_ job: UnownedJob)
// }
Next, I can have any number of actors which I know will be doing blocking work, and I make them all use this pool:
actor Blocker {
  let resource: Resource

  // overrides: nonisolated var unownedExecutor: UnownedSerialExecutor { get }
  let unownedExecutor: UnownedSerialExecutor

  init(r: Resource, executor: ThreadPoolExecutor) {
    self.resource = r
    // "give me an actor (Serial) executor onto this thread pool"
    self.unownedExecutor = executor.unownedSerialExecutor
  }

  // async, but we'll do this work on the blocking executor, good.
  func blockingWorkOnResource() async {
    // <do blocking (e.g. IO) on Resource>
  }
}
So this is nice, because all methods on this actor would hop to the dedicated pool, and it can do whatever kind of blocking on the resource it needs to do, without blocking the shared global width-limited pool.
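For what it's worth, here's a minimal sketch of what the adoption mechanics could look like today, with a dedicated serial DispatchQueue standing in for the made-up ThreadPoolExecutor (so it collapses the "4 threads wide" pool down to a single serial queue, but keeps the "my blocking work never lands on the global pool" property); BlockingSerialExecutor is a name I just made up for this:

import Dispatch

// Sketch only: a SerialExecutor backed by its own dispatch queue, so any actor
// adopting it via unownedExecutor does all of its work off the global pool.
final class BlockingSerialExecutor: SerialExecutor {
  private let queue = DispatchQueue(label: "blocking-work")

  func enqueue(_ job: UnownedJob) {
    // Hop over to the dedicated queue and run the job there.
    queue.async {
      job.runSynchronously(on: self.asUnownedSerialExecutor())
    }
  }

  func asUnownedSerialExecutor() -> UnownedSerialExecutor {
    UnownedSerialExecutor(ordinary: self)
  }
}

An actor would then adopt it exactly like Blocker does above, i.e. by returning executor.asUnownedSerialExecutor() from its unownedExecutor.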
This is exactly "the akka way", where we'd tell people to dedicate some threads for their IO and run e.g. their database calls on those. In Scala/Akka, the "dispatcher" can be used for either Futures or actors, since it's just a thread pool; the pattern looks like this:
// config file
my-blocking-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 16
  }
  throughput = 1
}
Which is then given to actors:
// scala / akka, just to exemplify the pattern:
context.spawn(<my actor>, "name", DispatcherSelector.fromConfig("my-blocking-dispatcher"))
// which is roughly equivalent to this in Swift:
<MyActor>("name", <the dispatcher>)
But they can also be passed to Futures (so, the equivalent of our Task {}):
// scala / akka, just to exemplify the pattern:
implicit val blockingDispatcher = system.dispatchers.lookup("my-blocking-dispatcher")
Future { Thread.sleep(10000) } // uses blockingDispatcher implicitly
// equivalent to:
// Future({ Thread.sleep(10000) }) (blockingDispatcher)
// https://www.scala-lang.org/api/2.13.x/scala/concurrent/Future.html
Which brings us to the second part, running specific tasks on specific pools:
2. Sometimes it is useful to run a specific task on a specific pool / executor, without having to go all the way to make an actor for it.
Though perhaps we could say we don't do that in Swift, and instead resort to global actors: one would declare a global MyIOActor, give it the executor like we did in 1., and call it a day? Then we could:
Task { @MyIOActor in blocking() }
which could be nice as well...?
Otherwise, an approach would be to pass an executor to a Task:
Task(runOn: myBlockingExecutor) { ... }
The global actor approach may be worth exploring though...
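To make the global actor variant a bit more concrete, here's a small sketch (reusing the BlockingSerialExecutor from the sketch in 1.; MyIOActor and blocking() are made-up names):

// A global actor pinned to the dedicated executor, so `Task { @MyIOActor in ... }`
// hops off the global pool.
@globalActor
actor MyIOActor {
  static let shared = MyIOActor()

  private static let blockingExecutor = BlockingSerialExecutor()

  nonisolated var unownedExecutor: UnownedSerialExecutor {
    Self.blockingExecutor.asUnownedSerialExecutor()
  }
}

func blocking() { /* some blocking call, e.g. IO */ }

// The closure is isolated to MyIOActor, so it runs on its dedicated executor:
Task { @MyIOActor in blocking() }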
3. And since we want to push developers to use child tasks whenever possible, they may also need this ability.
In TaskGroups the API can be rather easily extended to provide "where to run the child tasks":
withTaskGroup(of: Int.self, childTaskExecutor: myBlockingExecutor) { group in
  group.addTask { /* uses myBlockingExecutor */ }
  group.addTask(on: otherExecutor) { /* uses otherExecutor */ }
}
So we could have this way to kick off some blocking tasks in a structured fashion.
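(As an aside: a rough approximation that already works today is to hop to a dedicated actor, like the Blocker from 1., inside each child task; just a sketch, assuming the names from above:

// Each child task immediately hops to the Blocker actor, so the blocking work
// runs on its dedicated executor rather than on the default pool.
func runBlockingWork(blocker: Blocker) async {
  await withTaskGroup(of: Void.self) { group in
    for _ in 0..<4 {
      group.addTask {
        await blocker.blockingWorkOnResource()
      }
    }
  }
}

The hypothetical childTaskExecutor parameter would express the same intent without the extra actor hop.)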
It gets more complicated with async let, where we probably would need to resort to some scope pattern?
Executors.startChildTasks(on: blockingExecutor) {
  async let x = io() // the child task would run on blockingExecutor
}

// or we'd have to invent some other way to annotate, maybe ideas like:
async let x = { @IOActor in ... } // could be a thing...?
could be one way to achieve this... though we'd likely have to carry this executor preference in a task-local (or make other space for it).
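Purely as a thought experiment, carrying such a preference in a task-local could look roughly like this (none of these names exist today, and actually honoring the preference when enqueuing child tasks is the hard part this sketch skips):

// Hypothetical: a task-local "preferred executor" that child tasks (which inherit
// task-locals) could consult when deciding where to enqueue.
enum ExecutorPreference {
  @TaskLocal static var preferred: (any Executor)? = nil
}

func withExecutorPreference<T: Sendable>(
  _ executor: any Executor,
  _ body: () async throws -> T
) async rethrows -> T {
  try await ExecutorPreference.$preferred.withValue(executor) {
    try await body()
  }
}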
So overall, it all boils down to having to call some blocking code in various situations, and making sure this code won't execute on the global pool.
Visually (these diagrams are taken from an ancient writeup I did over here), we never want to have any blocking work on the "default" pool (cyan = blocking/sleeping, green = running, orange = waiting), like the following bad situation visualizes:
(default pool is completely saturated by sleeps/blocking)
but instead, we'd want a world where all the bad things are still happening, but on their own executor/pool, separated from the global pool:
(default pool is not busy at all, ready to serve requests, but the blocking pool is busy sleeping/blocking). This is better since it allows the server to remain responsive and reply with 500s or timeouts, or do whatever else it needs to be doing...
Not sure how helpful the writeup is, but I figured I could collect a bunch of ideas and requests regarding this to give the discussion something more concrete; though I'm also happy to delay diving deeper until we're ready to discuss custom executors more, etc.