Algorithm for connection pooling with structured concurrency

traditionally, connection pools are implemented in an object-oriented fashion, with an actor protecting the pool state and mediating “check-out” and “check-in” events.

for example, a (terribly underspecified) procedure might look like:

  1. attempt to check out a channel that is already available in the pool. (enqueue this request on the actor loop and wait for its result.)
  2. if a channel is not available, stash a CheckedContinuation somewhere in the pool type and instruct the pool to establish a new channel if it is not at capacity; otherwise, wait for another task to check in a channel it is done using.
  3. when a channel becomes available, resume one of the stored continuations, which checks it out again.
  4. trust that the channel-user will check the channel back into the pool once it is finished making its request and awaiting its response, and do this promptly instead of blocking on unrelated work.
  5. somehow integrate deadlines and lifecycle management into this system.
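the procedure above might be sketched roughly like this. this is a minimal, lifecycle-free sketch: `Channel` is a hypothetical stand-in for a real connection type, and the deadlines and lifecycle management from step 5 are omitted entirely.

```swift
// a hypothetical stand-in for an expensively established connection.
final class Channel {}

actor ConnectionPool {
    private let capacity: Int
    private var available: [Channel] = []
    private var established = 0
    private var waiters: [CheckedContinuation<Channel, Never>] = []

    init(capacity: Int) { self.capacity = capacity }

    // steps 1–2: check out an available channel, establish a new one
    // if under capacity, or suspend on a stored continuation.
    func checkOut() async -> Channel {
        if let channel = available.popLast() {
            return channel
        }
        if established < capacity {
            established += 1
            return Channel() // stand-in for expensive channel setup
        }
        return await withCheckedContinuation { continuation in
            waiters.append(continuation)
        }
    }

    // steps 3–4: hand a checked-in channel straight to a waiter,
    // or return it to the pool.
    func checkIn(_ channel: Channel) {
        if !waiters.isEmpty {
            waiters.removeFirst().resume(returning: channel)
        } else {
            available.append(channel)
        }
    }
}
```

note how much of the trouble is visible even in the sketch: the continuations are unstructured, and nothing enforces step 4, the promise that every caller eventually checks its channel back in.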

this uses a lot of unstructured concurrency, it is difficult to understand, implement, and test, and it requires designing and exposing a lot of API for something that should really be an implementation detail. from an external perspective, i think you should never have to think about connection pooling: everything should just “multiplex or time-out”.

which has me thinking about ways to use TaskGroups and AsyncStreams to implement multiplexing, so that callers never have to think about channels, only about requests and responses.

a few assumptions make this tractable:
  • setting up a channel is very expensive (so pruning is neither needed nor desirable)
  • minimum and maximum multiplexing width is constant
  • callers never need to hold onto channels themselves; they only need to submit requests and await responses.
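under those assumptions, a structured multiplexer might look something like the sketch below: a single dispatcher consumes an AsyncStream of (request, continuation) pairs inside a TaskGroup, and each child task hands its channel back to the dispatcher as its result, so channel ownership never escapes the group. `Request`, `Response`, and `Channel` are hypothetical stand-ins, and time-outs are again left out.

```swift
// a sketch of "multiplex" (without the "or time-out" part):
// callers only see requests and responses; channels never leave the group.
struct Request: Sendable { let payload: String }
struct Response: Sendable { let payload: String }

// stand-in for an expensively established connection that serves
// one request at a time.
struct Channel: Sendable {
    func perform(_ request: Request) async -> Response {
        Response(payload: "echo: \(request.payload)")
    }
}

typealias Submission = (Request, CheckedContinuation<Response, Never>)

// the caller-facing surface: submit a request, await its response.
final class Multiplexer: Sendable {
    private let submit: AsyncStream<Submission>.Continuation

    init(submit: AsyncStream<Submission>.Continuation) { self.submit = submit }

    func request(_ request: Request) async -> Response {
        await withCheckedContinuation { submit.yield((request, $0)) }
    }
}

// the dispatcher: constant multiplexing width, channels established up
// front (setup is expensive and pruning is undesirable), and each child
// task returns its channel when its request completes.
func runPool(width: Int, requests: AsyncStream<Submission>) async {
    await withTaskGroup(of: Channel.self) { group in
        var idle: [Channel] = (0 ..< width).map { _ in Channel() }
        for await (request, continuation) in requests {
            let channel: Channel
            if let reused = idle.popLast() {
                channel = reused
            } else {
                // all channels busy: wait for a child task to hand one back.
                // the invariant idle.count + running tasks == width makes
                // this force-unwrap safe.
                channel = (await group.next())!
            }
            group.addTask {
                continuation.resume(returning: await channel.perform(request))
                return channel
            }
        }
        for await _ in group {} // drain outstanding requests
    }
}
```

the only unstructured pieces left are the per-request continuations, and each of those is resumed by exactly one child task whose lifetime the group owns; everything else is ordinary structured concurrency that cancellation and deadlines could hook into.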

does anyone have any experience implementing a more-structured form of connection pooling, and if so, what approaches worked for you?