OK, boy do I have feedback for you. In case you're wondering why it comes only now, see [SR-15245] [Concurrency] Compiler crashes on attached input (apple/swift issue #57567), which had stumped me until recently.
The first item of feedback is: please provide a bidirectional and more practical alternative to SE-0314: Async(Throwing)Stream (which itself I initially missed since it wasn't listed in the initial post, but maybe it hadn't made it to that first beta).
I will say this for AsyncStream: it exists, and as such it can form the basis for more elaborate inter-task exchange primitives; without AsyncStream, those couldn't exist at all. But implementing them on top of AsyncStream is awkward because of its limitations and idiosyncrasies.
Now, how limited are we talking? The answer is: very. The main limitation is that the iterator has value semantics and its next() method is mutating (on top of being async, of course). That prevents it from being protected by an actor: an actor (currently?) can't protect state across suspension points, and so refuses to run such a method on one of its stored properties. OK, I guess that's the intent: the AsyncStream receiving end is meant to have Task affinity, to help with priority inheritance for instance, which is a Good Thing. But the continuation (the sending end) does not have the same restrictions; doesn't that prevent identifying the task that will have to call that end to unblock the receiving task, and that therefore needs its priority boosted? Unless the runtime is able to make that determination from past usage, but then why aren't those smarts applied to the receiving end, lifting those Task-affinity restrictions?
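To make that concrete, here's the minimal shape of what the compiler rejects (a sketch of mine, not code from my project):

```swift
actor Receiver {
    // Storing the iterator as actor state is the obvious thing to try…
    var iterator: AsyncStream<Int>.Iterator

    init(stream: AsyncStream<Int>) {
        self.iterator = stream.makeAsyncIterator()
    }

    func nextValue() async -> Int? {
        // …but this doesn't compile: next() is both mutating and async, so the
        // actor would have to keep `iterator` exclusively accessed across a
        // suspension point, which it refuses to do.
        await iterator.next()
    }
}
```

The workaround, as far as I can tell, is to keep the iterator as a local inside some Task, which is exactly the Task affinity described above.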
Then there is the fact that the continuation is not provided directly, but as a parameter to a callback that is invoked asynchronously, by which I mean from a different Task… yet the callback itself isn't async, which means I need to spawn my own Task if I need to await anything while using the continuation. All of that makes the API well suited to the primary use case (adapting existing synchronous code) at the cost of making it less suited to every other use case: for instance, I can't send the continuation through its own stream so as to recover it from the iterator and wrap the whole thing into a saner primitive, because even with a tagged enum the Swift type system won't allow that. And I don't have access to a handle on that internal Task, which would let me await it for the same purpose…
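For reference, here's the shape I keep ending up with (a sketch; someUpstream is a stand-in for whatever asynchronous source I actually have), which shows both problems: the awaiting has to go through a Task I spawn myself, and I never get a handle on that Task back.

```swift
func adaptedStream(from someUpstream: AsyncStream<Int>) -> AsyncStream<Int> {
    AsyncStream<Int> { continuation in
        // The continuation only exists inside this synchronous callback, so
        // awaiting anything means spawning a Task of my own…
        Task {
            for await value in someUpstream {
                continuation.yield(value)
            }
            continuation.finish()
        }
        // …and nothing hands me that Task back, nor a sanctioned way to move the
        // continuation itself to the code that ought to be doing the sending.
    }
}
```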
The way it is, it's as if the Mach microkernel only ever allowed a Mach port to be created by a call that returned a receive right that couldn't possibly be transmitted to another Mach task, and that didn't return a send right but instead passed it as a parameter to a new Mach task spawned as part of that call, leaving it your responsibility to get that send right to the proper place from there. Between that and fork(2) + exec(2) + pipe(2), the latter sounds unexpectedly appealing: at least pipe(2) returns both descriptors to the same place and places no restriction on where you can use them.
Now you may be wondering: how bad could it possibly be? I think the answer is best provided by one of the types I had to involve to create a bidirectional primitive: AsyncStream<PayloadOrOutOfBand<Response, AsyncStream<Request>.Continuation>> (Request and Response substituted in for legibility): I couldn't figure out any other way for the requesting task to get its continuation handle than by making it go through the AsyncStream that is otherwise used for it to receive responses.
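For the curious, the shape of that workaround is roughly the following (a simplified sketch; the real definitions are in the project, and Request/Response stand for my actual payload types):

```swift
struct Request {}
struct Response {}

// Either an ordinary payload, or something delivered "out of band" on the same stream.
enum PayloadOrOutOfBand<Payload, OutOfBand> {
    case payload(Payload)
    case outOfBand(OutOfBand)
}

// The response stream has to double as the channel through which the requesting
// task recovers the continuation it will then use to send its requests.
typealias ResponseChannel =
    AsyncStream<PayloadOrOutOfBand<Response, AsyncStream<Request>.Continuation>>
```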
In my humble opinion, it's better for primitives to be bidirectional by default; I believe that to be one of the main takeaways from the success of the sockets API. I recognise that bidirectional exchange primitives have a number of implications, one of which is mandating the ability to wait on multiple events simultaneously (the equivalent of select(2)), which appears to be incompatible with receiving calls that are simultaneously async and mutating/inout; but I also believe these complications to be worth it in the end.
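To give an idea of what the absence of a select(2) equivalent means in practice, here is roughly what waiting on whichever of two streams fires first looks like today (my own sketch, unrelated to the project): because of the mutating-plus-async restriction above, each iterator has to live as a local in its own freshly spawned child task.

```swift
// Race two streams: return the first request, or nil if a cancellation signal wins.
func firstRequest(requests: AsyncStream<Int>, cancellations: AsyncStream<Void>) async -> Int? {
    await withTaskGroup(of: Int?.self) { group in
        group.addTask {
            var iterator = requests.makeAsyncIterator()
            return await iterator.next()
        }
        group.addTask {
            var iterator = cancellations.makeAsyncIterator()
            _ = await iterator.next()
            return nil
        }
        // Take whichever child task finishes first, then cancel the other one
        // (cancellation makes the losing next() return nil).
        let winner = await group.next() ?? nil
        group.cancelAll()
        return winner
    }
}
```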
Here's my experiments project so you can take a look at how I've done it. I have yet to write an explainer for it, so here's the short version:
Whenever the recursive system in (Async)Explorers.swift would delegate to iteratePossibleLeftNodes(), it instead calls an injected dispatch function that checks whether that delegation would best be performed in a concurrent context at this point (otherwise, that function just delegates directly). When offloading does happen, the iteratePossibleLeftNodes() running inside that concurrent context is injected with a dispatch function that short-circuits the check: there is no point in spawning further tasks from there. The architecture is designed so that different concurrency APIs can be injected: the GCD version is probably the easier to follow, so read that one first, in DispatchAPI.swift. DispatchAPI.swift intentionally makes no use of async Swift, so then read AsyncAPI.swift, its async Swift counterpart. PullAPI.swift is not an alternative to those; rather, it is an experiment in how a pull-based concurrency API could be conceived, and AsyncAPI.swift now relies on it.
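In case the injection scheme is hard to picture without opening the project, here's a much-reduced sketch of the idea (names and signatures are illustrative; the real ones in (Async)Explorers.swift and DispatchAPI.swift differ):

```swift
import Dispatch

// Injected below an offloading point: run the work inline, so no further
// concurrency is introduced from there.
func inlineDispatch(_ exploration: @escaping () -> Void) {
    exploration()
}

// The GCD flavor of offloading (the project also has an async Swift flavor).
func concurrentDispatch(_ exploration: @escaping () -> Void) {
    DispatchQueue.global().async { exploration() }
}

// Stand-in for the recursive explorer (iteratePossibleLeftNodes() in the project):
// at each level, the injected dispatch function decides whether the subtree
// runs inline or in a concurrent context.
func explore(depth: Int, dispatch: (@escaping () -> Void) -> Void) {
    guard depth > 0 else { return }
    dispatch {
        // Below the offloading point, short-circuit any further offloading.
        explore(depth: depth - 1, dispatch: inlineDispatch)
    }
}

// Top level: allow the first layer of subtrees to be offloaded. (In a real
// program you'd also wait for the offloaded work to complete, of course.)
explore(depth: 3, dispatch: concurrentDispatch)
```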