I appreciate that the buffering semantics are laid out in the pitch.
The pitch now describes two supported behaviours:
- Buffer + Drop Oldest: buffer up to N elements, and drop the oldest on buffer overflow.
- Drop On Yield: drop the element when there is no awaiting call on `next()`.
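For concreteness, here is a minimal sketch of the two behaviours. The `bufferingPolicy:` and `.bufferingNewest` spellings are my own placeholders, not taken from the pitch text:

```swift
// Buffer + Drop Oldest: keep at most 8 buffered elements,
// discarding the oldest whenever the buffer overflows.
// (`.bufferingNewest` is an assumed spelling, not from the pitch.)
let dropOldest = AsyncStream<Int>(bufferingPolicy: .bufferingNewest(8)) { continuation in
    for value in 0..<100 {
        continuation.yield(value) // once 8 values are pending, the oldest is discarded
    }
    continuation.finish()
}

// Drop On Yield: with a zero-capacity buffer, a yielded element is
// dropped unless a consumer is currently suspended in next().
let dropOnYield = AsyncStream<Int>(bufferingPolicy: .bufferingNewest(0)) { continuation in
    _ = continuation.yield(42) // expected to report the value as dropped when nobody is awaiting
    continuation.finish()
}
```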
These would indeed cover many of the existing stream-of-values/PubSub use cases to be bridged into the new Swift async world. Most of these are "hot" streams, in the sense that they are broadcast unilaterally by their publisher, regardless of the presence, preference or processing rate of the subscribers. So buffering and/or throttling are sensible options in this context.
However, the pitch does not quite offer an answer to the backpressure use cases that would be opened up (or made easier to implement) by the introduction of Swift async-await. A couple of notable examples:
- A publisher may want to adapt to a slow subscriber, including any further asynchronous work spawned by the subscriber (under the Structured Concurrency model); e.g. a database query listener stream, which may observe dirty notifications at a very high frequency, but does not rerun the database query and yield the result set until the consumer has finished processing the previously yielded value. In Kotlin Coroutines, this behaviour maps to the SUSPEND buffer overflow strategy, which suspends the producer's emitting/yielding call until the buffer has space, or (if no buffer space is configured) until the consumer is done with the previous emission (a rough sketch of this suspending handoff follows the list).
- In some scenarios, we may need the guarantee of a subscriber receiving every single element yielded by the publisher; e.g. chunked file content processing using an `AsyncStream`. In Kotlin Coroutines, this behaviour maps to a non-buffering Rendezvous channel, and is the default behaviour of "cold" `Flow<T>`.
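To make the "suspend the producer" idea concrete, here is a rough, hypothetical sketch (none of these names come from the pitch) of a rendezvous-style handoff built from an actor and checked continuations. `send` suspends until a consumer is waiting in `receive`, which is essentially the non-buffering SUSPEND/Rendezvous behaviour described above:

```swift
// Hypothetical sketch; `RendezvousChannel` is my own illustration, not pitched API.
actor RendezvousChannel<Element: Sendable> {
    private var waitingReceivers: [CheckedContinuation<Element, Never>] = []
    private var waitingSenders: [(Element, CheckedContinuation<Void, Never>)] = []

    /// Suspends the producer until a consumer is ready to take the element,
    /// i.e. Kotlin's SUSPEND overflow strategy with no buffer (a rendezvous).
    func send(_ element: Element) async {
        if !waitingReceivers.isEmpty {
            waitingReceivers.removeFirst().resume(returning: element)
            return
        }
        await withCheckedContinuation { continuation in
            waitingSenders.append((element, continuation))
        }
    }

    /// Suspends the consumer until a producer delivers an element.
    func receive() async -> Element {
        if !waitingSenders.isEmpty {
            let (element, sender) = waitingSenders.removeFirst()
            sender.resume() // the producer's send(_:) returns only now
            return element
        }
        return await withCheckedContinuation { continuation in
            waitingReceivers.append(continuation)
        }
    }
}
```

With this shape, the database listener above would simply `await channel.send(resultSet)` after each query run, so it cannot outpace the consumer; a bounded-buffer variant would resume a waiting sender as soon as buffer space frees up.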
Part of the premise of the Swift async-await model, as I understand it, is to bake asynchrony on top of the familiar linear control-flow model. In the synchronous world, elements are guaranteed to be walked through one by one, and dropping happens only when explicitly requested (via lazy operators). So, in my opinion, the asynchronous counterpart should maintain the same mental model and default semantics.
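As a small illustration of that parity (my own example, not from the pitch), the synchronous loop below visits every element exactly once, and the asynchronous spelling reads the same way, so a drop-by-default bridge would be surprising:

```swift
let lines = ["a", "b", "c"]

// Synchronous control flow: every element is walked through, one by one.
for line in lines {
    print(line)
}

// The asynchronous counterpart reads identically; the argument is that it
// should keep the same "no element silently skipped" default semantics.
let lineStream = AsyncStream<String> { continuation in
    for line in lines { continuation.yield(line) }
    continuation.finish()
}

Task {
    for await line in lineStream {
        print(line)
    }
}
```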
I also wonder how well `AsyncStream` can bridge to Combine without backpressure support in one form or another. As far as I understand, an out-of-the-box Combine Publisher neither buffers nor drops elements; it emits only in response to downstream demand.
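For instance, the most direct bridge I can think of (a sketch under my own assumptions; `bridge(_:)` is not pitched API) has to `sink` with unlimited demand, at which point Combine's subscriber-driven demand signalling is lost and the stream's buffering/dropping policy is the only pressure valve left:

```swift
import Combine

// Hypothetical helper, not part of the pitch. `sink` requests unlimited demand,
// so downstream demand no longer reaches the publisher; the (arbitrarily chosen)
// buffering policy has to absorb whatever the consumer cannot keep up with.
func bridge<P: Publisher>(_ publisher: P) -> AsyncStream<P.Output> where P.Failure == Never {
    AsyncStream(bufferingPolicy: .bufferingNewest(16)) { continuation in
        let cancellable = publisher.sink { value in
            continuation.yield(value) // may drop if the consumer lags behind
        }
        continuation.onTermination = { @Sendable _ in
            cancellable.cancel() // stop the upstream once iteration ends
        }
    }
}
```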