Should AsyncSequence replace Combine in the future, or should they coexist?

Interesting use cases. I can't tell whether the couple of points I raised in the previous review of SE-0314 are covered here: conflation/merging as a buffering policy, and the ability to signal when an initial snapshot of data has been received. So I wanted to bring those up again too (basically the "more advanced buffer type" you mention in the reply down thread there, plus some signal for "initial data set received").
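
For what it's worth, conflation to the newest value is expressible today with AsyncStream's built-in buffering policy. The "initial data set received" signal isn't, but one hypothetical way to model it is to widen the element type. A rough sketch (the `SnapshotEvent` enum is an illustration of the idea, not an existing API):

```swift
// Conflation via the standard library: .bufferingNewest(1) keeps only the
// most recent element whenever the consumer falls behind the producer.
let conflated = AsyncStream<Int>(bufferingPolicy: .bufferingNewest(1)) { continuation in
    Task {
        for value in 0..<100 {
            continuation.yield(value)                        // fast producer
            try? await Task.sleep(nanoseconds: 1_000_000)    // ~1 ms apart
        }
        continuation.finish()
    }
}

// Hypothetical shape for the "initial snapshot received" signal: widen the
// element type so the stream can mark the snapshot/live boundary explicitly.
enum SnapshotEvent<Element: Sendable>: Sendable {
    case element(Element)
    case initialSnapshotComplete
}

// Slow consumer: intermediate values are silently dropped, and each
// iteration sees only the newest buffered element.
for await value in conflated {
    print("received:", value)
    try? await Task.sleep(nanoseconds: 50_000_000)           // ~50 ms of "work"
}
```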

Buffering is definitely something that falls into that temporal transformation category. One early prototype for AsyncStream had an actor "delegate" as the way to control the buffering. That falls under the same theme as the suggestion from @JJJ for a more configurable throttle.

Ultimately we ended up feeling that was too complex for folks to have to deal with; but perhaps a lightweight buffering system that offloads the work to an actor might be a good move.
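
To make that concrete, here's a minimal sketch of the actor-offloading idea (`BoundedBuffer` and its drop-oldest policy are hypothetical, not a proposed API; the point is just that the actor serializes all buffer mutations):

```swift
// A hypothetical actor-backed buffer: producers and consumers on different
// tasks go through the actor, so the storage is never mutated concurrently.
actor BoundedBuffer<Element: Sendable> {
    private var storage: [Element] = []
    private let limit: Int

    init(limit: Int) {
        self.limit = limit
    }

    /// Appends an element, dropping the oldest once the limit is exceeded.
    /// With limit == 1 this degenerates into conflation (keep-latest).
    func push(_ element: Element) {
        storage.append(element)
        if storage.count > limit {
            storage.removeFirst()
        }
    }

    /// Removes and returns the oldest buffered element, if any.
    func pop() -> Element? {
        storage.isEmpty ? nil : storage.removeFirst()
    }
}
```

A real design would presumably make the policy pluggable (drop-oldest, drop-newest, conflate) rather than hard-coding it, which is where the configurability theme comes back in.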


I'd also be interested in sort of the inverse: playing out a static sequence, but with temporal control. My first thought was having a way to go "faster" for testing, but the flip side of that is to use the sequence playing out as a short-lived timer with varying intervals that could act as triggers for visual effects.

Everything I'm thinking of right now would be relatively short (3 to 5 elements in a sequence), but with variable timing between the elements: the first after 1 second, the second and third each 500 ms apart, and so on. But I could see this same kind of thing being useful in longer-span captures and playbacks of events, where the timing of the events is itself useful or interesting to capture.
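
Here's a rough sketch of the shape I have in mind; `TimedPlayback` is hypothetical, just pairing each element with a delay before it is yielded. The "go faster for testing" variant would presumably take an injected Clock instead of sleeping on the wall clock:

```swift
// A hypothetical timed-playback sequence: each element of a static schedule
// is yielded only after its associated delay has elapsed.
struct TimedPlayback<Element: Sendable>: AsyncSequence {
    let schedule: [(delay: Duration, element: Element)]

    struct AsyncIterator: AsyncIteratorProtocol {
        var remaining: ArraySlice<(delay: Duration, element: Element)>

        mutating func next() async -> Element? {
            guard let step = remaining.first else { return nil }
            remaining = remaining.dropFirst()
            do {
                try await Task.sleep(for: step.delay)
            } catch {
                return nil   // treat cancellation as the end of playback
            }
            return step.element
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        AsyncIterator(remaining: schedule[...])
    }
}

// The timing described above: first element after 1 second, then two more
// each 500 ms apart, acting as triggers for visual effects.
let effects = TimedPlayback(schedule: [
    (.seconds(1), "fade-in"),
    (.milliseconds(500), "pulse"),
    (.milliseconds(500), "fade-out"),
])

for await effect in effects {
    print("trigger:", effect)
}
```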

So a big thanks to everyone in this thread: y'all were definitely on my mind when we were working on the beginnings of this: Introducing Swift Async Algorithms

I think we have the start of something pretty cool, and hopefully, with this now open to the community, we can build out some of the other neat ideas discussed here.
