We have addressed all of the major issues with building the Swift Async Algorithms package. I'm sure folks are still quite interested in some of the commonly requested additional features, but one task remains: gathering feedback on what is already present to verify what we have. This release will be tagged 1.0.0-beta.1, and the beta will run for two weeks so we can gather feedback and address any remaining issues. Once it emerges from the 1.0 beta phase, we will tag 1.0.0, cut a release, and create a branch for updates to the 1.0 release. Any critical fixes will then land on the main branch after review and qualification from CI, and be picked into the 1.0 branch, thus cutting a 1.0.1 and so on.
That branching strategy leaves the main branch open for developing new features. There have previously been efforts to introduce features that share elements across multiple consumers, replay values, produce an enumerated async sequence, and more.
So to get to that point, I would encourage folks to test out the latest beta and post feedback via the issue tracker there.
A few notes so far on differences from the initial release:
- CI testing for Linux and macOS
- `debounce`, `buffer`, `combineLatest`, `interspersed`, `merge`, and `zip` now all have state machines that provide robust, consistent emissions without spawning a task per iteration
- `throttle` was removed due to some unfortunate behavioral complications
- all `AsyncIteratorProtocol` conformances have been marked as non-`Sendable`
- `AsyncChannel` and friends are now able to emit terminal events without awaiting
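As a rough illustration of that last point, here is a self-contained sketch using `AsyncStream` as a stand-in (this is not `AsyncChannel`'s actual implementation): the producer side can emit the terminal event synchronously, and the consumer simply observes termination.

```swift
// Sketch only: AsyncStream stands in for AsyncChannel here. The point
// being illustrated is that finish() is a synchronous call, so emitting
// the terminal event does not require an await on the producer side.
var continuation: AsyncStream<Int>.Continuation!
let stream = AsyncStream<Int> { continuation = $0 }

continuation.yield(1)
continuation.yield(2)
continuation.finish() // terminal event, no await needed

var collected: [Int] = []
for await value in stream {
    collected.append(value)
}
// collected == [1, 2]
```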
So the initial implementations of those operations used a loose system in which each algorithm manually handled each state transition itself. With a lot of work from @FranzBusch, @twittemb, and others, they now have internal state machines that are robust to transitions, catching a lot of the subtle edge cases that non-state-machine systems can exhibit.
The one big lesson we learned was that spawning a Task per call to next is quite expensive. Using a single task and managing demand and state across it is considerably more efficient: for memory, but also for execution time and throughput.
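To make the contrast concrete, here is a hypothetical micro-sketch (not the library's actual code) of the two shapes: a fresh child Task per element versus one long-lived producer task feeding an `AsyncStream`. Both compute the same result; the first pays task creation and joining costs on every element.

```swift
let count = 10_000

// Shape 1: a fresh Task per element. Every iteration pays the cost of
// spawning and then joining a child task.
func perElementTasks() async -> Int {
    var total = 0
    for i in 0..<count {
        total += await Task { i }.value
    }
    return total
}

// Shape 2: one long-lived producer task feeding an AsyncStream; the
// consumer just iterates, with no per-element task overhead.
func singleProducerTask() async -> Int {
    let stream = AsyncStream<Int> { continuation in
        Task {
            for i in 0..<count {
                continuation.yield(i)
            }
            continuation.finish()
        }
    }
    var total = 0
    for await i in stream {
        total += i
    }
    return total
}
```

Timing the two (for example with `ContinuousClock`) shows the per-element shape falling well behind as `count` grows, which matches the lesson described above.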
Yes, that's the part I was hoping to hear (well, not that it's inherently slow, but that you've lessened the use of it in the library). I've found the overhead of Tasks (spawning & joining) to be surprisingly high, to the point that it dominates execution time for many reasonable algorithms (e.g. asynchronously enumerating lines in an input stream). I don't know if any of the specific types named are ones that will benefit me right now, but nonetheless it's good to know they should be relatively free of this overhead (and hopefully the vanguard to more optimisation of other algorithms?).
Adding to this: I initially started to look at those algorithms because the initial Sendable constraints were too restrictive, requiring the iterators of the base asynchronous sequences to be Sendable. This was due to the Task being spawned on every next call.
The state machine approach is just something we have grown very fond of in the server ecosystem when implementing network protocols, and we now use this pattern extensively throughout the ecosystem. It also really shines in concurrent problem domains, since the state machine can be protected with a lock and all transitions can be exhaustively matched.
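A minimal, self-contained sketch of that pattern (the state, event, and type names here are illustrative, not the library's internals): the whole transition table is a single exhaustive switch guarded by a lock, so the compiler rejects any unhandled state/event pairing.

```swift
import Foundation

// Possible states of a hypothetical downstream consumer.
enum State: Equatable {
    case idle
    case awaitingValue
    case finished
}

// Events that drive transitions.
enum Event {
    case demand     // the consumer called next()
    case value      // upstream produced an element
    case terminate  // upstream finished
}

final class StateMachine {
    private let lock = NSLock()
    private var state: State = .idle

    // Applies an event under the lock and returns the resulting state,
    // or nil for a transition this sketch considers invalid.
    @discardableResult
    func handle(_ event: Event) -> State? {
        lock.lock()
        defer { lock.unlock() }
        switch (state, event) {
        case (.finished, _):
            return nil // already terminal; ignore everything else
        case (_, .terminate):
            state = .finished
        case (.idle, .demand):
            state = .awaitingValue
        case (.awaitingValue, .value):
            state = .idle
        case (.idle, .value), (.awaitingValue, .demand):
            return nil // orderings this sketch treats as invalid
        }
        return state
    }
}
```

Because the switch is over the `(state, event)` tuple with no `default` clause, adding a new state or event makes the compiler flag every transition that has not been considered, which is a large part of the pattern's appeal.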