Swift Async Algorithms 1.0.0-beta.1

We have addressed all of the major issues with building the Swift Async Algorithms package. Folks are no doubt still interested in some of the commonly requested additional features, but one task remains: gathering feedback on what is already present so we can verify it. This release is tagged 1.0.0-beta.1, and the beta period will run for two weeks to gather feedback and address any remaining issues. Once it emerges from the 1.0 beta phase, we will tag 1.0.0, cut a release, and create a branch for updates to the 1.0 release. Any critical fixes will be reviewed and qualified by CI on the main branch and then picked into the 1.0 branch, cutting a 1.0.1 and so on.

That branching strategy leaves the main branch open for new feature development. There have previously been efforts to introduce features that share elements across multiple consumers, replay values, produce an enumerated async sequence, and more.

So, to get to that point, I would encourage folks to try out the latest beta and post feedback via the issue tracker there.

A couple of notes on differences from the initial release so far:

- CI testing for Linux and macOS :tada:
- `debounce`, `buffer`, `combineLatest`, `interspersed`, `merge`, and `zip` all now have state machines that provide robust and consistent emissions without spawning a task per iteration
- `throttle` was removed due to some unfortunate behavioral complications
- all `AsyncIteratorProtocol` conformances have been marked as non-`Sendable`
- `AsyncChannel` and friends are now able to emit terminal events without awaiting
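To illustrate the last point, here is a small sketch of what the non-awaiting termination looks like in practice. This assumes the swift-async-algorithms package is available and that `finish()` is the synchronous terminal call; `send(_:)` still suspends until a consumer picks the value up.

```swift
import AsyncAlgorithms

let channel = AsyncChannel<Int>()

Task {
    await channel.send(1)
    await channel.send(2)
    channel.finish() // terminal event: no await needed as of this beta
}

for await value in channel {
    print(value)
}
```

Because `finish()` does not await, a producer can signal termination even from a context where suspending is inconvenient, such as a deinit or a synchronous callback.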


Can you elaborate a little more on this? I want to get quite excited about this, but I want to be sure it means what I hope it means, first. :slightly_smiling_face:

So the initial implementations of those operations used a loose system in which each algorithm handled each state transition manually. With a lot of work from @FranzBusch, @twittemb, and others, they now have internal state machines that are robust to transitions, catching many of the subtle edge cases that non-state-machine systems can exhibit.

The one big lesson we learned was that spawning a Task per call to next is quite expensive. Using a single task and managing demand and state across it is considerably more efficient, both in memory and in execution time and throughput.


Yes, that's the part I was hoping to hear (well, not that it's inherently slow, but that you've lessened the use of it in the library). I've found the overhead of Tasks (spawning & joining) to be surprisingly high, to the point that it dominates execution time for many reasonable algorithms (e.g. asynchronously enumerating lines in an input stream). I don't know if any of the specific types named are ones that will benefit me right now, but nonetheless it's good to know they should be relatively free of this overhead (and hopefully the vanguard to more optimisation of other algorithms?).

Adding to this: I initially started looking at those algorithms because the initial Sendable constraints were too restrictive and required the iterators of the base asynchronous sequences to be Sendable. This was due to the Task being spawned on every call to next.

The state machine approach is just something we have grown very fond of in the server ecosystem when implementing network protocols and we are using this pattern nowadays extensively throughout the ecosystem. It also really shines in concurrent problem domains since the state machine can be protected with a lock and all transitions can be exhaustively matched.
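For readers unfamiliar with the pattern, here is a minimal sketch of a lock-protected state machine of the kind described above. It is not the library's actual internals; the type and its states are illustrative. All state lives in one enum, every transition is an exhaustive switch (so an unhandled combination fails to compile), and a lock guards each transition. Returning `nil` from `next()` stands in for suspending the consumer; a real implementation would park a continuation instead.

```swift
import Foundation

// Illustrative lock-protected state machine coordinating one producer
// and one consumer. Not the library's real implementation.
final class DemandStateMachine<Element> {
    private enum State {
        case idle                 // nothing buffered, no consumer waiting
        case awaitingValue        // consumer is waiting for a value
        case buffered(Element)    // producer left a value behind
        case finished             // terminal; nothing more will be delivered
    }

    // Tells the producer what to do after a send; exhaustively matched below.
    enum SendAction { case deliver, buffer, drop }

    private let lock = NSLock()
    private var state: State = .idle

    func send(_ element: Element) -> SendAction {
        lock.lock(); defer { lock.unlock() }
        switch state {
        case .idle:
            state = .buffered(element)
            return .buffer
        case .awaitingValue:
            state = .idle
            return .deliver // hand the element to the waiting consumer
        case .buffered:
            state = .buffered(element) // overwrite-latest policy, chosen for brevity
            return .buffer
        case .finished:
            return .drop
        }
    }

    // Consumer side: take a buffered value, or record demand and return nil
    // (a stand-in for suspending on a continuation).
    func next() -> Element? {
        lock.lock(); defer { lock.unlock() }
        switch state {
        case .buffered(let element):
            state = .idle
            return element
        case .idle:
            state = .awaitingValue
            return nil
        case .awaitingValue, .finished:
            return nil
        }
    }

    func finish() {
        lock.lock(); defer { lock.unlock() }
        state = .finished
    }
}
```

The payoff of the exhaustive switch is that adding a new state forces every transition site to be revisited by the compiler, which is exactly what makes the concurrent edge cases tractable.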


The remaining usage of unstructured concurrency in AsyncAlgorithms is still something that I would love to solve, but the current protocol shape doesn’t allow it. Some more context here


Just to clarify, was this due to the discussion in Surprising semantics of throttled sequences or something additional? Is the desire to get it back in the future?

Is this temporarily during the beta or for the 1.0 scope?

throttle is still there, but we made it underscored until we figure out what the correct semantics are. You can still access it via `_throttle`.
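For anyone who wants to keep using it in the meantime, usage looks roughly like this. The parameter labels here are from my reading of the package and may differ between versions, so check the underscored declaration in your copy before relying on them.

```swift
import AsyncAlgorithms

// `.async` is the package's Sequence-to-AsyncSequence adapter.
let numbers = (1...10).async

for await value in numbers._throttle(for: .seconds(1), latest: true) {
    print(value)
}
```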

Our plan is to keep it like that for the 1.0.0 release as well, because we really want to unblock the ecosystem first and don't want to hold everyone back over the semantics of throttle.

Personally, I don't think we need to change the API but just the semantics, and we didn't want to make a semantic change to a public algorithm after releasing a major version of the package.