Winding down to 1.0 for Swift Async Algorithms

We have made great progress with Swift Async Algorithms. So far there has been some excellent discussion, a number of bug fixes, and overall refinement of where we initially started. Just recently I finished one of the last proposals from the initial introduction, and we have just a few bits of housekeeping left to wrap up.

I know there is definitely some excitement to move forward with additional features like sharing/broadcasting, withLatestFrom, deferred, or buffering variations on AsyncChannel. To get to that point there are a few remaining tasks:

  1. Get CI up and rolling for macOS.
  2. Wrap up the remaining proposals, particularly gathering the decision notes for each one.
  3. Audit Sendable conformances - done :tada:
  4. Strict concurrency mode clean - done :tada:

Along this journey we have picked up a few key tidbits of knowledge. In my view they have solidified into guidelines that form a basis of rules that all AsyncSequence types should follow.

Iteration MUST eventually account for cancellation, either by returning nil or throwing from next. This means that iteration of an AsyncSequence must be responsive to cooperative task cancellation, but the cost of checking for cancellation or setting up a cancellation handler does not need to be paid on every cycle of iteration. Instead, it can be paid where it makes sense, particularly at suspension points.
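Here is a minimal sketch (an illustration, not code from the package) of an iterator whose next honors cooperative cancellation at its suspension point by translating a CancellationError into nil:

```swift
struct Ticker: AsyncSequence {
    typealias Element = Int

    struct Iterator: AsyncIteratorProtocol {
        var count = 0

        mutating func next() async -> Int? {
            // Task.sleep throws CancellationError when the task is cancelled;
            // translating that into nil ends iteration gracefully.
            do {
                try await Task.sleep(nanoseconds: 1_000_000_000)
            } catch {
                return nil
            }
            count += 1
            return count
        }
    }

    func makeAsyncIterator() -> Iterator { Iterator() }
}
```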

Once an iterator has reached a terminal state, subsequent calls to next MUST return nil. This behavior ensures that iterating past the end of an AsyncSequence produces consistent values.
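As an illustration (again a sketch, not the package's code), a terminal flag ensures that calling next after the sequence has finished keeps returning nil rather than resuming work:

```swift
struct Countdown: AsyncSequence {
    typealias Element = Int
    let start: Int

    struct Iterator: AsyncIteratorProtocol {
        var remaining: Int
        var finished = false

        mutating func next() async -> Int? {
            // Past-the-end calls stay nil once the terminal state is reached.
            if finished { return nil }
            guard remaining > 0 else {
                finished = true
                return nil
            }
            defer { remaining -= 1 }
            return remaining
        }
    }

    func makeAsyncIterator() -> Iterator { Iterator(remaining: start) }
}
```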

AsyncSequence types MAY be Sendable. This is often due to the fundamental behavior that an AsyncSequence is a description of how to produce values and not, per se, the thing that actually does the work. Instead the iterator is the type that represents the work to be done, specifically the call to next. Consequently it is uncommon for an AsyncIteratorProtocol-conforming type to be Sendable, since it is expected to be consumed on the task it was created on.
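A hedged illustration of that split: the sequence is an immutable, shareable description of the work, while the iterator carries the mutable iteration state and is meant to be consumed on the task that created it:

```swift
struct Numbers: AsyncSequence, Sendable {
    typealias Element = Int
    let limit: Int   // immutable description of the work: safe to share across tasks

    struct Iterator: AsyncIteratorProtocol {
        // Mutable per-iteration state lives here, intended for a single task.
        var current = 0
        let limit: Int

        mutating func next() async -> Int? {
            guard current < limit else { return nil }
            defer { current += 1 }
            return current
        }
    }

    func makeAsyncIterator() -> Iterator { Iterator(limit: limit) }
}
```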

Beyond general rules, there are a number of design patterns and best practices that were discovered along the way. Chief among them: the state of an iterator should often be modeled with state machine transitions, such that any given iteration uses one singular task for any "child"-like parallelism. This means that spawning a task for each call to next is inefficient and leads to potential design flaws. There is some discussion that this last point perhaps indicates an area of potential improvement in the structured concurrency model.

A design pattern that has definitely made a strong showing is factoring out a state machine that identifies the exact state of each algorithm. This has worked rather well for optimizing merge, zip, and others. My guess is that this pattern will be used heavily for other algorithms to come.
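For the sake of illustration, a factored-out state machine can look roughly like the enum below. The names are hypothetical and far simpler than the real merge implementation, but the shape is the same: every event becomes one well-defined transition.

```swift
// Illustrative only; not the package's actual implementation.
enum MergeState {
    case idle                 // no outstanding demand
    case awaitingBoth         // both upstreams have been asked for a value
    case awaitingFirst        // only the first upstream is still producing
    case awaitingSecond       // only the second upstream is still producing
    case terminal             // all upstreams finished, or one of them threw

    // Example transition: the first upstream has finished.
    mutating func firstUpstreamFinished() {
        switch self {
        case .awaitingBoth:
            self = .awaitingSecond
        case .awaitingFirst:
            self = .terminal
        default:
            break
        }
    }
}
```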

Moving forward we will need to remain vigilant to keep changes from being API- or ABI-breaking. To that end, each review will run for at least one week of discussion, after which a decision will be made to include, include with alterations, or decline the proposed feature.

In conclusion, we need to focus on the remaining tasks to get to 1.0 and hold off just a bit on potential additions, so we can reach a point at which we feel the exposed surface area is reliable, stable, and available for any performance optimization we feel is needed. So the burning question some may have is "Where can I pitch in to make this happen?" First and foremost, discussion of the already implemented algorithms that are under review is key; talking through exactly how they can be used, or where base functionality is missing, is critical to ensuring that what we have so far is of the utmost quality. Second, eyes on finding bits that may have fallen through the cracks, for example making sure the documentation is congruent and consistent with the agreed-upon behavior. And perhaps most importantly, helping to determine the metric we can use to qualify what makes a 1.0 or the next dot release.


Are there any plans to set up CI for Linux as well, before 1.0 is tagged?

Great news, well done everyone. Excited this is coming to 1.0.

I do have a question mark over this one. Certainly, for back-pressure-supporting algorithms in the multicast family, there is a need to share the iterator somehow.

Whether you do it directly, by only supporting iterators explicitly designed for sharing (Sendable and perhaps with a re-entrancy-proof next), or indirectly, by wrapping the iterator in some kind of protected structure (like an actor or a Task-wrapping type), you end up sharing the iterator amongst Tasks.
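To make the indirect approach concrete, here is a minimal sketch of protecting an iterator behind an actor so multiple tasks can pull from it. SharedIterator is a hypothetical name, not a type in swift-async-algorithms:

```swift
actor SharedIterator<Base: AsyncSequence & Sendable> {
    private var iterator: Base.AsyncIterator

    init(_ base: Base) {
        iterator = base.makeAsyncIterator()
    }

    // Calls are serialized by the actor, so concurrent consumers never
    // re-enter next() on the underlying iterator.
    func next() async throws -> Base.Element? {
        try await iterator.next()
    }
}
```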

I guess I'm saying iterators and Tasks aren't always 1:1.

For multicast algorithms it would be great if we could avoid wrapping an iterator when it isn't necessary, i.e. when we know the iterator is Sendable with a re-entrancy-safe next, but I don't know how to express that in Swift. You'd possibly need some kind of @unchecked AsyncAtomicIterator conformance.

Perhaps one to discuss down the line. :slight_smile:

Linux CI already works


Ah, apologies for the wrong assumption. I wasn't familiar with the current CI setup. Great to see this getting closer to 1.0!

Hi @Philippe_Hausler

Do you have a deadline in mind for 1.0?

Thx.