We have made great progress with Swift Async Algorithms. So far there has been some really great discussion, bug fixes, and overall refinement of where we initially started. Just recently I finished one of the last proposals from the initial introduction, and we have just a few bits of housekeeping to wrap up.
I know there is definitely some excitement to move forward with additional features like sharing/broadcasting, withLatestFrom, deferred, or buffering variations on AsyncChannel. To get to that point there are a few remaining tasks:
- Get CI up and rolling for macOS.
- Wrap up the remaining proposals, particularly gathering the decision notes for each one.
- Audit Sendable conformances - done
- Strict concurrency mode clean - done
Along this journey we have determined a few key tidbits of knowledge. In my view they have solidified into guidelines that form a basis of rules that all AsyncSequence types should follow.
Iteration MUST eventually account for cancellation, either by returning nil or by throwing from next. This means that iteration of an AsyncSequence must be responsive to cooperative task cancellation, but the cost of checking for cancellation or setting up a cancellation handler does not need to be paid on each cycle of iteration. Instead, it can be paid where it makes sense, particularly at suspension points.
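As a minimal sketch of this first rule (the Ticks type here is hypothetical, not part of the package), an iterator can check for cooperative cancellation right at its suspension point and terminate by returning nil:

```swift
// Hypothetical sequence that counts down, pausing between elements.
struct Ticks: AsyncSequence {
    typealias Element = Int
    let count: Int

    struct AsyncIterator: AsyncIteratorProtocol {
        var remaining: Int

        mutating func next() async -> Int? {
            // Returning nil here accounts for cooperative cancellation;
            // the check is paid once per suspension, not scattered around.
            guard !Task.isCancelled, remaining > 0 else { return nil }
            remaining -= 1
            // The suspension point is where cancellation responsiveness matters.
            try? await Task.sleep(nanoseconds: 1_000_000)
            return remaining
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        AsyncIterator(remaining: count)
    }
}
```

When the surrounding task is cancelled, the next call to next observes Task.isCancelled and finishes the iteration cleanly.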
Iterating past the end of a terminal state of an iterator MUST return nil from subsequent calls to next. This behavior ensures consistent values are produced from the mutation of an iterator after it reaches a terminal state.
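This terminal-state rule can be sketched with a hypothetical single-value sequence: once the stored value has been vended, every later call to next falls through to nil.

```swift
// Hypothetical sequence that produces exactly one element.
struct OneShot: AsyncSequence {
    typealias Element = String
    let value: String

    struct AsyncIterator: AsyncIteratorProtocol {
        var value: String?

        mutating func next() async -> String? {
            // Clearing the stored value marks the terminal state;
            // all subsequent calls return nil, never a stale element.
            defer { value = nil }
            return value
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        AsyncIterator(value: value)
    }
}
```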
AsyncSequence types MAY be Sendable. This is often possible because an AsyncSequence is a description of how to produce values and not, per se, the thing that actually does the work. Instead, the iterator is the type that represents the work to be done, specifically the call to next. Consequently, it is uncommon for an AsyncIteratorProtocol-conforming type to be Sendable, since it is expected to be consumed on the task it is created on.
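A hedged illustration of that split (Countdown is an invented name): the sequence is just a recipe, so marking it Sendable is safe, while the iterator carries the in-flight mutable state and is meant to be consumed on the task that created it.

```swift
// The sequence only describes how to produce values, so it can be Sendable.
struct Countdown: AsyncSequence, Sendable {
    typealias Element = Int
    let start: Int

    // The iterator holds the mutable iteration state; it is expected to
    // stay on the task that created it rather than cross task boundaries.
    struct AsyncIterator: AsyncIteratorProtocol {
        var current: Int

        mutating func next() async -> Int? {
            guard current > 0 else { return nil }
            defer { current -= 1 }
            return current
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        AsyncIterator(current: start)
    }
}
```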
Beyond general rules, there are a number of design patterns and best practices that were discovered along the way. Notably, the state of an iterator should often be modeled with state machine transitions, such that any given iteration uses one singular task for any "child"-like parallelism. This means that spawning a task for each call to next is inefficient and leads to potential design flaws. There is discussion that this last point perhaps indicates an area of potential improvement in the structured concurrency model.
A design pattern that has definitely made a strong showing is to factor out state machines that identify the exact state of each algorithm. This has worked rather well for optimizing zip and others. My guess is that this pattern will be used heavily for other algorithms to come.
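As a rough sketch of the state-machine pattern (Framed and its phases are illustrative, not from the package), the iterator's phases can be modeled as an enum so each call to next performs one well-defined transition, and the terminal state naturally keeps returning nil:

```swift
// Hypothetical sequence that emits a header, then its items, then finishes.
struct Framed: AsyncSequence {
    typealias Element = String
    let items: [String]

    struct AsyncIterator: AsyncIteratorProtocol {
        enum State {
            case header
            case emitting(Int)
            case terminal
        }

        let items: [String]
        var state: State = .header

        mutating func next() async -> String? {
            switch state {
            case .header:
                // One transition per call: header -> emitting (or terminal).
                state = items.isEmpty ? .terminal : .emitting(0)
                return "begin"
            case .emitting(let index):
                let upcoming = index + 1
                state = upcoming < items.count ? .emitting(upcoming) : .terminal
                return items[index]
            case .terminal:
                // Iterating past the terminal state keeps returning nil.
                return nil
            }
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        AsyncIterator(items: items)
    }
}
```

Centralizing the transitions in one switch makes the algorithm's states auditable, which is the property that made this pattern useful for optimization.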
Moving forward we will need to remain vigilant to ensure that changes are not API- or ABI-breaking. To that end, reviews will run for at least one week of discussion, and a decision will then be made to include, include with alteration, or decline the proposed feature.
In conclusion, we need to focus on the remaining tasks to get to 1.0 and hold off just a bit on the potential additions, so we can get to a point at which the exposed surface area is reliable, stable, and available for whatever performance optimization we feel is needed. So the burning question some may have is "Where can I pitch in to make this happen?". First and foremost, discussion of the already implemented algorithms under review is key; talking through exactly how they can be used, or where base functionality is missing, is critical to ensuring what we have so far is of the utmost quality. Second, eyes on finding bits that may have fallen through the cracks, for example, making sure documentation is congruent and consistent with the agreed-upon behavior. And perhaps most importantly, helping to determine the metric we can use to qualify what makes a 1.0 or the next dot release.