Rethinking Async

Before the official concurrency proposal is presented, I wanted to quickly sketch what I had in mind on this topic:

Why not use Combine streams as channels between entities called Reactors? More and more APIs are getting a publisher method, so a concurrency proposal could build on that.

Reactors would have input and output ports which could be connected by streams. Once data arrives on an input port, a reactor would proceed in its control flow, possibly sending data to its output ports, until it reaches a point where it waits for the next reaction.
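To make this more concrete, here is a minimal sketch of a reactor whose ports are modeled as Combine subjects. The names DoublingReactor, input, and output are made up for illustration; this is just one way the port idea could be expressed, not part of any proposal.

```swift
import Combine

// A toy reactor: each value arriving on the input port triggers one
// "reaction" that emits a value on the output port.
final class DoublingReactor {
    let input = PassthroughSubject<Int, Never>()   // input port
    let output = PassthroughSubject<Int, Never>()  // output port
    private var subscriptions = Set<AnyCancellable>()

    init() {
        let out = output
        input
            .map { $0 * 2 }            // the reactor's "reaction"
            .sink { out.send($0) }
            .store(in: &subscriptions)
    }
}

// Two reactors connected by a stream: output port wired to input port.
let a = DoublingReactor()
let b = DoublingReactor()
var results: [Int] = []
var wires = Set<AnyCancellable>()
a.output.sink { b.input.send($0) }.store(in: &wires)
b.output.sink { results.append($0) }.store(in: &wires)
a.input.send(3)
// results == [12]
```

Because PassthroughSubject delivers values synchronously to its subscribers, the whole chain reacts within the single send call.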

This step-wise processing is supported by specialized functions called activities, which allow waiting for the next step via the await statement. Normal functions and methods can be called from activities, but not the other way around.

Besides the capability to await the next instant, activities also support concurrent control flows and preemption, as in other imperative synchronous languages. The causality follows the Sequentially Constructive model and thus allows memory to be used as a synchronization mechanism between concurrent trails (as opposed to using signals for synchronization, as in Esterel).

In addition to the port-based data interface, reactors might also offer a functional API through service methods. These methods would also have a stream-based signature (at least for the return value) and would internally spawn new reactors on each invocation to do the processing. The result of such a service call would be handled by a special statement available in activities called receive.
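A service method with a stream-based return value could look roughly like the sketch below. QuoteService and history are hypothetical names; Deferred stands in for the per-invocation reactor the post describes, delaying the work until a subscriber attaches so that every call gets its own independent processing.

```swift
import Combine

// Hypothetical service with a stream-based signature for the return value.
struct QuoteService {
    // Returns a fresh stream per invocation; in the reactor model this
    // would spawn a new reactor, here we just replay canned values.
    func history(for symbol: String) -> AnyPublisher<Float, Never> {
        Deferred {
            [101.0, 102.5, 99.8].publisher.map { Float($0) }
        }
        .eraseToAnyPublisher()
    }
}

var received: [Float] = []
let token = QuoteService().history(for: "ACME")
    .sink { received.append($0) }
// received == [101.0, 102.5, 99.8]
```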

So, the general idea is that of GALS (globally asynchronous, locally synchronous): locally synchronous reactors which are asynchronously connected via Combine streams.

Mkay… I did, but what am I supposed to notice about that commit that relates to my post?


That assumes that Apple will make Combine open source, or that the core team invests resources in something like OpenCombine, at least for Linux/Windows.

The important point here is that the asynchronous mechanism is not restricted to a single value (as with simple async/await) but allows sending and receiving a stream of values.

Combine would be compatible with this multi-await approach but at the core a coroutine implementation could be present.

Note also how nicely Combine's backpressure flow control matches the step-wise computation of synchronous programs. When awaiting the next step, receive statements would signal a new demand of 1, thereby controlling the rate of data producers.
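The demand-of-1 idea maps directly onto Combine's Subscriber protocol. The sketch below (StepSubscriber is a made-up name) models a step-wise activity: it requests exactly one value per "step" and signals a demand of 1 again after each reaction, so the producer's rate is controlled by the consumer.

```swift
import Combine

// A custom Subscriber that consumes a stream one step at a time.
final class StepSubscriber: Subscriber {
    typealias Input = Int
    typealias Failure = Never
    private var subscription: Subscription?
    private(set) var steps: [Int] = []

    func receive(subscription: Subscription) {
        self.subscription = subscription
        subscription.request(.max(1))   // demand for the first step
    }

    func receive(_ input: Int) -> Subscribers.Demand {
        steps.append(input)             // one reaction per value
        return .max(1)                  // "await" the next step: demand 1 more
    }

    func receive(completion: Subscribers.Completion<Never>) {
        subscription = nil
    }
}

let stepper = StepSubscriber()
(1...3).publisher.subscribe(stepper)
// stepper.steps == [1, 2, 3]
```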

That's all well and good, but how does Combine get introduced into the Swift language without Apple open-sourcing it? And, that decision is not up to the Swift Core Team. The Apple members may advise their management, but, it's up to Apple management to make that decision.

Maybe a small example can help illustrate the idea of using functions that can yield multiple times (like functions returning a Publisher, or a generator, or a coroutine) from imperative synchronous activities.

Let's assume we have this function, which returns count stock quotes for a ticker symbol:

func stockPrice(symbol: String, count: Int? = nil) -> AnyPublisher<Float, UnknownSymbol>

BTW, an alternative signature for that function might look something like this:

func stockPrice(symbol: String, count: Int? = nil) async(price: Float) throws
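For experimentation, a toy implementation of the Combine-based signature could look like this. The UnknownSymbol error type, the "ACME" symbol, and the canned prices are all made up for illustration; a real implementation would fetch live quotes.

```swift
import Combine

struct UnknownSymbol: Error {}

// Stand-in implementation: replays canned quotes for a known symbol,
// fails for anything else.
func stockPrice(symbol: String, count: Int? = nil) -> AnyPublisher<Float, UnknownSymbol> {
    guard symbol == "ACME" else {
        return Fail<Float, UnknownSymbol>(error: UnknownSymbol())
            .eraseToAnyPublisher()
    }
    let quotes: [Float] = [10.0, 10.5, 11.2, 12.8]
    let limited = count.map { Array(quotes.prefix($0)) } ?? quotes
    return limited.publisher
        .setFailureType(to: UnknownSymbol.self)
        .eraseToAnyPublisher()
}

var prices: [Float] = []
let token = stockPrice(symbol: "ACME", count: 2)
    .sink(receiveCompletion: { _ in }, receiveValue: { prices.append($0) })
// prices == [10.0, 10.5]
```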

Now, this function is used in an activity which should print out the stock price a given number of times or until a maximum price is reached:

01 act printStockPrice(symbol: String, count: Int, maxPrice: Float) {
02   print("print stock price for: \(symbol)")
03   var price: Float
04   cobegin {
05     receive(price) stockPrice(symbol: symbol, count: count)
06     print("exceeded count")
07   }
08   with {
09     repeat {
10       await true
11       print("quote: \(price)")
12     } while price < maxPrice
13     print("exceeded maxPrice")
14   }
15   print("done")
16 }

First, the activity proceeds like in a normal function with a statement to print the symbol (line 2).

The call to the asynchronous stockPrice function happens in line 5 via the receive keyword, which allows binding a variable to the values generated by stockPrice.

As the control-flow stays at this statement, handling of a received value has to happen in a concurrent trail:

The block introduced by the cobegin keyword in line 4 starts a first trail (in which receive is called), and the with keyword in line 8 starts a second trail where the printing of the quote values happens.

The second trail consists of a loop (lines 9-12) which first hits the statement await true in line 10. This stops the trail until the whole activity is triggered to react again, which happens once a new value from the stockPrice publisher is received and bound to the variable price in line 5. The print statement in line 11 then prints the value, and the loop either continues or breaks depending on the condition in line 12.

When either trail finishes, the other will be preempted (weakly). When, for example, the condition in line 12 causes the second trail to finish, the first trail will be preempted, causing the subscription to the publisher set up in the receive statement of line 5 to be canceled.

Synchronous activities thus nicely allow the imperative processing of functional reactive streams.
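For comparison, the behavior of the printStockPrice activity can be roughly approximated with plain Combine, though without the explicit trails and preemption semantics. In this sketch (all names are made up, and a PassthroughSubject stands in for the stockPrice publisher), the count limit plays the role of the first trail and the maxPrice check the role of the second; whichever triggers first "preempts" the stream by completing or canceling the subscription.

```swift
import Combine

let quotes = PassthroughSubject<Float, Never>()
let count = 4
let maxPrice: Float = 11.0
var log: [String] = []
var subscription: AnyCancellable?

subscription = quotes
    .prefix(count)   // first trail: completes after `count` values
    .sink(receiveCompletion: { _ in log.append("exceeded count") },
          receiveValue: { price in
              log.append("quote: \(price)")
              if price >= maxPrice {      // second trail's condition
                  log.append("exceeded maxPrice")
                  subscription?.cancel()  // weak preemption: cancel the producer
              }
          })

// Values sent after cancellation are ignored, like a preempted trail.
for price in [Float(10.0), 10.5, 11.2, 12.8] {
    quotes.send(price)
}
// log == ["quote: 10.0", "quote: 10.5", "quote: 11.2", "exceeded maxPrice"]
```

Note that unlike the activity, this version has no single place where both exit conditions rejoin; the cobegin/with construct expresses that rendezvous directly.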

I know you said in a few weeks... but it has been a couple of weeks, and I just want to make sure I haven't missed any preliminary information being available.

There's none (yet) on these forums, to say the least.

Sorry, I overestimated how much time I'd be able to put into the design document. It'll be another week or more. There's a lot of internal and core-team review we want to do to make sure we have some broad acceptance of the approach.


What it may want to say is: as async is being added to master right now (like this), your concern is not seen as an immediate blocker for proceeding with this important and long overdue topic.

How would all of this work with extensions? Would you be able to create an extension of an array where elements are async?

I have already run into issues around not being able to declare an extension of a generic type where a generic argument is optional (since there is no Optional protocol one could use to constrain a generic argument). With Optional, though, you could always create a protocol that you conditionally conform Optional to, and then constrain based on that protocol.
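The workaround mentioned above can be sketched as follows. OptionalProtocol and compacted are made-up names for illustration: declare a protocol, conform Optional to it, and constrain the extension on that protocol.

```swift
// There is no built-in protocol for Optional, so declare one ourselves.
protocol OptionalProtocol {
    associatedtype Wrapped
    var asOptional: Wrapped? { get }
}

extension Optional: OptionalProtocol {
    var asOptional: Wrapped? { self }
}

// Now Array can be extended where its Element is "some Optional".
extension Array where Element: OptionalProtocol {
    /// Drops nil elements, keeping the wrapped values.
    func compacted() -> [Element.Wrapped] {
        compactMap { $0.asOptional }
    }
}

let xs: [Int?] = [1, nil, 2, nil, 3]
// xs.compacted() == [1, 2, 3]
```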

@John_McCall is there an update for the design document for concurrency? I've been seeing a lot of interesting PRs and would love to see the big picture that's being designed behind the scenes!


Sorry, it's been slow going. We have a really densely-detailed design document that's been useful for talking through the design but probably isn't the right way to introduce the community to our vision. On the other hand, incrementally leaking details through PRs isn't the best approach, either. :) We'll see what we can do.


I’d love to see something in the form of a blog post as an introduction to the pitch/proposal. This post could show a “before” and “after” example from something in an existing code base, a summary of the problems the language feature is trying to solve and the principles underlying the design.


For those interested in the topic, here is a bit of background on the ideas of sequentially constructive synchronous programming, explained on the basis of the Blech programming language:

Let's give the core team some more time to fine-tune the concurrency PR.

It's a big feature, and it's hard to design a perfect solution for all the problems we currently have.

So, be patient with this big one.


@John_McCall Are generator functions going to be introduced as part of the new concurrency mechanism? I understand a feature like that requires a vast amount of time, but if you can, it would be great to share a rough concept of what you have in mind and are trying to implement, and also how it's going to be aligned with the review process. I'm sure it's not only me who keeps watching pull requests with the [Concurrency] tag, and as you've said, that's not the best approach. We appreciate your work and time, but a little bit of clarity would help :)

This is from the "Swift Evolution Process" doc:

  • Socialize the idea: propose a rough sketch of the idea in the "pitches" section of the Swift forums, the problems it solves, what the solution looks like, etc., to gauge interest from the community.
  • Develop the proposal: expand the rough sketch into a complete proposal, using the proposal template, and continue to refine the proposal on the forums. Prototyping an implementation and its uses along with the proposal is required because it helps ensure both technical feasibility of the proposal as well as validating that the proposal solves the problems it is meant to solve.

So the async implementation under way doesn't quite follow the standard process - but then, I am happy to see any substantial movement here, and I can see that Apple wants to optimize this important mechanism for their vast number of APIs.


Cross-reference to an example of a user being bitten by this problem


BTW - for completeness - this is a little DSL for playing around with synchronous programming ideas in Swift.
