Rethinking Async

But of course.

Still, the specification can't change behind the scenes, and different specifications differ in flexibility. A coroutine-based implementation would have a hard time returning an incomplete result without some kind of Future object. So when I saw async Int and coroutine in the same sentence, it intrigued me. If it's just a future with a state machine, well, so be it; I figured as much. It's still a valid design, though.
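To make the "future with a state machine" reading concrete, here is a minimal, non-thread-safe sketch. All names are hypothetical illustrations for this thread, not any proposed API:

```swift
// A minimal Future whose internal state machine either holds pending
// callbacks or an already-delivered value. Deliberately not thread-safe;
// a real implementation would need synchronization.
final class Future<Value> {
    private enum State {
        case pending([(Value) -> Void])   // callbacks waiting for the value
        case resolved(Value)              // value already produced
    }
    private var state = State.pending([])

    // Called by the producer (e.g. the coroutine) when the result is ready.
    func resolve(_ value: Value) {
        guard case .pending(let callbacks) = state else { return }
        state = .resolved(value)
        callbacks.forEach { $0(value) }
    }

    // Called by the consumer; fires immediately if already resolved.
    func onValue(_ callback: @escaping (Value) -> Void) {
        switch state {
        case .resolved(let value): callback(value)
        case .pending(let callbacks): state = .pending(callbacks + [callback])
        }
    }
}
```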

In any case, I'm not sure having async Value around is a good idea. Especially if we're concerned about execution context.


Not much of an improvement really, now you just have asynchronous calls littered in your control flow instead of wrapped nicely in closures.

There seems to be a pretty fundamental disconnect here so I think it would be good to talk it through. Why do you think having async calls "littered" throughout the code is a bad thing? And why is having a series of (possibly nested) closures a good thing? What are the pros and cons of the two approaches in your mind? Do you actually find the closures version easier to write, understand, and maintain? If so then why do you think that is true?
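To make the comparison concrete, here is the same three-step flow in both styles. Everything here is a hypothetical stand-in (stub types and functions, not real networking calls):

```swift
struct User {}
struct Profile { let avatarURL: String }
struct Image {}

// Completion-handler stubs (stand-ins for real asynchronous work).
func fetchUser(_ done: @escaping (User) -> Void) { done(User()) }
func fetchProfile(for user: User, _ done: @escaping (Profile) -> Void) { done(Profile(avatarURL: "example")) }
func fetchImage(at url: String, _ done: @escaping (Image) -> Void) { done(Image()) }

// Closure style: each asynchronous step nests one level deeper,
// and control flow (loops, early returns, error paths) gets awkward.
func loadAvatarNested(_ completion: @escaping (Image) -> Void) {
    fetchUser { user in
        fetchProfile(for: user) { profile in
            fetchImage(at: profile.avatarURL) { image in
                completion(image)
            }
        }
    }
}

// async/await style: the same steps read top to bottom,
// each suspension point explicitly marked with `await`.
func fetchUser() async -> User { User() }
func fetchProfile(for user: User) async -> Profile { Profile(avatarURL: "example") }
func fetchImage(at url: String) async -> Image { Image() }

func loadAvatar() async -> Image {
    let user = await fetchUser()
    let profile = await fetchProfile(for: user)
    return await fetchImage(at: profile.avatarURL)
}
```

The "littered" await markers are exactly the information the nested version hides: every point where execution may suspend is visible in the straight-line code.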


From my point of view there's no need for either of these:

async let x: Int
let x: async Int

Let's use common sense here: the behavior is just like throws with try.

let x: Int = await someFunc()

And that's it; I don't think this needs any more language shenanigans to create the semantics.
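The analogy can be shown side by side (riskyFunc and someFunc are hypothetical placeholders):

```swift
// A throwing function must be called with `try`; by analogy, an async
// function must be called with `await`. The effect is declared on the
// function; the marker appears at the call site.
func riskyFunc() throws -> Int { 42 }
func someFunc() async -> Int { 42 }

func caller() async throws {
    let a: Int = try riskyFunc()    // `throws` pairs with `try`
    let b: Int = await someFunc()   // `async` pairs with `await`
    print(a + b)
}
```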
Another topic is how something gets to be considered asynchronous in the first place. We have several mechanisms today: DispatchWorkItem, DispatchQueue, semaphores, mutexes, spinlocks, and so on, all used to build concurrency models. I think this is another big topic that's been left behind in the discussion.
We should also take a look at the Kotlin concurrency model used for coroutines and learn from the path they've already walked.


I agree with @Chris_Lattner3 here and would like to reiterate the importance of cancellation. Libdill is another amazing concurrency library, which introduces the concept of structured concurrency. Venice is a library that wraps libdill and provides the same mechanisms in Swift. One thing it misses is a way to mark functions that can yield, so that the user always knows which paths the execution might take. Five years ago, before Swift was open sourced, I proposed "yielding functions". Nowadays I see that we can do much more than "yielding functions", like the actor model @Chris_Lattner3 proposed, for example.

I believe structured concurrency is extremely powerful and, although I haven't thought it through enough to assess whether it fits preemptive multitasking, I know from experience (in Swift) that cooperative multitasking is valuable enough to be supported by the language itself. Many of these concepts are, as @Chris_Lattner3 said, orthogonal, but we should strive to make them compose well together.

Since @John_McCall expressed his concerns about the scope of this thread, I would like to ask whether I should continue the discussion on orthogonal topics here, or create other threads and link this and other relevant threads instead. The important thing is that the solutions we come up with should compose well. Achieving this without intersections between the proposals would be close to impossible. It's hard to connect all the dots without looking at the big picture.
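For readers who want the structured-concurrency idea in code: the invariant is that no child task outlives its enclosing scope, and cancelling the scope cancels the children. A sketch using the task-group API Swift eventually shipped (stub types, not the libdill/Venice API):

```swift
struct Feed { let name: String }

// Stub download (stand-in for real I/O).
func downloadFeed(_ name: String) async throws -> Feed { Feed(name: name) }

// Child tasks live inside the group's scope; the function cannot return
// until every child has completed, and cancelling the parent task
// cancels all children. Nothing can leak past the closing brace.
func fetchBothFeeds() async throws -> [Feed] {
    try await withThrowingTaskGroup(of: Feed.self) { group in
        group.addTask { try await downloadFeed("news") }
        group.addTask { try await downloadFeed("sports") }
        var feeds: [Feed] = []
        for try await feed in group { feeds.append(feed) }
        return feeds
    }
}
```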


I've been working on a comprehensive design for concurrency in Swift which I hope to have ready to share in the next few weeks. That is probably the right point at which to pick up this conversation.


Related to @Douglas_Gregor's PR? :slight_smile:


Looks amazing, and a really big surprise. Will it be merged into the 5.3 release, or will it be part of 6+?

I believe it's a pretty safe bet that no (additional) major features will be landing in Swift 5.3. Even relatively minor bug fixes don't make the cut at this point.


Please be actors!


Even though the recent developments are very exciting, let’s please not turn this into a speculation thread. JMC already said that the details would be made available soon.


I'd just like to throw in a note of concern for the way closures quietly confer reference semantics on the values they capture. If not for this slippery hole in the language guarantees, value semantics and the law of exclusivity would be enough to support the provable thread-safety of most code. In related threads I have seen lots of discussion of actors and queues and other concurrency mediators, but I haven't seen any attention given to this issue with ordinary code they may execute. I think move-only closures may be an important part of the answer and I wonder about attacking concurrency without an ownership model that supports non-copyable types.
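A small, self-contained example of the capture behavior this post is worried about:

```swift
// A `var` captured by two closures: both see (and mutate) the SAME
// storage, so a value type silently gains reference semantics across
// the captures. If these closures ran on different threads, `counter`
// would be shared mutable state with no exclusivity guarantee.
var counter = 0
let incrementA = { counter += 1 }
let incrementB = { counter += 1 }
incrementA()
incrementB()
print(counter)        // 2: both closures mutated shared storage

// An explicit capture list copies the value instead, restoring value
// semantics at the cost of not observing later changes.
var source = 10
let readSnapshot = { [source] in source }
source = 99
print(readSnapshot()) // 10: the closure kept its own copy
```

The capture-list form is safe to send across threads; the default capture-by-reference form is what quietly punches the hole in value semantics.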

/cc @saeta


You should check out the latest merge commit of swift concurrency lib support at

Before the official concurrency proposal is presented I wanted to quickly sketch what I had on my mind on this topic:

Why not use Combine streams as channels between entities called Reactors? More and more APIs get a publisher method so a concurrency proposal could utilize that.

Reactors would have input and output Ports which could be connected by streams. Once data arrives on an input port a reactor would proceed in its control flow possibly sending data to its output ports until it comes to a point where it waits for the next reaction.

This stepwise processing is supported by specialized functions called activities, which allow waiting for the next step via the await statement. Normal functions and methods can be called from activities, but not the other way around.

Besides the capability to await the next instant, activities also support concurrent control flows and preemption, as in other imperative synchronous languages. The causality follows the Sequentially Constructive model and thus allows memory to be used as a synchronization mechanism between concurrent trails (as opposed to using signals for synchronization, as in Esterel).

In addition to the port-based data interface, reactors might also offer a functional API through service methods. These methods would also have a stream-based signature (at least for the return value) and would internally spawn new reactors on each invocation to do the processing. The results of these service calls would be handled by a special statement available in activities called receive.

So, the general idea is that of GALS - locally synchronous reactors which are asynchronously connected via Combine streams.

Mkay… I did, but what am I supposed to notice about that commit that relates to my post?


That assumes that Apple will make Combine open source, or that the Core Team invests resources in something like OpenCombine, at least for Linux/Windows.

The important point here is that the asynchronous mechanism is not restricted to a single value (as with simple async/await) but allows sending and receiving a stream of values.

Combine would be compatible with this multi-await approach but at the core a coroutine implementation could be present.

Note also how nicely the backpressure flow control of Combine matches the stepwise computation of synchronous programs. When awaiting the next step, receive statements would state a new demand of 1, thereby controlling the rate of data producers.
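As a rough sketch of that demand-of-1 pattern, using the real Combine Subscriber protocol but with a hypothetical step() hook standing in for the activity's await:

```swift
import Combine

// A stepwise consumer: it requests a demand of exactly 1, and asks for
// the next element only when the surrounding "activity" takes its next
// step. The publisher's production rate is thus driven by the consumer.
final class StepSubscriber: Subscriber {
    typealias Input = Float
    typealias Failure = Never
    private var subscription: Subscription?

    func receive(subscription: Subscription) {
        self.subscription = subscription
        subscription.request(.max(1))   // first step: demand exactly one value
    }

    func receive(_ input: Float) -> Subscribers.Demand {
        print("quote: \(input)")
        return .none                    // no further demand; wait for step()
    }

    func receive(completion: Subscribers.Completion<Never>) {
        subscription = nil
    }

    // Hypothetical hook: called by the activity when it awaits the next instant.
    func step() {
        subscription?.request(.max(1))
    }
}
```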

That's all well and good, but how does Combine get introduced into the Swift language without Apple open-sourcing it? And, that decision is not up to the Swift Core Team. The Apple members may advise their management, but, it's up to Apple management to make that decision.

Maybe a small example can help illustrate the idea of using functions that can yield multiple times (like functions returning a Publisher, a generator, or a coroutine) from imperative synchronous activities.

Let's assume we have this function, which returns count stock quotes for a ticker symbol:

func stockPrice(symbol: String, count: Int? = nil) -> AnyPublisher<Float, UnknownSymbol>

BTW. an alternative signature of that function might look something like this:

func stockPrice(symbol: String, count: Int? = nil) async(price: Float) throws

Now, this function is used in an activity which should print out the stock price a given number of times or until a maximum price is reached:

01 act printStockPrice(symbol: String, count: Int, maxPrice: Float) {
02   print("print stock price for: \(symbol)")
03   var price: Float
04   cobegin {
05     receive(price) stockPrice(symbol: symbol, count: count)
06     print("exceeded count")
07   }
08   with {
09     repeat {
10       await true
11       print("quote: \(price)")
12     } while price < maxPrice
13     print("exceeded maxPrice")
14   }
15   print("done")
16 }

First, the activity proceeds as in a normal function, with a statement to print the symbol (line 2).

The call to the asynchronous stockPrice function happens in line 5 using the receive keyword, which binds a variable to the values generated by stockPrice.

As the control flow stays at this statement, handling of a received value has to happen in a concurrent trail:

The block introduced by the cobegin keyword in line 4 starts a first trail (in which receive is called), and the with keyword in line 8 starts a second trail where the printing of the quote values happens.

The second trail consists of a loop (lines 9-12) which first hits the statement await true in line 10. This will stop the trail until the whole activity is triggered to react again, which will happen once a new value from the stockPrice publisher is received and bound to the variable price in line 5. The print statement in line 11 will then print the value, and the loop will either continue or break depending on the condition in line 12.

When either trail finishes, the other will be preempted (weakly). When, for example, the condition in line 12 causes the second trail to finish, the first trail will be preempted, causing the subscription to the publisher made in the receive statement of line 5 to be canceled.

Synchronous activities thus nicely allow the imperative processing of functional reactive streams.

I know you said in a few weeks... but it has been a couple of weeks and just want to make sure I haven't missed any preliminary information being available.
