Why stackless (async/await) for Swift?

Hello, blog post author here! (Kind of you to call my blog post an essay, by the way.)

Regardless of your feelings about Dave's intent, I want to stress that, as the author of that post, I believe Dave is entirely aware of the argument I was trying to make. More importantly, I think a careful read of Dave's argument regarding async/await makes it clear that he and I are in agreement.

Specifically, Dave's original post about async/await ("On the proliferation of try (and, soon, await)") explicitly calls out:

We can apply the same line of inquiry to async. First, why should we care that a call is async? The motivation can't be about warning the programmer that the call will take a long time to complete or that the call would otherwise block in synchronous code, because we'll write async even on calls from an actor into itself. No, AFAICT the fundamental correctness issue that argues for marking is almost the same as it is for error-handling: an async call can allow shared data to be observed partway through a mutation, while invariants are temporarily broken.

So where is that actually an issue? Notice that it only applies to shared data: except for globals (which everybody knows are bad for concurrency, right?), types with value semantics are immune. Furthermore, the whole purpose of actors appears to be to regulate sharing of reference types. Especially as long as actors are re-entrant by default, I can see an argument for awaiting calls from within actors, but otherwise, I wonder what await is actually buying the programmer. It might be the case that something like async_anywhere is called for, but being much less experienced with async/await than with error handling I'd like to hear what others have to say about the benefits of await.
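To make that hazard concrete, here is a minimal Swift sketch (my example, not Dave's; the Ledger/AuditLog names are hypothetical) of a re-entrant actor whose invariant can be observed partway through a mutation:

```swift
// Invariant between operations: credits == debits.
actor Ledger {
    var credits = 0
    var debits = 0

    func transfer(_ amount: Int, loggingTo log: AuditLog) async {
        credits += amount
        // Suspension point. Because actors are re-entrant, another
        // message to this Ledger can run while we're suspended here
        // and observe credits != debits: shared data seen partway
        // through a mutation, exactly the hazard `await` marks.
        await log.record(amount)
        debits += amount
    }

    func isBalanced() -> Bool { credits == debits }
}

actor AuditLog {
    private var entries: [Int] = []
    func record(_ amount: Int) { entries.append(amount) }
}
```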

Dave's argument here is directly in line with the argument I make in my post. Specifically, I quote Glyph's Unyielding:

When you’re looking at a routine that manipulates some state, in a single-tasking, nonconcurrent system, you only have to imagine the state at the beginning of the routine, and the state at the end of the routine. To imagine the different states, you need only to read the routine and imagine executing its instructions in order from top to bottom. This means that the number of instructions you must consider is n, where n is the number of instructions in the routine. By contrast, in a system with arbitrary concurrent execution – one where multiple threads might concurrently execute this routine with the same state – you have to read the method in every possible order, making the complexity nⁿ.
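To put rough numbers on that bound (my arithmetic, illustrating Glyph's formula rather than quoting him): for a routine of n = 10 instructions, sequential reasoning means considering 10 instructions in a single order, while arbitrary interleaving gives nⁿ = 10¹⁰ = 10,000,000,000 orders to consider.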

Notice that what Glyph is talking about (and what my post targets) is pre-emptive concurrency over shared mutable state. This is exactly what Dave is talking about: he points out that value semantics prevent shared mutable state, and that actors exist to regulate what sharing remains. In a world where all state is either value-semantic or confined to an actor, it seems entirely reasonable to me to ask whether the await and async keywords buy you very much at all.
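To ground that distinction, here is a small sketch (my illustration, not Dave's or Glyph's, written for Swift 5 language mode before strict concurrency checking) of why value types are immune while a shared reference type is exactly the state that needs regulating:

```swift
import Dispatch

// Value type: every holder gets an independent copy.
struct Counter {
    var value = 0
    mutating func bump() { value += 1 }
}

// Reference type: every holder shares one instance, so unsynchronized
// concurrent mutation races -- the case Glyph's nⁿ reasoning applies to.
final class SharedCounter {
    var value = 0
    func bump() { value += 1 }
}

let a = Counter()
var b = a                  // an independent copy, not a second reference
b.bump()
print(a.value, b.value)    // 0 1: no sharing, nothing to interleave

let shared = SharedCounter()
DispatchQueue.concurrentPerform(iterations: 1_000) { _ in
    shared.bump()          // racy read-modify-write on shared state
}
print(shared.value)        // often less than 1000: updates lost mid-mutation
```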

This is not what I was talking about in my original post. I was discussing languages that pervasively use shared mutable state: Python, Go, Node. I think Dave and I are much more in agreement than in divergence, and I like to think he would largely agree with me that a) my post is well-reasoned for languages with pervasive shared mutable state, and b) in languages that disallow shared mutable state it is reasonable to ask whether async/await is needed at all. As a fun example of the latter, consider Rust's "fearless concurrency" pattern, which I think is an excellent example of how to rein in the scariness of threading without adding async/await annotations to the code.
