I don't think that's true. An async function is one that may be run concurrently with other code. It defines a new execution context. There's really no connection between functions that want to define such a context and the functions that wait for them. They are only connected insofar as there's nothing to await if there are no async functions.
I anticipate a different model.
In my view, async functions don't define an execution context, they require one. And you can provide one by using await (inside another async function) or by using a language/stdlib-provided executor such as @ddddxxx's RunLoop.current.sync { ... }.
Or
do {
    await ...
}
catch { (provideContext: (Context) -> Void) in
    provideContext(RunLoop.current.sync)
}
I would really want Swift to have an at least somewhat standardized design when it comes to 'effects' (i.e. things that can be expressed by monads), even if the effect system isn't extensible by the community (which I advocate, too).
(Somewhat off-topic.) I don't want to side-track the discussion too much, but since you mention monads, we should understand that monads alone may be too strict an abstraction for concurrency, because under an async monad, subsequent await calls are by definition executed serially. Indeed, a better model may lie somewhere between monads and applicative functors.
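As a concrete illustration of the difference, here is a sketch using the async let syntax that Swift's structured concurrency later shipped (SE-0317); fetchA and fetchB are placeholder functions:

```swift
func fetchA() async -> Int { 1 }
func fetchB() async -> Int { 2 }

// Monadic sequencing: fetchB cannot even start until fetchA has finished,
// because each await serializes the execution path.
func serialSum() async -> Int {
    let a = await fetchA()
    let b = await fetchB()
    return a + b
}

// Applicative-style composition: the two independent child tasks may run
// concurrently; we only join where the results are combined.
func concurrentSum() async -> Int {
    async let a = fetchA()
    async let b = fetchB()
    return await a + b
}
```

Both functions compute the same value; they differ only in how much concurrency the composition permits.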
Interesting design.
Another idea I had was that one could introduce primitives for async-await that are then 'thrown'. Await would take the place of the try operator, while actual requests for execution contexts, as in 'I actually need to fetch some data now' or 'I have some tasks that can run independently, please give me some execution environment', would be thrown. A type-aware catch block can then handle this. This design is loosely inspired by the Eff programming language (Eff Programming Language) and some of the work the Scala community is doing (Jonathan Brachthäuser - Effekt: Extensible Algebraic Effects in Scala | Scala Symposium 2017 - YouTube).
Possible code:
func foo() async -> (A, B) {
    return throw Concurrent(fetchA, fetchB) // fetchA and fetchB being async functions
}
Edit: For my monadic design idea (or improvements thereof), see also An extensible Syntax for effects like e.g. async-await
This particular example might be feasible, but I am not sure whether that is a coincidence.
If you assume that func a() -> () async is equivalent to func a() -> Future<Void>, and that any scheduling is done by the function itself, then you are effectively saying ‘run in the background, but I won’t ever bother to check your result’.
However, say you changed it to let d = await D(), then return d as a String. Now it is effectively func a() -> Future<String>. You could ignore the result, or use it via futureResult.map.
Imagine instead you blocked the current execution context until the value was ready, rather than using such a call. Because of the way event/dispatch queues work, you are very likely to encounter deadlocks based on assumptions the call made (such as, the main event queue is usable as part of my processing).
I have never used async/await before in any language, and I am having a difficult time understanding what they are meant to do.
The original post in this thread helps a bit, as I am starting to gather the following:
async means “This function takes a completion handler which will be called exactly once at the end.”
await means “All code after this in the scope is actually the completion handler for the async function call on this line.”
However, that is still not enough for me to understand the control flow:
async func foo(_ x: Bool) {
    print(1)
    if x {
        print(2)
        await bar()
        print(3)
    }
    print(4)
}
I expect that foo(false) should print 1 then 4 and return immediately.
But what about foo(true)? Assuming bar() takes some non-trivial amount of time, does it print 1, 2, 4, return to whatever context foo was called from, continue execution there, and then later, when bar() completes, finally print 3?
If so, then it seems we do indeed have two separate codepaths, and concurrency is achieved. But, assuming that bar is running on some thread (perhaps it performs some complex calculation that should take place in the background), then what thread is it running on?
And how can we achieve that sort of concurrency without the conditional if statement?
Moreover, suppose we want to begin several operations that execute in parallel. None of them are callbacks for the others, they are simply tasks to be performed in the background. Is this a use-case which async/await supports?
No. The code in this method executes sequentially with respect to the other code in the same method. It will print 1, 2, 3, 4, in that order.
The semantics of await is that it behaves 'as if' it were blocking, but in reality it isn't. You raise an excellent question regarding control-flow statements like if. These need some special treatment.
Your code could be converted in a first step to
func foo(_ x: Bool, continuation: @escaping () -> Void) {
    print(1)
    if x {
        print(2)
        bar[ // these brackets delimit bar's continuation
        print(3)
        } // ouch
        print(4)
        continuation()
        ] // bar's continuation ends after the if statement
}
The compiler would then have to look up how the scope ending with ouch has been created. In this case, you have an if clause without any else-if or else. The following conversion should do the job:
func foo(_ x: Bool, continuation: @escaping () -> Void) {
    print(1)
    if x {
        print(2)
        bar {
            print(3)
            print(4)
            continuation()
        }
    } else {
        print(4)
        continuation()
    }
}
More complicated cases are while and for clauses, but these should be possible using some recursive scheme as long as it can be ensured ahead of time that the recursion is tail-call optimized.
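A sketch of such a recursive scheme for a while loop, with hypothetical names: the lowered loop body takes its own continuation, and the helper re-invokes itself after each iteration completes.

```swift
// Lowers `while cond { step() /* possibly async */ }; continuation()` into
// callback style. The self-call is in tail position, so the scheme only works
// if that call is guaranteed not to grow the stack.
func loop(_ cond: @escaping () -> Bool,
          _ step: @escaping (_ next: @escaping () -> Void) -> Void,
          _ continuation: @escaping () -> Void) {
    if cond() {
        step {
            loop(cond, step, continuation) // recurse once the step has called back
        }
    } else {
        continuation()
    }
}

// Usage: count to 3 with a "fake async" step that calls back immediately.
var i = 0
loop({ i < 3 },
     { next in i += 1; next() },
     { print("loop finished with i =", i) })
```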
If it is truly sequential, then I see no need of an “await” keyword.
Every method call is already sequential. Control flow always waits for each line to complete before moving on to the next.
If we are going to introduce a keyword, then it should be to specify the thing that stands out and is different. Namely, there should be a keyword for “don’t wait”. In other words, a keyword for actual asynchronous execution in the background.
I see no benefit whatsoever to explicitly marking some calls as “wait for this to complete then do the rest of the stuff below it”, when in fact all code works that way.
If the execution order is linear and control flow moves down one line at a time, then that is the standard, normal, existing, basic operation model. That’s how things should work with no annotation, so we should not introduce an annotation that means “this works the normal way, just as it looks”.
If we want to eliminate the pyramid of doom, then just make it so we can eliminate it. As a programmer, I don’t care what happens internally, I don’t care how the compiler makes it happen, I already today don’t truly grok what the compiler does for normal code, I just know the result is indistinguishable from each statement being executed sequentially in source order.
If the compiler secretly transforms
B
C
D
into
B(completion: {
    C(completion: {
        D(completion: {
        })
    })
})
there’s no reason for me as a programmer to know about it or think about it or care about it. It should just work. I write B C D, and my program works as if those statements are performed one after another, just like I wrote them.
There’s no need or benefit to making me add extra cruft to tell the compiler to have them happen one after another, that’s already implied by the fact that they are sequential statements.
By that logic, we could eliminate throw/try too. The code executes from top to bottom, so what's the point?
In the throw/try scenario, the benefit is that you can throw errors which you then handle later in a totally different context. In the async/await scenario, you would likewise have primitive async operations that can only be called in an async context and that are later handled by providing a proper execution context (like background dispatch queues or concurrent queues, if the async function requested that). The point is that you could also decide not to do that if there are resource constraints, or if you want to test some logic by making the code run on a single thread without changing it.
I, for one, am not even particularly interested in async/await. I am perfectly happy with GCD and either the delegate pattern or ObservableObject. I comment here mostly because I want to promote a design that is consistent with throw/try, and also to see whether I am missing something, because in the thread where I specifically brought up syntax it was implied that I do miss certain things.
However, some people find that feature useful so we should keep an eye on that.
That does not follow. try is used to indicate that an error may be thrown, which results in a change in execution flow. The fact that it's no longer sequential is precisely why Swift requires explicit handling. When you see that keyword, you know that the statement following may not be reached.
This is exactly what QuinceyMorris wants to avoid.
Well, if all throwing functions succeed, the execution flow is sequential. And in the async case, one could imagine a 'please, don't do that' handler rather than some actual execution context (for whatever reason). At the point where you inject an execution context, the control flow is effectively non-sequential - you have to go through some handler - but it can be continued.
You could, by the way, also change how errors work so that throwing doesn't mean aborting everything. Some errors can be recovered from. One could (and in other languages something like this has been implemented) make statements like
let defaultValue = throw DivisorIsZero(numerator: ...)
and the compiler would figure out that a catch block that handles DivisorIsZero plus a continuation means that the code below should actually be executed.
To @QuinceyMorris point: As I argued before, one could ease the restriction in case of void async functions if there are default meanings for those primitive operations.
Edit:
A possible semantics for above throwing statement could be:
struct RecoverableError<X, Y, Z>: Error {
    let problem: X
    let recover: (Y) -> Z
}
You can then write a catch clause looking like that:
catch { (error: RecoverableError<MyX, MyY, MyZ>) in
    return error.recover(mySolution)
}
If your recovery does the same as the happy code path - just with some default value - writing the actual code is kind of a pain, because your completion has to duplicate the code on the happy path. But good compilers can do that if they are told to.
Yes, but the entire point of try is to indicate to you, at the call site, that it may not be.
But that's not how await works. The entire point is to make the execution of the call site synchronous with respect to the following code. Nevin's point is that this expectation already exists for all methods today, so there's no information added for the reader.
There's already syntax for this: do { } catch { }.
At this point I am confused as to what you are proposing with respect to error handling, and why it should be connected to concurrency handling.
Read my edit.
Also, the connection between error handling and async await is relatively simple: both can be implemented via monads. Monads are all about pretending that there is sequential execution while in reality what exactly happens is overridden. Their nickname is 'programmable semicolon' for a reason. They hide some boilerplate that happens between the lines.
I believe QuinceyMorris started this thread, in part, to get away from the discussion of monads. That discussion belongs in its original thread. This thread is about the model for async/await based concurrency. Not a general mechanism for implementing this and other features.
Sure. That's why I only bring it up as a side remark. Nevertheless, it is important to note that these two concepts are inherently linked and therefore, we shouldn't make a mess by assigning async-await some strange meaning that it shouldn't have.
Sure: it would be handy to be able to fire some asynchronous task on a background thread just by calling an async method. For me, it would be important to drop the await keyword then, or to indicate in some way: 'please handle async calls that depend on each other on a single background queue, and for concurrent tasks, use DispatchQueue.concurrentPerform' (an example of a default handler). It would also be important for me to be able to override this behaviour.
In general, sequential execution simply is to be expected. How else would you parse
let a = await getA()
let b = await getB(a)
?
All that await does is indicate that something special - a change of execution context to a background thread - may or may not take place here. It is more similar to try than to throw (which would be hidden in primitive operations, or publicly available). The important bit is that it is expected ahead of time that this workload might be done on some background serial queue (or a concurrent queue, if we use primitives indicating concurrency), but the precise execution context is specified later, maybe even implicitly.
Also, it may be worth reviewing the Swift Concurrency Manifesto · GitHub. Async-await is not the end goal; it is a starting point for more complex runtime features that implicitly run in the background.
Edit:
What async-await is really about is 'materializing' the arguments of completion handlers. Completion handlers can't return anything. You cannot pass any information from a completion handler back to the call site if you have to expect that the handler is executed at some arbitrary time (the only way to pass information back would be a blocking implementation, and that's precisely not what we want here). Async functions can return stuff - but you only get access to the result inside some other completion handler (implicitly given by await). Async-await abstracts away completion handlers, but in order to chain async functions with appropriate types, or to pass information back and forth between them in a recursive manner, execution has to have sequential semantics.
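A small sketch of that 'materialization', using the withCheckedContinuation bridging API that Swift eventually shipped; loadGreeting is a hypothetical example function:

```swift
import Dispatch

// Callback style: the String never escapes the handler, and the function
// itself can only return Void.
func loadGreeting(completion: @escaping (String) -> Void) {
    DispatchQueue.global().async { completion("hello") }
}

// async style: the handler's argument is materialized into an ordinary
// return value, usable in straight-line code.
func loadGreeting() async -> String {
    await withCheckedContinuation { continuation in
        loadGreeting { continuation.resume(returning: $0) }
    }
}
```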
The answer to this wasn't in my post, because it's the main subject of Concrete proposal for async semantics in Swift · GitHub. A path of execution blocks itself without blocking the thread it's running on by using coroutines. You can read about that in the linked proposal.
On your point about await!, it isn't necessary, because (unlike try) there isn't a second behavior needing to be indicated by the !. The behavior is always: "Sequentialize this execution path."
It's fine to see analogies between try and await if you wish, but it makes no sense to reverse the order of argument. If the analogy suggests something that doesn't fit for async, the analogy just doesn't help.

And how can we achieve that sort of concurrency without the conditional if statement?
@Avi already answered, but I want to reinforce the point that we are not, repeat not, trying to achieve concurrency here. We are trying to avoid it.
I know this seems counter-intuitive, but I swear it's the exact truth.

If it is truly sequential, then I see no need of an “await” keyword.
Yes, hold on to that thought. The await operator doesn't truly "mean" or indicate that the code is sequential. As you say, that's obvious from the context.
What the await operator does is [in effect] convert a function like B(completionHandler: () -> Int) into a function B() -> Int. IOW, it converts a function that returns now and computes its result later into a function that returns after it computes its result [in effect, not the actual implementation].
However, we will still have real use cases where we want to call the original B now and get its result later — specifically, when we actually want to get concurrency across B, C, D, E, F, and so on. (These cases are not the topic of this thread.)
That is, we have 2 ways of calling B, so one of them needs a syntactic marker. In this proposal, await marks one of the ways.

I believe QuinceyMorris started this thread, in part, to get away from the discussion of monads. That discussion belongs in its original thread.
Exactly so. The monad thing is very interesting, but it isn't the solution to this problem.
About the parallelism between throws/try and async/await: if you look at the problem from a distance, it seems to be the same shape/pattern applied to two different contexts of "effectful computation", but it's really not.
First, the semantics are completely different, of course. In the case of throws/try we have a computation that might have 2 different outcomes, and with something like try!, with the exclamation mark meaning "unsafe" (the general rule in Swift, apart from the dreaded negation prefix), we force the code execution to only consider the successful option. Swift enforces the usage of try on throwing functions because a function of shape (A) throws -> B does not return B: it returns either B or Error, and its return type is perfectly modeled by the Result<B, Error> type.
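A minimal sketch of that equivalence (parse and ParseError are hypothetical examples):

```swift
enum ParseError: Error { case notANumber }

// A function of shape (String) throws -> Int ...
func parse(_ s: String) throws -> Int {
    guard let n = Int(s) else { throw ParseError.notANumber }
    return n
}

// ... carries exactly the information of (String) -> Result<Int, Error>:
func parseResult(_ s: String) -> Result<Int, Error> {
    Result { try parse(s) }
}
```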
But async simply tells us that a function will return before having completed its execution: this doesn't force any particular signaling at the call site, because the expected outcome is the same. A function of type (A) async -> B always returns B: if I await, then I'm telling the compiler that I want to wait for the function to actually complete its execution before moving on with that execution path; if I don't, then the entire context becomes async (for example, the function where the code is called), because it's going to be like the "outer" function is returning before having completed its execution. In code:
func asynchronous_1() async -> A { ... }
func asynchronous_2(_ a: A) async -> B { ... }
func asynchronous_3(_ b: B) async -> C { ... }

func synchronous() -> C {
    let a = await asynchronous_1()
    let b = await asynchronous_2(a)
    let c = await asynchronous_3(b)
    return c
}

func asynchronous() async -> C {
    let a = await asynchronous_1()
    /// when called, this function will return here, and will complete its execution asynchronously
    let b = asynchronous_2(a)
    let c = await asynchronous_3(b)
    return c
}

func frobulate() {
    let c1 = synchronous()
    let c2 = await asynchronous() /// if we don't await, frobulate must become async itself
    print(c1, c2)
}
If we translate the code above in terms of callbacks, and use DispatchSemaphore to wait (not a real implementation of async/await), it ends up being:
func asynchronous_1(_ onA: @escaping (A) -> Void) { ... }
func asynchronous_2(_ a: A, _ onB: @escaping (B) -> Void) { ... }
func asynchronous_3(_ b: B, _ onC: @escaping (C) -> Void) { ... }

func synchronous() -> C {
    var asyncA: A?
    let sa = DispatchSemaphore(value: 0)
    asynchronous_1 {
        asyncA = $0
        sa.signal()
    }
    sa.wait()
    let a = asyncA!

    var asyncB: B?
    let sb = DispatchSemaphore(value: 0)
    asynchronous_2(a) {
        asyncB = $0
        sb.signal()
    }
    sb.wait()
    let b = asyncB!

    var asyncC: C?
    let sc = DispatchSemaphore(value: 0)
    asynchronous_3(b) {
        asyncC = $0
        sc.signal()
    }
    sc.wait()
    let c = asyncC!

    return c
}
func asynchronous(_ onC: @escaping (C) -> Void) {
    var asyncA: A?
    let sa = DispatchSemaphore(value: 0)
    asynchronous_1 {
        asyncA = $0
        sa.signal()
    }
    sa.wait()
    let a = asyncA!

    /// when called, this function will return here, and will complete the execution asynchronously
    asynchronous_2(a) { b in
        var asyncC: C?
        let sc = DispatchSemaphore(value: 0)
        asynchronous_3(b) {
            asyncC = $0
            sc.signal()
        }
        sc.wait()
        let c = asyncC!
        onC(c)
    }
}

func frobulate() {
    let c1 = synchronous()

    var asyncC2: C?
    let sc = DispatchSemaphore(value: 0)
    asynchronous {
        asyncC2 = $0
        sc.signal()
    }
    sc.wait() /// if we don't wait, frobulate must become async itself
    let c2 = asyncC2!

    print(c1, c2)
}
Using DispatchSemaphore is essentially equivalent to using await.
But there's also another, in my opinion more important, difference, related to typing and in particular to subtyping, that shows why throws/try is only a partial starting point for evaluating the syntax and functionality of async/await.
A function type (A) throws -> B is 100% a supertype of a function type (A) -> B.
If you ask me for (A) throws -> B, I can give you (A) -> B, and subtyping and substitution rules would be 100% respected.
This is related to the fact that a type that can be either A or Error is a supertype of just A (in purely theoretical terms, not in the actual Swift implementation). When I try, I'm essentially writing as and propagating the error upwards if the cast fails: I need to write try because I would need to write as to "extract" the type that I'm interested in. Hence, try is justified from the point of view of types.
But a function of type (A) async -> B is not in any way a supertype of a function of type (A) -> B, even if they're technically "compatible". The two types are actually the same type, with parameters moved around. Why?
A function of type (A) async -> B is essentially the same as (A, (B) -> Void) -> Void, because instead of immediately returning B, the function promises to provide it in the future (hence the "callback" representation).
Now consider the function (A) -> B. I can define an isomorphic representation of this function by moving the parameters around: what matters is that inputs (domain) and outputs (codomain) are preserved. It turns out that I can move B into the inputs, but I need to convert it into a function where B is the input (it would be OT to talk about this any further, please bear with me and ask for clarification in case), that is:
(A) -> B
/// move B on the left, and leave Void on the right
(A, FunctionWithBAsInput) -> Void
/// add the simplest function with B as input
(A, (B) -> Void) -> Void
This is the same as the async representation with a callback. This means that async is not about subtyping: it's actually about changing types by moving function parameters around, but without the need of actually doing so in code.
Thus, what we do with await is basically restoring the original type: the point is, we might actually not want to do so! That's why await is fundamentally different from try: its usage is strategic; we're not casting or anything (something that we would be forced to do in other cases).