Protocol compositions are an example of where the language already requires an order purely for style reasons. Protocols can go in any order, but if the composition includes a class, the class must go first: Class & Protocol is allowed, but Protocol & Class is not.
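As a minimal sketch of that ordering rule (the type names here are made up for illustration):

```swift
protocol Drawable {}
class View {}

// OK: the class appears first in the composition.
func render(_ item: View & Drawable) {}

// Rejected by the compiler as described above: class not first.
// func render(_ item: Drawable & View) {}
```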
Good point. A counterpoint is that you can only inherit from a single class but can conform to multiple protocols, so it might be unordered but there are two clear groups of things to distinguish. You don't have to specify all the protocols in any particular order there, which is perhaps more comparable.
I think it also just makes sense. try await could be read to imply that the await operation itself can throw, which is not the case, so from that perspective await try is just more accurate.
In addition, modifiers that start with @ have to go before non-@ modifiers.
It seems like cancellation is talked about in the Structured Concurrency proposal. But I'm having a hard time putting the concepts in that proposal and the concepts in this proposal together. I'm wondering how a consumer of an asynchronous method would cancel a call to func processImageData2() async throws -> Image. Is it by wrapping it in a Task and then cancelling the task?
Yes, exactly. Note that async functions are always running as part of a task.
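A sketch of what that could look like, using the Task API from the Structured Concurrency proposal (the exact spelling of the handle type may differ from the final proposal text; processImageData2 is the function from the question above):

```swift
// Wrap the async call in a Task to get a handle you can cancel.
let handle = Task {
    try await processImageData2()
}

// Later, from anywhere (including synchronous code):
handle.cancel()

// Note that cancellation is cooperative: processImageData2 only stops
// early if it, or something it awaits, checks for cancellation.
```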
Relatedly to tourultimate's question, I think it would be helpful to have some of the simpler examples also expressed in a way where the Task and "nursery" entities are visible. It would help bridge the gap between this and the structured-concurrency proposals.
Thank you for such a well written and thorough guide!
I agree with everything in it, I just have some questions concerning the performance implications of async functions outside of actors. Are such functions executor-agnostic, that is, can they run on the executor of an async calling function without needing a context switch? Would there be a suspension point between two actor-less async function calls, an actor to actor-less call, vice versa, etc.?
asynchronous functions are able to completely give up that stack and use their own, separate storage. This additional power given to asynchronous functions has some implementation cost
What kinds of costs are we talking about and in what situations? E.g. register copies vs early-exit calls into the Swift runtime vs heap allocations? Where is this separate storage and when is it allocated/initialized/deinitialized/freed?
“ (In practice, asynchronous functions are compiled to not depend on the thread during an asynchronous call, so that only the innermost function needs to do any extra work.)”
What is this saying? Could you give an example?
“If the callee’s executor is different from the caller’s executor, a suspension occurs and the partial task to resume execution in the callee is enqueued on the callee’s executor.”
Is this determination statically known, or do we have to consult the runtime?
Asynchronous function types are distinct from their synchronous counterparts
Another benefit of allowing overloading on async is that it works around the situation where a type wants to implement two protocols with similarly named requirements that differ in asynchronicity.
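For example, a hypothetical type conforming to two protocols whose requirements differ only in asynchronicity (all names here are made up):

```swift
protocol SyncLoader  { func load() -> String }
protocol AsyncLoader { func load() async -> String }

// Overloading on async lets one type satisfy both requirements:
struct Loader: SyncLoader, AsyncLoader {
    func load() -> String { "sync result" }
    func load() async -> String { "async result" }
}
```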
A suspension point must not occur within a defer block.
Does that mean that an await cannot occur in a defer block, even if the function is async? Or just an await that explicitly hops executors?
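A minimal illustration of the quoted rule as I read it (doWork and cleanup are hypothetical):

```swift
func process() async {
    defer {
        // await cleanup()   // presumably ill-formed: a suspension
        //                   // point inside a defer block
    }
    await doWork()           // suspension points outside defer are fine
}
```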
A situation I'm concerned about is having an asynchronous source of data from which I fill up a buffer, and then synchronously vend many Ts (until I need to refill the buffer).
struct TStream {
    var (buffer, source): (Array<UInt8>, DataSource)
    mutating func next() async -> T? {
        // Only happens 1/1000 times
        if buffer.count < threshold { await source.refill(&buffer) }
        ... // Synchronous code
    }
}
struct ArduousTStream {
    var (buffer, source): (Array<UInt8>, DataSource)
    /// Make sure to call this if `next` tells you to refill,
    /// otherwise you'll get a premature `nil` result the
    /// next time you call `next`.
    mutating func refill() async { await source.refill(&buffer) }
    func next() -> (T, callRefill: Bool)? {
        ... // Synchronous code
    }
}
TStream is a much nicer API, but what is the overhead of next() being async over burdening the user with the buffering concern a la ArduousTStream? If the caller of next() is async anyways (whether actor or actor-less), is there overhead for TStream vs ArduousTStream?
I currently have no opinion on this debate, but I'm very interested in hearing rationale for why async/await shouldn't also imply throws/try.
I think the biggest problem is probably that people reading await aren't going to think about it being an unwind point.
Yeah, this. Being able to say "ok there's a catch here, ALL I need to do is look for try to see what might cause it to run" is very powerful for readers.
Do you mean "what might cause it to throw"? I think people would learn that they also need to look for await pretty quickly. Requiring people to learn that seems like it might be worth it. Requiring await try everywhere seems like something we might regret someday. That said, this is a change that could be phased in later without breaking code, so it could be considered later.
People will learn and adapt to almost any internally consistent set of rules, so ultimately I agree it doesn't matter that much. But the 1:1 correspondence is nice and simple.
This is an interesting question that I'm not sure we have a clear answer to right now, because there's an ambiguity between a function that's "generic" about its executor (e.g. because it takes an argument that can only be safely called or otherwise used on that executor) and a function that's apathetic about its executor (e.g. because it's in an I/O library and is just kicking off work and waiting for a response).
Two things. The first is that the function is split, so there's extra low-level overhead on function suspend/resume; also, spilling values into an async frame is likely to have somewhat worse locality than spilling to the C stack. The second is that the function's frame needs to be allocated off the C stack, which will happen using the task-local allocator.
Our current implementation design for the task-local allocator (in theory, not actually implemented) is that it's a stack-discipline small-slab allocator that generally won't return memory to the general allocator until the task completes. That allocator will be fully torn down when the task completes, so a task handle that's now just a satisfied future does not pin any memory associated with running the task.
You could imagine an implementation strategy that copies the async task's frames on and off the C stack during suspends. That is not the strategy we use; async frames are allocated off the C stack to begin with, and anything that needs to survive an async suspension point is written into that frame.
It is expected to be an efficient inline check, or at least to have an efficient inline component.
Correct.
async should not imply throws because that would effectively mean async functions would have implicit unwinding across arbitrarily nested frames, essentially turning Swift error handling into a C++-like exception system. Having await imply try when async does not imply throws would be… odd.
My hope was to address the "apathetic about its executor" case by having such functions be defined on a "MyIOLibActor", which therefore has a place to define executor = MyIOExecutor.
This way the model becomes:
- non-actor async functions are "generic about executor",
- actor async functions always run on the actor's executor.
And there's no other way to achieve the second style. With the existence of global actors, even free functions can participate in this if they need to.
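The model above could be sketched roughly like this (the global-actor syntax follows the Global Actors pitch and may not match the final design; MyIOLibActor is the name from this post):

```swift
// A global actor gives free functions a fixed executor.
@globalActor
actor MyIOLibActor {
    static let shared = MyIOLibActor()
    // A custom executor (e.g. MyIOExecutor) would be plugged in here.
}

// Always runs on MyIOLibActor's executor: the second style.
@MyIOLibActor func submitIO() async { /* ... */ }

// A plain async function stays "generic about executor" and
// resumes on its caller's executor.
func process() async { /* ... */ }
```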
Would this model make sense?
I'm missing what about MyIOLibActor makes it apathetic about its executor.

Ok, it seems I misunderstood the meaning of apathetic. So an actor is definitely not that.
To sum up the semantics for clarity:
- Actor async func: always adheres to the general actor rules and as such will execute on that actor's executor.
- Non-actor async func (these are not really formalized yet):
  - "executor generic": the called function, if it needs to suspend internally, will resume on the caller's executor; say, because a parameter must only be accessed from that executor.
  - "apathetic" (showing no interest in the calling task's executor): synchronized by other means; does not care where it resumes if it awaited internally; could resume on some global pool or anywhere.
So the difference is between switching over to an actor handling all IO submissions, or not switching over anywhere and using some manual implementation (maybe lock-protected, maybe otherwise) to schedule the async work. If an apathetic function needs to resume, it specifically "does not care where it resumes", which could offer some optimization space, i.e. being called directly from wherever the task it kicked off completed, etc.
Note: those are not fully fleshed out (!)
Did I get that right now?
How strict is the suspension point as a barrier? If, say, I have code like this:
await task1()
let x = 1 + 1
can the compiler move the instantiation of x (which is executor-apathetic) across the await?
let x = 1 + 1
await task1()
Since we already optimize out empty partial tasks, this seems about right, but then comes the question of what to do if the moved code consumes a lot of time (either by being blocking or just CPU-intensive). Maybe we need a notion of something that is executor-independent but doesn't cross the suspension point.
(since this seems to apply to any executor model, not just actor, please feel free to move the comment if that's not the case)