Jobs and Tasks

Hi everyone,

I've been wondering a bit about jobs within the new concurrency system. Let's imagine two functions:

func foo() async {
  // do some sync work
  let x = await getCachedValue()
  // do some more work
}

func getCachedValue() async -> Int {
   if let x = cache { return x }
   return await compute() // ...
}

Let's say foo starts executing. As per the structured concurrency proposal, it executes in a job. Now foo arrives at the suspension point and wants to call getCachedValue. For simplicity, let's assume getCachedValue is not an actor method. What happens during the call? I know foo will be suspended, but will the current job also be suspended? I imagine one of these two scenarios would happen:

(A) A new job is created for the call to getCachedValue. If a new job is created, does that mean that other jobs can potentially execute in between?
(B) The call itself is potentially still part of the same job, and the current job is only suspended when getCachedValue needs to suspend (e.g. because of compute).

In other words, once I have a cached value, is the getCachedValue() call essentially the same as a normal function call?


It's not clear from the proposal, but since other jobs in the same thread pool get the opportunity to execute each time you reach a suspension point, and since the first suspension point is reached at let x = await getCachedValue(), I'd say:

(A) Yes
(B) I think if let x = cache { return x } will run in a different job regardless of whether you even use an await in the method. So it should not be the same as calling a normal function (if by that you mean the way scheduling happens)

This is solely based on my understanding of the proposal and playing around with breakpoints inspecting the stack. Hopefully someone with a better understanding of the implementation can help with a definitive answer.

(A) Conceptually, yes, although if getCachedValue() is not on an actor then in practice the implementation will immediately execute the synchronous first part of getCachedValue() (up to the first await, where we ask the same question again).
(B) I think the "job vs. task" distinction is important here. The whole call is part of the same task, which sequentially executes but may suspend along the way when needed (e.g., if one of the awaits does an actor hop). Jobs are effectively the unit of synchronous work that occurs between awaits: when a task suspends, it means the job for the next work gets put on an actor or queue somewhere to be executed later.

When there is no actor hop, yes, and this can be Really Important for performance, because you can have synchronous fast-paths like this in async functions that don't need to involve the concurrency system at all and can (e.g.) be inlined and optimized away by the compiler.
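To make that fast path concrete, here is a hypothetical sketch (ValueCache, getCachedValue, and compute are invented names, and thread-safety is ignored for brevity): when the value is already cached, getCachedValue() returns without ever suspending, so after inlining the caller's await can reduce to an ordinary branch and return.

```swift
// Hypothetical sketch of a synchronous fast path in an async function.
// Not thread-safe; for illustration only.
final class ValueCache {
    private var cached: Int?

    func getCachedValue() async -> Int {
        // Fast path: no suspension occurs here, so the caller's `await`
        // can be optimized down to a plain synchronous call.
        if let x = cached { return x }
        // Slow path: this await is where an actual suspension can happen.
        let x = await compute()
        cached = x
        return x
    }

    private func compute() async -> Int {
        42 // stand-in for expensive asynchronous work
    }
}
```

On the cache-hit path, no job ever needs to be enqueued, which is why such functions can be inlined and optimized like regular synchronous code.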



If anyone is interested in seeing an extreme example of this, search for AsyncUnicodeScalarSequence here, and take a look at its next() implementation. After inlining this ends up compiling down to just a handful of arithmetic and branch instructions, with no calls or async overhead. _AsyncBytesBuffer does the same trick, but is a little more tangly to read.


Thanks for the clear explanation @Douglas_Gregor, that is really helpful.

Sorry to be nitpicky here, but that seems to contradict A: would you say the execution of foo, including the non-async part of getCachedValue, is a single job? Or would you say it's two jobs that happen synchronously, directly one after the other? Because the task would only suspend once getCachedValue does its async thing. I'm asking because I want to be precise in my terminology :)

A follow-up question: how is it decided when to enqueue a separate job? Is this a compile-time decision, a runtime decision or a hybrid? Is this something the caller does, or the callee? I can imagine that you could (and would want to) inline some methods at compile time, and that you can also detect e.g. a call to an actor method. But I imagine sometimes it's a runtime thing. For example, what happens when I do something like let f = cond ? getCachedValue : someActor.method; await f()?


It's two jobs that in practice happen synchronously, directly one after the other. I won't state that as an absolute because, for example, some future concurrency runtime could be asked by the scheduler to abandon its thread (e.g., because the thread is needed for higher-priority work), which it could do between the two jobs.

It's a hybrid decision, made at multiple points within the compiler and runtime.

In the language model, every asynchronous call or asynchronous property access is a potential suspension point. Those potential suspensions can be thought of as splitting the code into two different jobs: the job before the suspension point and the job that resumes after the suspension point. You can think of it as breaking up an async function into synchronous chunks, each of which is a job. In the compiler, this operation is called "coroutine splitting", and it occurs fairly late in the compilation process. (At the LLVM IR level, for folks who are familiar with the Swift compiler pipeline)
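As a rough illustration, here is a hypothetical hand-written analogue of that split (all names are invented, and the real transformation happens on LLVM IR, not Swift source): foo's body becomes two synchronous chunks, i.e. two jobs, joined by a continuation callback that stands in for the potential suspension point.

```swift
// Hypothetical hand-split analogue of coroutine splitting.
func fooBeforeAwait(then resume: @escaping (Int) -> Void) {
    // Job 1: the synchronous work before `await getCachedValue()`.
    // The await becomes "start the callee, passing the rest of foo
    // along as its continuation".
    startGetCachedValue(continuation: resume)
}

func fooAfterAwait(_ x: Int) {
    // Job 2: the synchronous work that resumes after the suspension point.
    print("resumed with \(x)")
}

// Stand-in for the callee: it may invoke the continuation immediately
// (cache hit, no suspension) or enqueue it as a job to run later
// (an actual suspension).
func startGetCachedValue(continuation: @escaping (Int) -> Void) {
    continuation(42) // fast path: resume synchronously
}

fooBeforeAwait(then: fooAfterAwait) // prints "resumed with 42"
```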

If the target of an asynchronous call is on a particular actor, then the runtime needs to ensure that we're running on that actor. We call that an actor "hop", and it's one common way for potential suspension points to turn into actual suspensions at run-time because (e.g.) the actor might already be busy on another thread or it might have special execution semantics (like MainActor does).
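For example (Counter here is a made-up actor for illustration): awaiting a call on an actor from outside that actor is exactly such a potential suspension point, and whether it becomes an actual suspension depends on whether the actor's executor is free at that moment.

```swift
// Illustrative actor hop: calling into an actor from outside it.
actor Counter {
    private var value = 0

    func increment() -> Int {
        value += 1
        return value
    }
}

func useCounter(_ counter: Counter) async -> Int {
    // Potential suspension point: the runtime hops onto counter's
    // executor before running increment(), and may actually suspend
    // if the actor is busy on another thread.
    await counter.increment()
}
```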

Now, if the compiler can see that the code you're calling asynchronously at a potential suspension point is either always running on the same actor (e.g., you're calling an async function on your own actor), or the code after a potential suspension point doesn't actually care what actor it's running on, then it can "fuse" those two jobs together into one synchronous block. That's what's happening in the example @David_Smith cites, and would presumably happen in your example as well: the getCachedValue check would get inlined and the potential suspension point would be removed entirely, because the compiler can see that the code has no actor hops in the way.



Thanks @Douglas_Gregor, that's an extremely helpful explanation and makes a lot of sense.