Why not go the other way, allowing "throws async" instead of banning "try await"? We would probably need to allow arbitrary orders, including having the call site be in a different order than the declaration site, when/if we move to a general effects mechanic. (Hmm, should we do the general effects system first, then add async/await?)
I think enforcing a clear order is a good idea. This would save the community from writing linters/formatters that normalise the order. But it is not clear why try await was preferred over await try.
I'm reading await and try as prefix operators, applied from right to left.
- await try foo() means "try to start an asynchronous operation, and if it starts, wait for the result". The signature of foo would be equivalent to foo() -> Result<Promise<T>, Error>.
- try await foo() means "launch an asynchronous operation which may fail, wait for it to finish, and try to get its result". The signature of foo would be equivalent to foo() -> Promise<Result<T, Error>>.
The proposal models the second case, so IMO try await and throws async make more sense.
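For concreteness, a minimal sketch of the second reading, assuming a hypothetical foo() declared async throws. Writing the Result out explicitly shows that the asynchronous part completes first and the error is only unwrapped afterwards, matching Promise<Result<T, Error>>:
func foo() async throws -> Int { 42 }

// The same call with the Result made explicit, to expose the ordering.
func fooResult() async -> Result<Int, Error> {
    do { return .success(try await foo()) } catch { return .failure(error) }
}

func demo() async throws -> Int {
    let result = await fooResult()   // the asynchronous work finishes first
    return try result.get()          // only then is any error propagated
}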
How do we turn a long-running (e.g., compute-intensive) synchronous function into a non-blocking async primitive (to take full advantage of structured concurrency and run multiple in parallel)? Do we need to manually resort to DispatchQueue/Operation/OperationQueue? Or is there a Task API ready for that?
Can we still call Task.checkCancellation() during the long-running synchronous function?

I'm wondering how I write my first async method that does not call another async method (doesn't use await).
I thought I might have overlooked something (there is much text about concurrency), but as you didn't get a simple answer yet, there really seems to be a hole here...
Has anyone already thought of allowing the following?
let resultA = async takesALongTime(100000000)
...
return await min(resultA, resultB)
(I think there is no explanation needed: Either the idea is obvious, or it does not fly ;-)
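For reference, the structured concurrency pitch's async let spelling covers roughly this shape. A minimal sketch, with takesALongTime stubbed in as a stand-in and a second operand invented for the example:
func takesALongTime(_ n: Int) -> Int { n }   // stand-in for the example's function

func fastestOfTwo() async -> Int {
    async let resultA = takesALongTime(100000000)
    async let resultB = takesALongTime(200000000)   // hypothetical second operand
    return await min(resultA, resultB)
}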

How do we turn a long-running (e.g., compute-intensive) synchronous function into a non-blocking async primitive (to take full advantage of structured concurrency and run multiple in parallel)? Do we need to manually resort to DispatchQueue/Operation/OperationQueue? Or is there a Task API ready for that?
Exactly, there is an API for that.
There's one thing that still confuses me:
public static func withUnsafeContinuation<T>(
    operation: (UnsafeContinuation<T>) -> ()
) async -> T { ... }
Will this operation be running on a different thread than its caller? Otherwise it will block the calling thread.
If the answer is yes, does withGroup have similar semantics? Will the body passed to withGroup also be running on a different thread?
Thanks!

There's one thing that still confuses me:
public static func withUnsafeContinuation<T>( operation: (UnsafeContinuation<T>) -> () ) async -> T { ... }
Will this operation be running on a different thread than its caller? Otherwise it will block the calling thread.
It’s packaging up the “rest of the current function” into a continuation (the UnsafeContinuation instance) that you can use in a completion handler closure. When that completion handler gets called, the rest of your function continues. It’s glue for completion-handler APIs.
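A minimal sketch of that glue, assuming a hypothetical completion-handler API loadData(completion:) and the Task.withUnsafeContinuation signature quoted above; resume(returning:) is assumed to be the continuation's resuming method, as in the continuation APIs:
import Foundation

// Hypothetical legacy API that reports its result via a completion handler.
func loadData(completion: @escaping (Data) -> Void) { /* ... */ }

// The async wrapper: the "rest of the caller" is packaged into `continuation`,
// and it resumes once the completion handler fires.
func loadData() async -> Data {
    await Task.withUnsafeContinuation { continuation in
        loadData { data in
            continuation.resume(returning: data)
        }
    }
}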

If the answer is yes, does withGroup have similar semantics? Will the body passed to withGroup also be running on a different thread?
withGroup is very different. It helps you manage a set of child tasks that run concurrently.
Doug
withUnsafeContinuation is the right choice for converting an old-school completion-handler based function into an async primitive.
But my question is:
Do we need to manually resort to DispatchQueue/OperationQueue to make a long-running (e.g., compute-intensive) synchronous function into a non-blocking async primitive (to take full advantage of structured concurrency and run multiple in parallel)? Or is there a Task API ready for that?
I originally thought runDetached was the way to go, before @kirilltitov kindly pointed out it was withUnsafeContinuation I should look at. Plus, I don't want the long-running task(s) to be detached from the invoking scope, which is what runDetached offers.
In a trivial sense, if by "blocking" you mean long-running, you can wrap any synchronous function in an async function and then it's an async function, which you can schedule however you want. (Note that async by itself doesn't imply anything about thread scheduling.)
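A trivial sketch of that wrapping, with hypothetical names; note that the wrapper being async by itself says nothing about which thread the minutes-long work runs on:
func computeSync(_ input: Int) -> Int { /* minutes of work */ return input }

// Callable with `await`, but the body still runs synchronously on whatever
// executor/thread ends up running this function.
func computeAsync(_ input: Int) async -> Int {
    computeSync(input)
}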
In a deeper sense: no, you can’t take actually-blocking code – i.e., code which puts a system thread in a blocked state – and make it non-blocking without rewriting it. It needs to be audited for internal blocking calls (like synchronous I/O), have those rewritten to use async alternatives, and also be audited for assumptions of atomicity that get violated by the addition of suspension points.
If this could be fully automated, there wouldn’t be a need for special syntax.
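A sketch of the kind of atomicity assumption meant here, with hypothetical names: a check-then-update that was indivisible in the synchronous version gains a suspension point in the async version, so other work can interleave between the check and the update.
var cachedValue: Int? = nil
func expensiveComputeSync() -> Int { 42 }          // stand-in for real work
func expensiveComputeAsync() async -> Int { 42 }   // stand-in for real work

func cachedComputeSync() -> Int {
    if let value = cachedValue { return value }
    let value = expensiveComputeSync()   // nothing can interleave here
    cachedValue = value
    return value
}

func cachedComputeAsync() async -> Int {
    if let value = cachedValue { return value }
    let value = await expensiveComputeAsync()   // suspension point: another task may
                                                // interleave, miss the cache too, and recompute
    cachedValue = value
    return value
}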
Given a long-running pure function (it performs no I/O, has no side effects, is purely compute-intensive, and depends only on its value-typed input arguments, so there is nothing to audit, I guess):
func calculateSync(input: Int) -> Int { /* minutes long */ return 42 }
calculateSync is synchronous and takes minutes to finish. In order to calculate multiple values in parallel via the proposed structured concurrency APIs, a calculateAsync is required:
func calculateInParallel() async throws -> [Int] {
    try await Task.withGroup(resultType: (Int, Int).self) { group in
        var values: [Int] = Array(0..<8)
        for idx in values.indices {
            let input = values[idx]
            try await group.add {
                (idx, await calculateAsync(input))
            }
        }
        while let (idx, computed) = try await group.next() {
            values[idx] = computed
        }
        return values
    }
}
How do I implement calculateAsync? Can Task.withUnsafeContinuation alone make it happen? Or is DispatchQueue.async also needed? If DispatchQueue.async is still required, can we perform Task.checkCancellation() there?
Thanks.
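A sketch of one possible shape of this bridge (not an answer from the proposal authors), reusing calculateSync(input:) from above, the quoted Task.withUnsafeContinuation, and a global dispatch queue so the computation doesn't tie up the calling executor's thread:
import Dispatch

func calculateAsync(_ input: Int) async -> Int {
    await Task.withUnsafeContinuation { continuation in
        DispatchQueue.global().async {
            // Task.checkCancellation() (the question above) presumably needs to run
            // in task context rather than inside this queue closure, which is not
            // itself running as part of the task.
            continuation.resume(returning: calculateSync(input: input))
        }
    }
}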
I think you could use Task.runDetached in this case, then use structured concurrency to manage its lifetime (async let or a TaskGroup like in your example). If it's very compute-intensive, I guess you might also want to schedule it on a dispatch queue and use that to manage how many run simultaneously.
First, I don't think the 8 concurrent child tasks are detached by nature, because:
- They need to be cancelled if their parent task is cancelled (they need to perform Task.checkCancellation() periodically).
- They also need completion observing.
Second, and most importantly, I'm not sure whether Task.runDetached starts its closure on another thread. Otherwise, there won't be 8 concurrent tasks, only 1.
So I'm not sure runDetached is the correct way to do this. Please correct me if I'm wrong.
There's one final (off-topic) question. The code below still has too much ceremony IMHO:
func calculateInParallel() async throws -> [Int] {
    try await Task.withGroup(resultType: (Int, Int).self) { group in
        var values: [Int] = Array(0..<8)
        for idx in values.indices {
            let input = values[idx]
            try await group.add {
                (idx, await calculateAsync(input))
            }
        }
        while let (idx, computed) = try await group.next() {
            values[idx] = computed
        }
        return values
    }
}
I think the above code block is equivalent to:
func calculateInParallel() async throws -> [Int] {
    await Array(0..<8).map { calculateAsync($0) }
}
Why won't the language offer such a capability? Please let me know if this has been discussed before.
To chime in with one key insight here: TaskGroups are low level by design. They are what is used to build convenient operators. It's not yet clear what the stdlib will or will not ship in terms of convenience functions, but if parallelism over a dynamic number of tasks is needed, task groups are what is used to implement it.
Think about it this way:
- streams — ordered signals, "long", element-by-element processing; back-pressure as known thanks to the reactive-streams standard
- groups — unordered signals, "wide", parallel processing or "scatter-gather" style tasks, with control over the breadth of the processing; back-pressure as defined by a specific group (suspending the add)
- actors — "isolated" data, events linearized by a mailbox but able to be sent by any other task, reacting to events from many places; back-pressure here is implicit: by using async calls to invoke actors, we prevent starting too many calls until after the previous ones have completed
They are primitives; what kinds of operators and higher-level abstractions one can build and name on top of them is obviously a very large space, but we are not in the business of defining them all at once. We need to get the indivisible abstractions right, such that those operations can be built on them.
Specific task group application examples:
- select {} — spawn many tasks, get the first one; this is basically like Go's select builtin
- first(n:) { ... } — spawn some tasks concurrently (this is different than first(n) on a stream); take the first n, cancel the others (many variations here about what to do about errors)
- collect(n:) — regardless of errors, attempt to collect at least n results; cancel the rest
- map — with limited parallelism
- quorum(...) — spawns n tasks, awaits n/2 + 1
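To illustrate the "operators built on the primitive" point, here is a sketch of a hypothetical parallelMap built on the pitch-era Task.withGroup / group.add / group.next() spellings used earlier in this thread; parallelMap is not a proposed stdlib API.
func parallelMap<Element, T>(
    _ inputs: [Element],
    transform: @escaping (Element) async -> T
) async throws -> [T] {
    try await Task.withGroup(resultType: (Int, T).self) { group in
        var results = [T?](repeating: nil, count: inputs.count)
        for (idx, input) in inputs.enumerated() {
            try await group.add {
                (idx, await transform(input))
            }
        }
        while let (idx, value) = try await group.next() {
            results[idx] = value
        }
        return results.map { $0! }   // every index is filled once the group drains
    }
}
With something like this, the earlier example would collapse to roughly try await parallelMap(Array(0..<8)) { await calculateAsync($0) }.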
Makes perfect sense! Thanks @ktoso. Apologies for rushing a little.
Could you comment on how to convert a long-running synchronous function into an async function, which can be leveraged by a task group for parallel execution?
- Is runDetached the correct API to do that, even though the child task is not expected to be detached?
- Do we need to add a layer of DispatchQueue.async inside runDetached?
- Can we perform Task.checkCancellation() inside the above dispatch queue?
You’d have to define what you mean by “long running”. It’s a bit of a “catch all” phrase which makes discussions a bit harder since it’s so imprecise.
For example, if you mean a) "long running because it processes 2000 elements", then sure — make it an async task and just checkCancellation every few elements. If you mean b) "long running because it's blocking", these proposals can't change this, since cancellation is cooperative: we cannot make a blocking IO operation in some low-level C API respond to our notion of cancellation (without going off-process).
I would encourage thinking not about threads but about tasks; everything these proposals talk about should be confined to tasks. Tasks may share a thread, or they may not — it's up to the executors used by the runtime to decide that.
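A sketch of case a) above, with a hypothetical process(_:) step: the loop checks for cancellation every so often, so the task can notice it cooperatively; a blocking call inside process, case b), would not be interrupted by this.
func process(_ element: Int) -> Int { element * element }   // stand-in per-element work

func processAll(_ elements: [Int]) async throws -> [Int] {
    var output: [Int] = []
    for (index, element) in elements.enumerated() {
        if index % 100 == 0 {
            try Task.checkCancellation()   // assumed to throw if the surrounding task was cancelled
        }
        output.append(process(element))
    }
    return output
}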
Can you please clarify (or point me to some resource that does clarify) why you're not inclined to swap actor class for actor? To be honest I think this sentence alone is not clear enough. Do you mean you're not inclined to make actors a separate nominal reference type without inheritance like @Chris_Lattner3 proposed or you mean you're just not willing to change the syntax for declaring actors?

Can you please clarify (or point me to some resource that does clarify) why you're not inclined to swap actor class for actor? To be honest I think this sentence alone is not clear enough. Do you mean you're not inclined to make actors a separate nominal reference type without inheritance like @Chris_Lattner3 proposed or you mean you're just not willing to change the syntax for declaring actors?
I meant both. I just posted a detailed response to the pitch to make actors a separate nominal reference type. I recommend that we resolve the semantic issue before debating the syntax.
Doug

This is a good idea. I think there is layering here, based on dependencies amongst the 7 proposals outlined.
I turned this into a little dependency diagram.
Doug