On the proliferation of try (and, soon, await)

How about trying {...} instead of try do {...}? ; not sure about awaiting {...} though

I don't know; that's also an interesting feature idea, but I think it's different from the one that surfaces invariant-preservation as a first-class thing. There are lots of operations that are non-transactional in case of an error and yet preserve all knowable invariants (sorting an array, for example).

I had something like this in mind, very much inspired by DB transactions (once again, I'm out of my field here, so forgive my naivete), but for that to be more interesting than the "do {} catch" construct, "invariant preservation", or at least some aspect of it, needs to be an orthogonal concept.

func updateUser() {
    atomic(firstName, lastName) { // captured and reverted to their original values by default in case of failure
        self.firstName = try decode(…) // here try wouldn't add much
        self.lastName = await fetch(…) // here await wouldn't add much (maybe?)
    } rollback {
        // lets you get notified of failure, timeout, etc. and override the default rollback behavior
        self.firstName = "?"
        self.lastName = "?"
    }
}

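For comparison, here is roughly what the default behavior of the pitched atomic { } rollback { } would amount to with today's do/catch. This is only a sketch: `decodeFirstName`/`fetchLastName` are hypothetical stand-ins for the elided decode(…)/fetch(…) calls above.

```swift
enum UpdateError: Error { case fetchFailed }

// Hypothetical stand-ins for the decode(…)/fetch(…) calls in the pitch.
func decodeFirstName() throws -> String { "Jane" }
func fetchLastName() async throws -> String { throw UpdateError.fetchFailed }

final class User {
    var firstName = "?"
    var lastName = "?"

    func updateUser() async throws {
        // Manually capture the values the pitched `atomic` would capture for us...
        let saved = (firstName, lastName)
        do {
            firstName = try decodeFirstName()
            lastName = try await fetchLastName()
        } catch {
            // ...and manually perform the default rollback before rethrowing.
            (firstName, lastName) = saved
            throw error
        }
    }
}
```

The pitch essentially asks the language to generate this capture-and-restore scaffolding, which is why "invariant preservation" needs to pull its weight as a separate concept.
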
@benjamin.g, you can get the effect you're after by modeling the "state-that-you-want-to-update-transactionally" in a standalone struct:

struct FullName {
    var firstName: String
    var lastName: String
}

var fullName: FullName

func updateUser() async throws {
    // First and last name are set together, or not at all
    fullName = try await FullName(
        firstName: decode(...),
        lastName: fetch(...)
    )
}
You may need to add appropriate locking / actorification if thread-safety is required.
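
A sketch of one such "actorification", assuming hypothetical async sources in place of the elided decode(...)/fetch(...) calls: moving fullName onto an actor serializes the transactional update against concurrent callers, so no extra locking is needed.

```swift
struct FullName {
    var firstName: String
    var lastName: String
}

// Hypothetical async sources standing in for decode(...) and fetch(...).
func decodeFirstName() async throws -> String { "Jane" }
func fetchLastName() async throws -> String { "Doe" }

actor User {
    var fullName: FullName?

    func updateUser() async throws {
        // Both sub-values are produced before the single assignment,
        // so a failure in either leaves fullName untouched.
        fullName = try await FullName(
            firstName: decodeFirstName(),
            lastName: fetchLastName()
        )
    }
}
```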


I actually had the opposite impression. To me, having those awaits in there to emphasize that there is some kind of asynchronicity is valuable. Otherwise it would be easy to look at it and think it's just a regular merge sort.

Although according to my understanding of async/await, that implementation would not actually benefit from any concurrency. Since the only thing being awaited is the recursive call to mergeSort, there's no "real" asynchronicity being introduced. It's telling the compiler "I'm asynchronous because I'm asynchronous" but really it pretty much isn't. As soon as you actually await it the entire thing executes, sequentially, recursive calls and all.
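
For reference, the two shapes under discussion can be sketched like this (merge(_:_:) is an ordinary synchronous helper written for this sketch, not part of any proposal). Awaiting the recursive calls directly runs everything sequentially; with `async let`, each half becomes a child task that the executor may run concurrently.

```swift
// Plain synchronous merge of two sorted arrays.
func merge(_ a: [Int], _ b: [Int]) -> [Int] {
    var result: [Int] = []
    var i = 0, j = 0
    while i < a.count && j < b.count {
        if a[i] <= b[j] { result.append(a[i]); i += 1 }
        else            { result.append(b[j]); j += 1 }
    }
    return result + a[i...] + b[j...]
}

// Awaiting the recursive calls directly: everything runs sequentially,
// "async" keyword or not.
func mergeSortSequential(_ xs: [Int]) async -> [Int] {
    guard xs.count > 1 else { return xs }
    let mid = xs.count / 2
    let left = await mergeSortSequential(Array(xs[..<mid]))
    let right = await mergeSortSequential(Array(xs[mid...]))
    return merge(left, right)
}

// With async let, each half becomes a child task that the executor
// MAY run in parallel with the other.
func mergeSortConcurrent(_ xs: [Int]) async -> [Int] {
    guard xs.count > 1 else { return xs }
    let mid = xs.count / 2
    async let left = mergeSortConcurrent(Array(xs[..<mid]))
    async let right = mergeSortConcurrent(Array(xs[mid...]))
    return await merge(left, right)
}
```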

OK, I read up on async let again and it looks like I'm wrong. A task created with async let is run concurrently, which I take to mean executed on a different thread, though the proposal doesn't seem to say that explicitly so I'm a little confused. Is that intentionally ambiguous?

The actual scheduling of tasks is delegated to an executor: https://github.com/DougGregor/swift-evolution/blob/structured-concurrency/proposals/nnnn-structured-concurrency.md#executors

So the actual flavor of concurrency will depend on both the async method, and the executor. The phrasing prefers "concurrency" over "parallelism", for multiple reasons:

  • As a consumer of async methods with the await keyword (or async let), the language wants you to think about concurrency more than parallelism or threads (lexicon). For example, an async method is allowed to execute and return synchronously. An async method is allowed to schedule some little jobs on the main RunLoop (think I/O). An async method is allowed to spawn a new thread, or use libDispatch, or use other facilities provided by the new Task and Actor APIs.
  • async/await are more fundamental than Structured concurrency and Actors, which will, I guess, answer your questions with more details.
  • I expect that "don't confuse DispatchQueue.async with Swift's async" will soon become a mantra, and a key step in the discovery of the new concurrency features.
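
A minimal illustration of that first point, with hypothetical names: an async function is permitted to complete without ever suspending, and only particular paths through it may actually hit a suspension point.

```swift
// Hypothetical cache; only a miss would trigger real asynchronous work.
let cache: [String: Int] = ["answer": 42]

func lookup(_ key: String) async -> Int {
    if let hit = cache[key] {
        return hit // no await on this path: the call completes "synchronously"
    }
    // Only the miss path actually suspends (sleep stands in for real I/O).
    try? await Task.sleep(nanoseconds: 1_000_000)
    return 0
}
```

So marking a function `async` says "this may suspend", not "this will run on another thread"; the caller still has to write `await` either way.
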

But Doug has said that the global executor will run them in parallel (probably on a separate thread but since there's a thread pool behind it, you can't be 100% sure IIUC).

This is an important detail of the executor chosen, which may vary between executors. Some async users may prefer an executor (assuming one comes along) modeled more along the lines of JavaScript, where there is a single thread that user code gets executed on. This still can provide concurrency, just not parallelism.

Of course this means that users must be aware of the executor that is being used. Now, if a couple of third-party executor libraries become common, then I can start to see this becoming a point of contention among Swift libraries where they only work on some executors and not others, leaving a fractured ecosystem. (Which leads me to ask, should Swift define how executors should behave, to some extent?)

Just my 2c, but you're not the only one confused by this. The fundamental issue here is that async as proposed by the current async/await model only means concurrency. However, the "structured concurrency" proposal overloads the keyword to "introduce parallelism" by spawning a new parallel computation, which confuses the model (as you can see from non-Apple people's incorrect interpretation of the proposed model).

I don't understand why we would reuse that keyword here, particularly as a decl modifier. I would recommend something explicit (e.g. let myFuture = spawnTask { ... } or use property wrappers with autoclosure to do @Spawned let myFuture = ...) to make it clear that new concurrency units are being created.

If that were untenable for some reason, we should pick another word like spawn let myFuture = ... to eliminate the confusion and make it clear that this is a really different thing. This is a decl modifier so we can pick any word without intruding on the keyword namespace.

I'm not sure if it is intentional or not, but I'm not aware of a reason to do this.



I’m increasingly of the opinion that async let should behave the way I at first assumed it worked, which is that it only works with async expressions and doesn’t change at all how the called functions are run. I asked on the other thread about how a few different examples would run, but I haven’t gotten an answer. I feel like if these two calls work differently then that’s too subtle:

 // func g() async

// 3
await g()

// 4
async let task = g()
await task

IMO those should do the exact same thing. If they don’t then, like Chris said, there should be a different keyword used to make that clearer, and then there should be a way to split the await from the call site without changing behavior.


I’m sorry but structured concurrency is about concurrency.

As the Structured Concurrency proposal states in its first paragraphs, async/await by itself does not really achieve concurrency, but forces the execution to be sequential (suspending instead of blocking, but semantically "sequential"), and structured concurrency introduces the notion of concurrency, which in turn enables executors to run these concurrent "pieces of a program" in parallel, if they can and wish to do so.

It’s the usual story: since it introduces concurrency, it allows those concurrent pieces to be executed in parallel, and it just so happens that default executors are generally assumed to be multithreaded and, as such, would take advantage of this and run these (e.g. async tasks or tasks launched into a task group) in parallel.

There is no inherent promise or need that child tasks will run in parallel, though. All these concepts work perfectly fine in a single-threaded runtime. In practice, yeah, they often will; that’s why we structure our programs with concurrency in mind. But how parallel they end up being, if at all, is a different “plane” entirely.

New sugar or keywords might be nice, though. I may be biased by the work on task groups, which become very verbose (but they are also a low-level building block, so maybe that’s fine?), as launching tasks in them becomes await group.add { await thing() }. For that, I agree it would be nice to write launch { ... } and automatically attach to the current task. If the parent is a group, dropping the reference to the task would be allowed, since it is possible to collect the completions independently; if the parent is not a group, they would have to be stored. I’m not sure if this is a simpler or harder model to reason about, to be honest.

It is ambiguous because it would be wrong to strictly define it; on runtimes which are not multithreaded, these would not execute in parallel.

On runtimes (or executor configurations) which can and want to leverage parallelism, these are the “pieces” that might be executed in parallel. The amount of parallelism is not exactly 1:1 with what the source expresses; it can only express the concurrent “skeleton” of the execution.


I think I understand what you are trying to convey, but this explanation doesn't make sense to me. The language model allows executors of different kinds, ones which are implemented serially and ones that are implemented with parallelism. As such, the language model has to be that "pieces of programs" scheduled onto an executor "could" introduce parallelism. As such, I'd summarize the situation as:

  1. Async/await themselves don't introduce parallelism. It is a simple state machine transformation that allows functions to be suspended. I don't think this is controversial.

  2. The structured concurrency proposal provides (more than one) way to put async units of code onto executors, which potentially introduces parallelism. One of these is the async let proposal, which puts computation onto an executor, potentially introducing parallelism.

I think you're arguing with the second part. Do you disagree that this potentially introduces parallelism into the program, or am I wrong about part #1?

Sure, but "might be nice" isn't really the bar to meet. I'd recommend introducing the language and library model without the sugar. Once we understand and accept it we can look at the specific contribution of the sugar, separated from the general contribution of the structured concurrency model (which is a huge progression even in the absence of the sugar). Separating out the concerns helps to evaluate and nudge the various pieces in the right direction independently.

We always know we can add sugar later to many things in the language, but that is usually a more nuanced discussion than the programming model engendered by the larger proposal.

I don't follow this argument. By a similar argument, we shouldn't require "async" because not every invocation will suspend. The question is "what should programmers be forced to anticipate"?

If the executor is "allowed to" implement this with a parallel execution model, then all programmers will be expected to cope with the complexity that that implies, including race conditions or ActorSendable depending on the other design points chosen for the model.


My argument is that the proposal and features of Structured Concurrency do not conflate the terms, and that some statements in this thread add to the confusion, thus the clarification attempt.

Specifically the phrasing of:

is very misleading and confusing people reading the proposals and this thread, by muddying the waters.

You now, correctly, added the "potentially" word to the statements:

Which makes the statement correct, while the previous one was highly misleading.

And then again... "potentially parallel" is exactly what concurrency is, thus proving and confirming that structured concurrency does not really introduce or speak in terms of parallelism, but merely concurrency.

All I'm saying is that the previous statements made here were very confusing, and seeing how people in this thread are getting more confused, it was necessary to clarify and state what expresses what correctly.

So Dave's take on it, as well as your corrected (1 + 2) statements express the semantics better, and hopefully my clarification will also help other readers of this thread.

This was in direct follow-up to your spelling counter-proposal. I'm not actually proposing any such sugar, just saying "yeah, a thing similar to what you mention there might be nice", exactly as you say yourself: might be nice but doesn't meet the bar, so let's ignore it for now:

It is just as much sugar as the async let spelling. Doug and the compiler folks are really the ones calling the shots here; but personally, I think this requires equivalent amounts of sugar and compiler magic.

We must enforce that such spawned tasks are awaited on, so however they end up being marked, there is still sugar and magic needed to enforce that.

It's a direct answer to "will it run on a different thread", to which the answer is: assume the worst (that it might). If it helps we can specify that, but that's IMHO why leaving "how exactly it executes" undefined is pretty much right.

I find that your explanations are clear, and that drawing limits in the scope of each component of the roadmap is important. Some details that are delegated to a component B are expected to be left "undefined" by another component A. This is all good.

In this thread, however, people ask questions that will need a precise answer eventually. Actual modes of concurrency have to become “defined” at some point.

Which pitch of the roadmap will answer those questions? Is it all about executors? Which ones will ship (default, actors, custom, others)? What are their precise behaviors and guarantees? Can the implementation of an async function control executors? Can an async function manage its concurrency outside of executors (say, using low-level runtime constructs such as threads or dispatch queues)? Is the behavior of one particular async method expected to change depending on the context it is called from (say, from the default global executor, or from an actor)?


@ktoso, by this I mean something precise:

Our beloved LibDispatch is a wonderful tool that comes with gotchas. It is possible to misuse it. Traps that developers might fall into include thread explosion, priority inversion, and certainly more.

If reasoning by analogy is not too wrong here, it is to be expected that Swift concurrency as a whole will exhibit gotchas as well, and opportunities for misuse. After all, code runs on physically constrained devices. Demanding tools will push the language to its limits.

In this context, it is normal that people ask for more details. And they should get an answer. Even if the answer is: "details are not ironed yet. Come back when ...".

Another possible answer is: "details are allowed to change from one Swift version to another. The only guarantees are: ..." But please mind that people will be very unhappy if their working code suddenly drains system resources or turns into a snail after a system update. The "assume the worst" mindset applies to many things.


Would this imply that I cannot write code like this because it might introduce a data race if a multi-threaded executor is used?

class DoublePing {

  var running = 0

  func run() async {
    async let result1 = ping()
    async let result2 = ping()
    await [result1, result2]
  }

  func ping() async {
    running += 1
    await networkCall("/ping")
    running -= 1
  }
}
My understanding is that yes, you would have to guard against races here. Async code is re-entrant (currently always, there are discussions elsewhere about that), and the operator += is not atomic.

Since that's just an ordinary class — i.e. neither an actor class nor associated with a global actor — you don't really know anything about what executor you might be running on and what executor you might be resuming to after any awaits you make. So yes, that code is as race-prone as if you wrote the analogous code today with completion handlers and no queues or locks.

We do want to eliminate those races, but it's tricky because classes today are largely unrestricted, and it may take some time (or even prove impossible) to figure out something acceptable.

If I understand @John_McCall and the proposals correctly, the implementation of DoublePing would be safe if it was an actor or associated with a global actor—even if the actor is reentrant. The code would run interleaved but not in parallel, which would prevent data races*.

Edit: *(Unless you manually bind a non-exclusive task executor to the actor; not sure if that's possible).
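
A sketch of that suggestion, with networkCall(_:) stubbed out for illustration: as an actor, DoublePing's counter updates are actor-isolated, so they can still interleave across suspension points but cannot data-race, even on a multithreaded executor.

```swift
// Stub standing in for the networkCall(_:) used in the example above.
func networkCall(_ path: String) async {
    try? await Task.sleep(nanoseconds: 1_000_000)
}

actor DoublePing {
    var running = 0

    func run() async {
        // The child tasks may run in parallel on a multithreaded executor,
        // but every access to `running` goes through the actor.
        async let result1: Void = ping()
        async let result2: Void = ping()
        _ = await result1
        _ = await result2
    }

    func ping() async {
        running += 1 // actor-isolated: no data race, though interleaving around the await below is still possible
        await networkCall("/ping")
        running -= 1
    }
}
```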


Me too. I just went back and read the definition of async let again, and I'm still a little confused. I think it comes down to the definition of concurrent, which I know has already been discussed. I'm looking for example at this quote from the Structured Concurrency proposal:

To make dinner preparation go faster, we need to perform some of these steps concurrently. To do so, we can break down our recipe into different tasks that can happen in parallel.

That makes it sound like "concurrently" and "in parallel" are essentially the same thing, even though IIUC they are not generally considered to be so.

Does a task created with async let execute in parallel (on another thread) or merely concurrently (interleaved with other tasks on the current thread)? Does it depend on the executor? If so, I think the syntax for that should be more explicit, like Task.execute(···) or something. I was assuming mere concurrency.

Which leaves me wondering - was my initial assessment of that merge sort implementation correct? Does it just execute the whole sort as soon as you await it (and less efficiently because of the overhead of creating tasks)? Does it benefit from parallelism because of async let? Or does it depend invisibly on what the executor might be?
