SE-0296: async/await

withUnsafeContinuation only lets you create an async function from an older pattern (e.g. callbacks). Unless I totally misunderstood your concern, it does not help here at all. Even async let wouldn't be any less of a problem for resources than general execution is.

You'd need to handle the stack-size problem the same way you handle it for any synchronous function (plus async let).

If you want to review a proposal, you first need to be familiar with the basic, general programming concepts the proposal builds on. Proposals are not encyclopedias that explain everything since the beginning of time.

For this particular proposal, understanding the typical asynchronous programming paradigms is necessary background for commenting on the specific version proposed for Swift.

That means a basic understanding of the differences between async/await, Futures, Promises and Observables (for example), as they are used in popular programming languages such as C#, Rust, JavaScript, Python, or Kotlin.

If you are not willing to put in the effort to understand the basics, then don't expect anyone to spend extra effort educating you from scratch.

As has been said many times, async/await is not a totally new or unique concept for Swift; it has a lot of prior art in other programming languages. This review is about honing the details, not questioning the whole concept, which has already been proven elsewhere.

6 Likes

Except for the concurrency case, you should still have a single stack. You'll use the same stack when coming back from await-ing, even if you are now in a different context.

Better yet, compiler now understands the control flow better and can optimize across the usual "callback boundary".

That's a good question. There are a few possible designs in terms of stack implementation, though I don't know what we ended up using. Maybe @John_McCall can comment on that (sorry for the ping).

self is still captured by the async function so it won't be deallocated. I guess you could use isKnownUniquelyReferenced(_:) as an ad-hoc solution :thinking:. There's also a cancellation story you might be interested in, in the Structured Concurrency pitch.
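Just to illustrate that ad-hoc idea (a minimal sketch with hypothetical names; `Worker` and `isSoleOwner` are not part of any proposal):

```swift
final class Worker {}

// Returns true when `ref` is the only strong reference to its object.
// If an async task had captured the same object, this would return false.
func isSoleOwner(_ ref: inout Worker) -> Bool {
    isKnownUniquelyReferenced(&ref)
}

var worker = Worker()
print(isSoleOwner(&worker))        // true: we hold the only reference

let extraRef = worker              // a second strong reference appears
print(isSoleOwner(&worker))        // false while `extraRef` is alive
withExtendedLifetime(extraRef) {}  // keep `extraRef` alive to this point
```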

1 Like

Hmmm, I'm very curious how the same stack can be reused across the partial tasks that will presumably be used behind the async/await mechanism. Async functions may suspend and resume in any order, so you may end up with unused holes in your stack... but then maybe there are techniques I'm not aware of.

Do you mean when there are multiple concurrent threads running async functions?

The partial tasks of an async function run serially, one after another. That's the point of async.

2 Likes

Firstly, I presume async functions may run on different kernel threads. If you have a multicore CPU you want to take advantage of that (e.g. Swift NIO does, as do many other systems).

Then there are partial tasks, potentially a great many of them, scheduled on the same kernel thread. Say you make N async calls to HTTP client tasks. Obviously there is no guarantee of the order in which the suspended functions will be resumed when the responses arrive. How can these functions possibly share the same CPU stack (unless leaving holes in the stack is somehow OK)?

You may be interested in the Structured Concurrency pitch, though each execution would, as you said, be on a different stack.

2 Likes

I think you might be missing the point that, in optimized builds, the Swift compiler generally releases resources early unless constructs such as withExtendedLifetime are used. Object lifetimes are thus not bound to the stack, unlike in e.g. C++.

2 Likes

I don't understand. A resumed function should be able to continue as if nothing happened, with access to all its local variables, then return and let its callers use their local variables, etc. In my mind, await is just a frozen state that should be fully restored after resumption. What am I missing?

Yes, but if you don't actually use those local variables after the suspension point, the compiler is free to optimize their lifetimes, and indeed it will.
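A minimal sketch of pinning a lifetime with withExtendedLifetime (the names `Tracker` and `work` are hypothetical, just for illustration):

```swift
final class Tracker {
    static var deinitCount = 0
    deinit { Tracker.deinitCount += 1 }
}

func work() -> Int {
    let tracker = Tracker()
    // Without this, the optimizer may release `tracker` right after its
    // last use; withExtendedLifetime pins it until the closure returns.
    return withExtendedLifetime(tracker) {
        (1...10).reduce(0, +)   // `tracker` is guaranteed alive here
    }
}

print(work())   // 55; the Tracker is deinitialized only after the closure
```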

2 Likes

I might be wrong, but reading the proposal, my understanding is that this:

func test() async -> Int {
    let result = await a()
    return await b(result)
}

gets sliced into something like this:

func test(completion: (Int) -> ()) {
    a { result in
        b(result) { result in
            completion(result)
        }
    }
}

Each partial task is thus some sort of closure; local variables on the "stack" become closure captures (at least those that outlive a partial task). Surely this can be optimized better than @escaping closure captures, since you have the guarantee that the closure will execute, and execute exactly once.

(Note: Not present in this code is a hop back to the right queue/executor at the beginning of each closure.)
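That hop could be sketched roughly like this (purely hypothetical; this assumes DispatchQueue-based executors, and the stand-ins for a and b are my own):

```swift
import Dispatch

// Hypothetical stand-ins for the async operations in the example.
func a(_ completion: @escaping (Int) -> ()) { completion(1) }
func b(_ x: Int, _ completion: @escaping (Int) -> ()) { completion(x + 1) }

let executor = DispatchQueue(label: "example.executor")

// Hand-written desugaring of `test()` that also hops back onto
// `executor` at the start of each partial task.
func test(completion: @escaping (Int) -> ()) {
    a { result in
        executor.async {              // hop back before partial task 2
            b(result) { result in
                executor.async {      // hop back before partial task 3
                    completion(result)
                }
            }
        }
    }
}

let done = DispatchSemaphore(value: 0)
test { value in
    print(value)                      // 2
    done.signal()
}
done.wait()
```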

So the question is: are these two really equivalent? It seems as though the first variant doesn't unwind the stack until it's resumed, while the second one unwinds immediately.

@pyrtsa: I can see how certain optimizations are possible within the function scope, but what about the entire call stack that's under the function being suspended and resumed?

Local variables can be "unwound", or deinitialized, immediately after their last point of use in Swift (subject to the optimizer). They are not guaranteed to live until the end of the scope like in C++. This does not change with async/await.

import Foundation

func test() {
    let a = NSObject()
    let className = a.className
    // `a` is no longer used past this point, so it may be deallocated here
    print(className)
}
1 Like

What I mean by "the stack will be unwound" is unwound all the way down to the very bottom (the RunLoop, I suppose), not just this function's frame.

I find this highly patronizing and insulting.

This is an inappropriate straw-man.

Patronizing again.

In Swift today, the asynchronous paradigm is completion handlers. The proposal’s entire motivation section solely discusses completion handlers. So yes, it is necessary to understand completion handlers as a prerequisite.

It is not appropriate to expect everyone to know the entire landscape of asynchronous paradigms. But it is appropriate to expect the authors of a proposal to be familiar with most of the landscape. And it is incumbent upon them to describe in the proposal what parts of the map they looked at, and why they chose one direction over another.

Insulting again.

I already explained the effort I have spent trying to understand.

Plus, there is a massive difference between asking every reviewer individually to learn all possible avenues of asynchronicity in order to compare them, and asking the authors of a proposal to explain what their proposal does and why they chose that design.

No.

No, no, no.

This is not a bikeshedding review.

We are not here simply to “hone the details”.

We are trying to make a fundamental, low-level, long-lasting decision about the shape of asynchronicity in Swift.

The proposal needs to explain exactly what is being proposed, so that there is no ambiguity and no misunderstanding. And it needs to explain why that design is proposed, rather than any other.

At present, multiple people, including some of the very earliest authors of Swift itself, have expressed uncertainty about what is being proposed, and started new threads to ask how certain things will work.

This further indicates that the proposal text in its current form does not adequately communicate what is being proposed and why.

6 Likes

I doubt that it's transformed into closures; that would be highly inefficient. Coroutines are usually transformed into state machines, similar to what Regenerator did for JavaScript generators. They don't necessarily have to be stackful.

function *range(max, step) {
  var count = 0;
  step = step || 1;

  for (var i = 0; i < max; i += step) {
    count++;
    yield i;
  }

  return count;
}

transformed into

var _marked = regeneratorRuntime.mark(range);

function range(max, step) {
  var count, i;
  return regeneratorRuntime.wrap(function range$(_context) {
    while (1) {
      switch (_context.prev = _context.next) {
        case 0:
          count = 0;
          step = step || 1;
          i = 0;

        case 3:
          if (!(i < max)) {
            _context.next = 10;
            break;
          }

          count++;
          _context.next = 7;
          return i;

        case 7:
          i += step;
          _context.next = 3;
          break;

        case 10:
          return _context.abrupt("return", count);

        case 11:
        case "end":
          return _context.stop();
      }
    }
  }, _marked);
}

I can't confirm that this is exactly how the Swift compiler does it, though. Also, yield there is not the same as await; JavaScript transpilers had another transformation on top of that, which turned async functions into generator functions and await into a yield statement yielding a promise. But overall, transforming "suspendable" functions into state machines seems to be much more efficient than nesting closures.
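For comparison, here is how the earlier `test()` might be hand-lowered into such a state machine in Swift (hypothetical, not how the compiler actually works; the a/b stand-ins are mine):

```swift
// Hypothetical stand-ins for the async operations in `test()`.
func a(_ completion: @escaping (Int) -> ()) { completion(1) }
func b(_ x: Int, _ completion: @escaping (Int) -> ()) { completion(x + 1) }

// Hand-lowering of `test()` into a two-state machine: each case is one
// "partial task", and `resume` continues from the last suspension point.
enum TestState { case start, afterA }

func testStateMachine(completion: @escaping (Int) -> ()) {
    var state = TestState.start

    func resume(_ value: Int) {
        switch state {
        case .start:
            state = .afterA
            a(resume)              // "suspend" until a() calls back
        case .afterA:
            b(value, completion)   // final partial task
        }
    }
    resume(0)                      // kick off; the initial value is unused
}

testStateMachine { print($0) }     // 2
```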

2 Likes

It's a normal return, which will eventually end up back in the run loop, but maybe not immediately. For instance:

func test() {
    _ = Task.runDetached {
        await a()
    }
    b()
}

This will immediately call a(), but if it gets suspended, control returns to test, not the run loop. test will then synchronously call b() and return. Eventually control will go back to the run loop, and once there, a() might resume (if it was suspended).

(At least that's what I understand it'll do. Can't get the toolchain to work to test it.)

This looks great! My only (minor) issue is with the ordering of await try. I understand that this was done to mirror the ordering of async throws, as the proposal points out, but the example it shows immediately reveals why the opposite order (try await) would be more natural:

let (data, response) = try await session.dataTask(with: server.redirectURL(for: url)) // error: must be `await try`
let (data, response) = try (await session.dataTask(with: server.redirectURL(for: url))) // okay due to parentheses

In contrast, adding parentheses to await try would require flipping the two: await (try asyncFunction()) is not valid code, unlike try (await asyncFunction()). I think associativity is a more important property here than consistency with the ordering in the declaration (async throws).

(For what it's worth, that ordering in the declaration does feel more natural to me than the alternative—even when a function throws, it throws asynchronously. Thus, asyncFunction() async throws -> Foo can be read as "asyncFunction asynchronously throws or returns a Foo.")

7 Likes

Um … why? Obviously I'm missing something obvious here.

I'd like to note something regarding the async keyword. I think many people are confusing it to mean the task will run in parallel on another thread. It's very natural to think this, since DispatchQueue.async is used to hop from one queue/thread to another. But that's completely wrong.

Our async here (either as a function decorator or in the async let syntax detailed elsewhere) has a rather different meaning: it means the function can be sliced into smaller parts that run one after the other on a run loop, possibly interleaved with other tasks. Sure, you can make this run on a run loop on another thread (like any other function), but that is pretty much orthogonal to async/await.

It doesn't help that async/await aims to replace callback-based returns, which very often involve calls to DispatchQueue.async to get to the right thread. Now we have a language feature for that same thing... except not really.


I know there's a lot of prior art for "async/await" in other languages, but I wonder if we should replace the term async with something more descriptive, to avoid all this confusion. Perhaps suspends/await, since the defining feature is the suspension points?

func test() suspends {
    await a() // suspension point here
}

(Edit: changed "suspending" to "suspends"; works better next to throws.)

10 Likes