I think it's much easier to explain async/await by comparing it to error handling in Swift. In that sense, async is throws and await is try. Throwing functions can only be called from other throwing functions; otherwise the calls have to be wrapped in do/catch to handle errors. Similarly, async functions here need to be handled with beginAsync, which marks an entry point into asynchronous code.
Additionally, the throws function "attribute" and the try statement are useless without a throw statement that actually throws an error. Likewise, with async and await you have to indicate somewhere in the asynchronous code that it can suspend instead of blocking execution completely. This is done with suspendAsync, which is not a keyword here but a magic function that provides you with a "continuation" closure that allows the asynchronous code to resume.
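To make the analogy concrete, here is a rough sketch of how the proposal's strawman beginAsync/suspendAsync functions fit together. None of this compiles today, and legacyLoadData is a made-up callback-based API standing in for any existing completion-handler code:

import Foundation

// Made-up callback-based API, standing in for existing completion-handler code.
func legacyLoadData(_ url: URL, completion: @escaping (Data) -> Void) { /* ... */ }

// suspendAsync marks the suspension point and hands us a continuation that
// resumes the coroutine when the callback fires, roughly the role that
// `throw` plays for error handling.
func loadData(from url: URL) async -> Data {
    return await suspendAsync { continuation in
        legacyLoadData(url) { data in
            continuation(data)
        }
    }
}

// beginAsync is the entry point from synchronous code into asynchronous code,
// roughly the role that do/catch plays around a throwing call.
beginAsync {
    let data = await loadData(from: URL(string: "https://example.com")!)
    print(data.count)
}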
Obviously, you can have asynchronous code that throws errors, which is addressed in the proposal too.
From a certain perspective, both async/await and error handling are just "effects", which is why it makes sense for them to be syntactically similar. It's a whole different question whether there's a need for a proper effect system in Swift, one that would allow expressing all kinds of effects such as recoverable errors, generators, non-determinism, etc., not just asynchrony and simple error handling. See the Eff programming language for an example of how that could work, or Haskell/PureScript for that matter, which express effects with monads in a stricter way, but with a convenient do notation.
I think you could say that it does block the flow of execution of that function, but it does not block the flow of execution of the OS-level thread. This is similar to how blocking an OS thread blocks that thread but doesn't block overall execution on the CPU, because another thread will run.
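As a sketch of that distinction in the proposal's strawman syntax (compute() is made up):

beginAsync {
    let value = await compute()   // this *function* suspends here...
    print(value)                  // ...and resumes later with the result
}
// ...while the thread that called beginAsync returns here immediately and is
// free to do other work for as long as the function above stays suspended.
print("not blocked")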
Interestingly, Kotlin converged on a very similar paradigm, apparently independently.
But I think it's also important in general to avoid tying the language design to any particular library design, since this is still a rapidly-evolving space, and as trends come and go, we want the library ecosystem to be able to evolve without burning too much of a layer of assumptions into the language itself.
Whether you personally like futures or not, the reality is they fit well with the existing concurrency frameworks we have today. GCD, callbacks, and futures APIs already exist and aren't going away. We have to play well with them, and whatever Swift adds for async/await should hopefully allow working within whatever framework people are already using and allow for composability.
I still haven't seen how more complex examples would work with the proposed solution, and as far as I know we still haven't solved the problem of how the code can express things like "which queue should I be on after the await resumes?", which is a really important problem to solve. If we can't handle something as simple as "I used await on the main queue and want to end up back on the main queue" without requiring the user to specify it every time then the feature won't be successful. It'll just make it really easy to write bugs.
And yet, if we solve that problem by baking in an assumption that users of async/await are using GCD then we will have restricted the feature's usefulness unnecessarily.
So what I'm looking for is how do you solve those kinds of problems? If there's a way to do that without interacting with something like a futures API then great, but I just haven't seen any kind of solution so far.
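To make the hazard concrete, here is a hypothetical sketch in the proposal's strawman syntax (loadData, process, and uiElement are made up) of the bug that falls out if await makes no guarantee about where it resumes:

// A button handler, entered on the main queue.
beginAsync {
    let data = await loadData()        // suspends; which queue resumes us?
    let image = await process(data)    // we may be on a background queue by now
    uiElement.image = image            // UI touched off the main queue: a bug,
                                       // unless the user remembers to hop back:
    // DispatchQueue.main.async { uiElement.image = image }
}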
Yeah, one known deficiency is that the old proposal doesn't cover how to control the execution context, such as what queue/thread/runloop/event loop/etc. the coroutine expects to run on. It is definitely a goal that the mechanism not be tied to GCD or any other specific runtime library either. Kotlin's suspend coroutines ended up very similar to what the proposal describes, but they additionally allow a coroutine context to wrap the continuation callback when it's suspended, and that wrapper can do whatever is necessary to schedule the coroutine for execution in its expected context. That's a nice, event-loop-agnostic way of handling context.
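As a very rough Swift rendering of that Kotlin idea (none of these types exist anywhere; this is only meant to show the shape of "the context wraps the continuation"):

import Dispatch

// A context knows how to get a suspended coroutine running again in the right
// place; code that resumes coroutines only ever talks to this abstraction.
protocol CoroutineContext {
    func resume(_ continuation: @escaping () -> Void)
}

// One possible context: always come back to the main queue.
struct MainQueueContext: CoroutineContext {
    func resume(_ continuation: @escaping () -> Void) {
        DispatchQueue.main.async(execute: continuation)
    }
}

// Wrapping a raw continuation in its context keeps the resuming code
// runtime-agnostic: it just calls the wrapper.
func wrap(_ continuation: @escaping () -> Void,
          in context: CoroutineContext) -> () -> Void {
    return { context.resume(continuation) }
}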
The thing that threw me was the claim that this would fix the "scale to millions of threads" use case, the discussion of segmented stacks, etc. This seems completely orthogonal to async functions; that was my only point. Doing something special for captured async function state seems like a separable optimization.
I don't think this is an either/or situation. As you know, I'm a huge fan of adding async functions to Swift, but a good Future API is an important dual for it, just as our throws effect has the Result type as its dual. The key thing to me is to make sure they get co-designed together. It would be a bad thing to standardize a future type before the full async design is ironed out, IMO.
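For what it's worth, the duality can be sketched like this. Result is real; Future is only a stand-in name for whatever reified-async type would get co-designed, so the async half is shown in comments:

enum ParseError: Error { case notANumber }

// throws is the effect...
func parse(_ s: String) throws -> Int {
    guard let n = Int(s) else { throw ParseError.notANumber }
    return n
}

// ...and Result reifies a completed throwing computation into a value.
let reified: Result<Int, Error> = Result { try parse("42") }

// By analogy, async would be the effect, and a Future-like type would reify a
// not-yet-completed async computation into a value:
//
//     func load() async -> Data
//     let pending: Future<Data> = Future { await load() }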
You're correct that the old proposal didn't address this, but it isn't clear that's a bug. An async modifier doesn't need to know anything about the underlying runtime it is running with - and keeping these orthogonal is a huge feature: there are a lot of runtimes out there, not just GCD.
The reason this was complicated before is that I/we had imagined that all existing completion-handler-taking APIs in Cocoa would be auto-imported into async functions, and many of these are known to implicitly queue hop. The solution to this problem is simple: just don't auto import them into async functions. It is simple enough (now that ABI stability exists) to introduce new async-correct entrypoints that are runtime independent. This way the OS works the same way as the rest of the language, and the language design is not bogged down by compatibility issues with GCD.
I agree that the async implementation should support modeling async operations as values. This support should be compatible with suspended effects, not only futures that represent the result of an already running async operation. I'm a huge fan of what Zio is doing for the Scala community and believe Swift would greatly benefit from something similar.
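To illustrate the distinction with a deliberately tiny sketch (IO and all of its details are made up, not a real or proposed type): a future represents a computation that is already running, whereas an effect value is only a description that does nothing until it is run.

// A description of an asynchronous computation; nothing happens until `run` is called.
struct IO<T> {
    let run: ((T) -> Void) -> Void
}

let sayHello = IO<String>(run: { deliver in
    print("running the effect")   // not printed until someone runs the effect
    deliver("hello")
})

// The effect can be run explicitly, and run more than once; a future could only
// hand back the result of a computation that already started exactly once.
sayHello.run({ result in print(result) })
sayHello.run({ result in print(result) })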
Regardless of whether an effect type is included in the standard library or not, I think the support should be available to third party libraries. It would be a shame if there was a single blessed type in the standard library.
What does an "async-correct entrypoint" look like?
I'm curious what it would look like to write safe async code both from the perspective of someone writing app/UI code and from the perspective of someone writing framework code (i.e., for code that should mostly run on the UI thread and code that may prefer not to run on the UI thread). For the UI code author how hard/easy is it to avoid ending up on the wrong thread? And for the framework author how hard is it to ensure that your caller ends up on the correct thread, whether that caller wants to be on the UI thread or not?
Most runtimes do have some sort of notion of context that programmers would normally expect a computation to remain in, though, be that a dispatch queue, kernel thread, runloop, or what have you. It's valuable for an async function to be associated with a context, and we ought to be able to abstract that enough that we don't tie the feature to any particular kind of context. Kotlin's design here is good prior art to look at. Letting coroutines control their execution context also helps code that deals with resuming coroutines be runtime independent; in a model like our original proposal, any code that works with a coroutine continuation would need to be careful to schedule it correctly according to the specific needs of the environment. Putting that scheduling responsibility inside the continuation allows the resuming code to remain agnostic to those specifics.
My concern with that approach is that it is very easy to get wrong. The default behavior is that if you start on one thread/queue you can end up on another. I think if we ship that then it will be extremely error prone.
The equivalent in C# would just be this:
async void ButtonClick() {
    var data = await loadData(...);
    var image = await process(data);
    uiElement.image = image;
}
Note that to get this behavior neither the caller (this UI code) nor the callee has to do anything special. The default is to return to the originating "synchronization context", which may be a particular thread (like the UI thread) or a thread pool. You have to go out of your way to override this default:
async void ButtonClick() {
    var data = await loadData(...).ConfigureAwait(false); // Don't capture the current context
    var image = await process(data);
    // Here we may be on the wrong thread so we would have to return to the right thread
    BeginInvokeOnMainThread(() => {
        uiElement.image = image;
    });
}
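For comparison, a purely hypothetical Swift rendering of the first, default-capturing version might read like this (strawman syntax, made-up loadData/process/uiElement; nothing here is proposed API):

// Called on the main queue, e.g. from a button handler.
func buttonClicked() {
    beginAsync {
        let data = await loadData()      // suspends without blocking the main thread
        let image = await process(data)
        uiElement.image = image          // safe only if, by default, we resume on the
                                         // context we suspended from (the main queue)
    }
}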
Like Joe I believe it's possible to make Swift behave this way without tying us to a particular API, and I think it should be a requirement in order to make this feature usable. One way to do it would be to use pattern matching in the compiler with a future-like type. There may be another way too; I haven't looked at how Kotlin handles this. My hope is that it ends up being very low friction for app developers and framework authors to adopt.
Yeah, if I was going to revise the proposal today, that example wouldn't be valid. Hopping queues would require explicitly starting a new async task in a new context.
Maybe I'm so comfortable with queues that I actually don't see what's wrong with it. Do you mean that people could hop to the wrong queue, or forget to hop at all? And should the syntax be more explicit about the execution context, while at the same time supporting third-party context systems, something along those lines?
The problem is never knowing which queue you're on after using await, which would lead to people hopping to the main queue even if they're already there or not hopping when they need to. It's extremely error prone. Worse, since the code looks serial it's easy to assume that everything you see runs on the same queue.
It just becomes really hard to reason about your code if consecutive lines may run on different synchronization contexts. It's hard enough to reason about multi-threaded code, but if you add to that the complication that "different lines within this same lexical scope may have different unpredictable rules about what's safe to call and what's not safe to call" then all hope is lost.