On the proliferation of try (and, soon, await)

Care to explain how you arrive there? I don't understand where that implication could come from.

In pure functional programming there is no such thing as a broken invariant, because there's no mutation… so I don't understand how that addresses the question at all.

Sorry, typo. That should read "I don't think the importance of maintaining invariants should imply that marking expressions that throw is not useful." Your argument seems to couple the two, suggesting that try annotations distract programmers from thinking about invariants.

Lots of negatives there so I'll restate: IMO marking expressions that throw is useful for reasons unrelated to invariants. I agree that maintaining invariants is of utmost importance. But I don't agree that try annotations make it more difficult to do so or in any way distract from their importance.

3 Likes

If "pure functional" programming were both the statistical and conceptual norm, then I might be prepared to agree that await does not carry its weight.

However, in any style of programming, the order in which things happen seems to be important regardless of mutation. Suspension points, at least in principle, seem to allow ordering to change, which would mean — at the very least — they're not an implementation detail.

1 Like

Explicit try and await simplify type checking of closures. If the type checker is able to determine whether a closure can throw or yield before generating constraints for its body, it doesn't have to model "effect variables", where, like a "type variable", a function type has to be inferred as either throwing or not throwing (or async or not async).
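
A minimal sketch of what the marker buys the checker (the names `run` and `mightFail` are hypothetical): because the closure body contains `try`, the closure can be classified as throwing up front, and the `rethrows` overload is chosen without any effect inference.

```swift
enum Failure: Error { case bad }

func mightFail() throws -> Int { 1 }

// Two hypothetical overloads: one for non-throwing bodies, one for throwing ones.
func run(_ body: () -> Int) -> Int { body() }
func run(_ body: () throws -> Int) rethrows -> Int { try body() }

// The `try` in the body tells the checker immediately that this closure
// throws, so the second overload is selected before constraint generation.
let value = try? run { try mightFail() }
```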

7 Likes

I didn't suggest that marking expressions that throw is never useful. I made the argument that it is not useful when there are no broken invariants.

The current language rules for try annotations do strongly suggest that error propagation points are dangerous or worthy of extra attention even when there are no broken invariants. That, I claim, is false. I also claim that by drawing undue attention to these other points of error propagation, we distract programmers from what is actually important.

IIUC you are making a contrary claim:

You only need to show one example and tell the story of how it prevents a bug to prove that's true, and I'll gladly accept it and crawl back under my rock if it turns out there's really no broken invariant involved. Honestly, I don't need to stir the waters if I'm wrong. So, what is your example?

I have no disagreement with any of that. I'm not even claiming that await doesn't carry its weight. I'm merely asking if it's useful on every single async call, or whether there are broad categories of async calls where we'd lose nothing by omitting it, as there are with try.

Remember how we got here. I asked about limiting await marking to the places where it matters, and @adamkemp wrote:

My response was that it was an overstatement, and gave an example of a category of programs where that statement clearly didn't seem to hold. I'd like to suss out where the boundary between “await matters” and “await doesn't matter” lies, so that we don't end up with a proliferation of useless markings.

2 Likes

Thanks for your post, @adamkemp. It sounds to me like I can learn some valuable things from what you're saying here, but I don't quite understand it yet, so I hope you can clear up a few things.

Sorry, I don't understand how await indicates what you say

  • Can't you always consider doing additional work on the same thread you're currently running?
  • Can't you always queue up asynchronous work?

Also, there are lots of examples in that proposal. Could you point to one and show how those considerations come into play, and describe how await helps?

If the asynchronous calls (the suspension points) were not called out, then it wouldn't be apparent to the person writing or reading the code how that function would behave at runtime from a performance perspective. Once you add in the obligatory await markings, it may become obvious that the function is written in a way that is unnecessarily slow.

I think my experience with await is not great enough to understand why that's the case, especially since the proposal is to require await even at some points where there is in practice no suspension (calling back into the same actor). Would you mind showing an example of some code that's obviously inefficient, but only obviously so because of await markings?

With async/await plus the structured concurrency proposal you can do relatively simple refactoring to deliberately interleave work and improve performance without having to introduce a lot of complexity. But if you take out the await keyword then it obscures what's actually happening and causes people to write bad code (either buggy or poorly performing) because they literally can't see how the code they're reading/writing actually behaves at runtime. That's why, even though the compiler certainly could work without the await keyword, it should still be mandatory at every suspension point. Our primary audience in writing code is other humans (including ourselves), not the compiler.

I'm all about code being for humans. But before I simply accept that being forced to write await prevents bad code, I'd still like to see and understand some examples (and, I'd like to have some intuition that the examples are representative of the majority of async code). After all, that's what they said about try, and my experience contradicts that.

I never claimed it is useful because it prevents bugs; I think it is very useful as a reader of code. This is independent of whether or not it prevents bugs.

That said, I can imagine a symbol that doesn’t throw being modified to throw in the future while being used by existing code in a throwing context. This would change behavior at the call sites without a compiler error. I don’t think it’s too hard to imagine this silent change in behavior introducing bugs.

2 Likes

Sure. Let's look at some of the specific examples. The structured concurrency spec shows this code:

func makeDinner() async throws -> Meal {
  let veggies = await try chopVegetables()
  let meat = await marinateMeat()
  let oven = await try preheatOven(temperature: 350)

  let dish = Dish(ingredients: [veggies, meat])
  return await try oven.cook(dish, duration: .hours(3))
}

Let's for the sake of argument assume that we could omit the await calls and get the exact same behaviors. That would look like this:

func makeDinner() async throws -> Meal {
  let veggies = try chopVegetables()
  let meat = marinateMeat()
  let oven = try preheatOven(temperature: 350)

  let dish = Dish(ingredients: [veggies, meat])
  return try oven.cook(dish, duration: .hours(3))
}

Now, take a look at the next example in which they modified the function:

func makeDinner() async throws -> Meal {
  async let veggies = try chopVegetables()
  async let meat = marinateMeat()
  async let oven = try preheatOven(temperature: 350)

  let dish = Dish(ingredients: await [veggies, meat])
  return await try oven.cook(dish, duration: .hours(3))
}

Here we are doing the same work, but more efficiently, by doing some of the steps concurrently instead of sequentially. Do we really need to sit there and do nothing while the oven is preheating? We could probably do some useful work during that time. The existence of the await in the code made it clearer that a refactoring like that would be beneficial.

(Note that in making this improvement we didn't even necessarily increase the number of threads used by the system. In most cases asynchronous functions are asynchronous because they're waiting on something external that doesn't consume a process thread. Think of networking calls or asynchronous XPC (which is likely in turn doing something else asynchronously). We're not doing more concurrently in-process. We're just actually doing something when our process may otherwise be doing nothing at all. For a really great explanation of this, read this blog post "There Is No Thread", which was written to describe async/await for C# but should apply equally well to this feature in the end.)

It actually matters whether a call is synchronous or asynchronous. You may think it would always be obvious based on the names of methods or something, but in reality it's not always obvious from the name alone which things are inherently asynchronous and which aren't (Does it require reading from disk, or talking to the network? Is there a quick synchronous cached code path and a slow asynchronous cache miss code path under the hood?). When you actually take out the completion handler and make the asynchronous calls look synchronous then you're hiding an important detail about how that code executes.

IMO removing the async would have no benefit. What you think of as noise I would argue is clearly signal.

5 Likes

Okay, well can you explain how it's useful to you as a reader? Walk me through a simple example. I posted a couple of examples with lots of trys in them.

Now, granted, these are not examples of the kind of code where I think try matters, and I know such code exists. But I tried to find some on GitHub and failed, which is sorta my point: that code is rare. Anyway, if your argument is that all trys are good for readers, you should be able to describe how the ones I found help you.

That said, I can imagine a symbol that doesn’t throw being modified to throw in the future while being used by existing code in a throwing context. This would change behavior at the call sites without a compiler error. I don’t think it’s too hard to imagine this silent change in behavior introducing bugs.

Correct.

The bugs happen because an invariant was temporarily broken and that breakage becomes permanent because of the new error propagation point. These are the cases where try really pays off. For example:

/// Like Array<(Bool, T)> but avoids internal fragmentation by storing the bools
/// separately.
struct ArrayOfBoolAnd<T> {
  typealias Element = (Bool, T)

  // invariant: ts.count == bools.count
  private var ts: [T]
  private var bools: [Bool]

  mutating func appendEtc(_ x: Element) throws {
    bools.append(x.0) // invariant broken
    thing1()
    ts.append(x.1)    // invariant restored
    try thing2()
  }
}

If thing1() were to become a throwing function, we'd have a bug unless appendEtc were updated (and not just by adding try to silence the compiler).
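
One way such an update might look (a sketch of a hypothetical variant, with `thing1` passed in as a stand-in for the newly-throwing dependency) is to hoist the fallible call out of the window where the invariant is broken:

```swift
enum Fault: Error { case oops }

struct ArrayOfBoolAnd<T> {
  // invariant: ts.count == bools.count
  private var ts: [T] = []
  private var bools: [Bool] = []

  var count: Int { ts.count }

  // `thing1` stands in for the dependency that became throwing.
  mutating func appendEtc(_ x: (Bool, T), thing1: () throws -> Void) rethrows {
    try thing1()       // fallible work first, outside the broken-invariant window
    bools.append(x.0)  // invariant broken
    ts.append(x.1)     // invariant restored, with no throw in between
  }
}
```

If `thing1` throws, the method exits before either append, so the invariant can never leak out broken.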

This is an example of code where using a try { ... } block could easily be harmful if thing1 were updated to throw:

  mutating func appendEtc(_ x: Element) throws
  try {
    bools.append(x.0) // invariant broken.
    thing1()          // if this throws, invariant stays broken.
    ts.append(x.1)    // invariant restored.
    thing2()
  }

But then, I would never use a try block where invariants were going to be broken, and reasonable guidelines for where to avoid it could easily be encoded in a linter.

1 Like

I don’t think an example is necessary here. I find it useful to be able to see control flow represented in code when I’m reading it. IMO, this increases clarity. This is obviously subjective and you’re welcome to disagree.

1 Like

I came to this thread to propose exactly this, except spelled as try { … } instead of try do { … }.

(Non-fluent phrases like if case are IMO one of the few front-and-center surface usability things in Swift that really are an unfortunate historical artifact whose ship has already sailed. Let’s not add another.)

3 Likes

Thanks!

Are you saying that await signals code that would otherwise (in a non-async world) block, and thus provides an opportunity for the current thread to do other useful work?

:thinking:

Hmm, but—correct me if I'm wrong—I got the impression that async let evaluation is allowed to happen on an arbitrary thread. So even if these functions were all synchronous (in which case there would have been no awaits to clue us in), wouldn't this refactoring still make sense?

(Note that in making this improvement we didn't even necessarily increase the number of threads used by the system… read this blog post "There Is No Thread", which was written to describe async/await for C# but should apply equally well to this feature in the end.)

I do understand the implementation and performance implications of coroutines and cooperative multitasking. What I don't have a good feeling for, yet, is what the programming model is like. The role that await plays in that model is what we're talking about.

(out of order)

You may think it would always be obvious based on the names of methods or something…

Don't worry, I really don't think that!

It actually matters whether a call is synchronous or asynchronous.

It's obvious to me that in highly performance-sensitive (a.k.a. real-time) code it matters, because an asynchronous call is also an invitation for the current task to be suspended in favor of other arbitrary work, no matter how important the current task is. But that is not most code.

It's obvious to me that in most UI code it matters, because it indicates an opportunity for incoming UI events to be processed, which can be a problem if you're not aware of it, because the side-effects of that processing can interfere with computation currently underway. That is certainly a lot of code, and maybe that should be enough for me.

What's not clear is whether there are other kinds of code where it truly doesn't matter where the suspension points are. It's possible that there's simply no analogy to try here, but that would surprise me a lot since the underlying issues are so similar.

Anyway, I clearly need to think on this a bit more. I thank you again for your patient explanations.

1 Like

Let's do as Paul suggests, and allow the try to migrate to the first brace of the function.

Right exactly so - thanks for catching my typo. I fixed it in the post above!

Right, I understand that try has no semantic meaning to the compiler and doesn't generate code. The purpose of try is to inform the writer of code that there is a nonlocal transfer of control going through the function (therefore, if you call malloc, you should think about deferring that free) and to provide a similar signal to people who come back around to maintain the code.
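
For example (a sketch; `decode` is a hypothetical stand-in for work that can fail partway through), `defer` is what keeps a manual allocation safe against that nonlocal exit:

```swift
enum ParseError: Error { case empty }

// Hypothetical stand-in for work that can fail partway through.
func decode(_ buffer: UnsafeMutablePointer<UInt8>, _ count: Int) throws -> Int {
  guard count > 0 else { throw ParseError.empty }
  return count
}

func parse(_ count: Int) throws -> Int {
  let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: max(count, 1))
  defer { buffer.deallocate() }     // runs on the throwing path too
  return try decode(buffer, count)  // the `try` flags the nonlocal exit
}
```

Without the `defer`, the throwing path out of `decode` would leak the buffer; the `try` is what cues the reader (and maintainer) to check for exactly that.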

This is one of the downsides of C++ exception safe logic - it often leads to the advice of writing "exception safe" code pervasively with RAII and other techniques, which are often massive overkill and sometimes obfuscate simple logic.

There are many approaches to using exceptions in C++, but the most successful ones I've seen are "-fno-exceptions" or "write code that assumes that anything can throw". These two extremes are what I meant by binary. In practice most code doesn't throw even when using exceptions, and C++ isn't great at conveying that or providing that as a programming model.

Yes, that is roughly what I mean. Now I think your argument (which is reasonable) is that the backpressure is so odious that it makes error handling unpalatable to use in some cases - I think this is a fair concern, which is why I think we can improve things here. I don't think it makes the approach incorrect though, we just never got around to improving sugar to make this sweeter.

I understand that this is something you don't philosophically agree with, but we do use this in multiple places in the language. For example, we require & at the caller side of inout parameters to make the mutation semantics more clear. We require the if let x = x pattern instead of making control flow implicitly rebind things, etc.
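
The `&` marking works the same way (a trivial sketch): the caller can see, at the call site alone, that the argument may be mutated.

```swift
func double(_ n: inout Int) {
  n *= 2  // the callee mutates its argument
}

var value = 3
double(&value)  // `&` makes the potential mutation visible right here
// value is now 6
```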

Fair enough, I didn't mean to imply that you mean anything malicious; it's just that I understand that you're opposed to the concept of try marking and don't find it valuable. I was trying to provide the counterpoint that some people (me at least) do find it useful in practice. To be clear, I agree that there are certain patterns (e.g. the codable example) that don't work out well, but we can do things to improve the rough edges.

I would expect these to stack; in the existing case, I would expect await try do { ... } to work.

Right, but this comes back to the philosophy of marking and whether it is a good idea to put syntactic burden into this. I agree that indentation isn't worth it for the "three try" case, but it would be worth it for the much larger case.

The major concern I have with that is that it would make try {...} catch {...} "work", and that would be confusing because it would look too much like C++ and Java, which is not something we want people accidentally reaching for.

Something like this doesn't have the same problem I mention above, but it has other challenges; I'm curious to know what others think about it.

-Chris

4 Likes

I’m one of those who think Swift got try nearly bang-on. Cases like encode(to:) and simple rethrows functions like map have been the exceptions, not the rule. I actually think it was a mistake to allow try to apply to arbitrary sub-expressions and not just an immediate call.

There is one place where try feels redundant to me: single-expression closures being passed to rethrows functions.

let result = try input.map { try $0.compute() }

There’s no recovery or control flow there, and you’ve already marked a try on the top-level expression. But it’s such a specific rule that I don’t know if it’s worth adding at this point.

Here’s my example:

extension Array {
  mutating func mapInPlace(_ transform: (Element) throws -> Element) rethrows {
    for i in self.indices {
      self[i] = try transform(self[i])
    }
  }
}

No invariants are broken, but that’s just the baseline functionality. The thing is, a throwing function should have the option to make guarantees about postconditions in the failure case beyond just “no invariants have been broken”. In this case, the author of the function might want to restore the original contents of the Array on failure. (That wouldn’t ever be the implementation of a general-purpose mapInPlace, because it’s expensive, but there could be specific cases where it’s useful.) Having the try reminds the function author to consider what happens in the failure case, both in terms of invariant restoration / cleanup and in terms of what postconditions to provide beyond “it failed”.
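
That stronger postcondition might be sketched like this (a hypothetical variant, paying for the guarantee with a copy of the array):

```swift
extension Array {
  /// Like mapInPlace, but guarantees the original contents are restored
  /// if the transform throws (at the cost of keeping a copy).
  mutating func mapInPlaceOrRestore(_ transform: (Element) throws -> Element) rethrows {
    let original = self
    do {
      for i in indices {
        self[i] = try transform(self[i])
      }
    } catch {
      self = original  // postcondition: on failure, contents are unchanged
      throw error
    }
  }
}
```

Note that Swift permits the `throw error` inside the catch of a rethrows function here because the do block only throws via the `transform` parameter.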

Now, this isn’t changing the default behavior, so yeah, you could say this is only for cases where it doesn’t matter. But those feel like the exception more than the rule, and even then I think it’s useful to see the control flow even if it technically doesn’t matter.


I admit I don’t have a good enough sense of await to know if it has the same kind of concrete issues. It is useful for knowing when to check for cancellation, I guess?
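
For what it's worth, that intuition can be made concrete (a sketch with a hypothetical `process` function): each await is a natural place to also poll for cancellation.

```swift
func process(_ chunk: Int) async -> Int { chunk * 10 }

func crunch() async throws -> Int {
  var total = 0
  for chunk in 1...3 {
    try Task.checkCancellation()   // bail out promptly if cancelled...
    total += await process(chunk)  // ...right before each suspension point
  }
  return total
}
```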

10 Likes

I think it would be fine, but tbh if we’re going to pull control flow blocks out to the top level I think switch is more important. This would reduce a level of nesting in methods on enums which are much more prevalent in most code I’ve worked with than lots of try.

3 Likes

Thanks for raising this issue. Using Swift for the first time last year, I too have felt its error handling to be overly verbose, coming from languages that silently threw exceptions all over the place. So I just took a look at the rationale document you linked for the first time; your problem is with what's called marked propagation, which, as you say, isn't justified much there.

As I understand it, the main reasons are:

  1. As a clear marker of where the throwing sections are when dealing with invariants, so that you can easily see what might break them.
    If you have every throwing statement memorized, this won't help, but otherwise I think you'd agree the markers are helpful here, particularly since invariants must mostly be tracked in the programmer's head; the Swift language and compiler don't enforce most invariants. As a result, it appears the Swift core team chose to spam try everywhere, in case there are invariants it helps you with.

  2. As you point out, there's a lot of code where such invariants aren't involved, but even there it's a marker of control flow and what might throw, which are useful to know when reading code without having to check every function's declaration.

As such, try is a memory aid, like the example Chris gave of "require & at the caller side of inout parameters to make the mutation semantics more clear," an annotation that helps when reading code. Swift users probably have widely varying experiences of its usefulness, depending on what kind of code they mostly deal with or how careful they are or need to be.

On balance, I now think it's worth doing, though that try block idea may help many who don't want it on every statement. I tune it out when just reading code to understand what it primarily does, though it can be helpful when you need to think about errors too.

I'm not familiar enough with await to make the comparison to that situation.

Thanks for this explanation. I just wanted to emphasize this point because I worry it will get lost with all the other discussion going on (and because I had forgotten about it too):

There is a key difference between await and try, which is that await (as proposed) is actually not mandatory when calling an async function, and omitting it has a very different meaning. Calling an async function and awaiting its result are two different operations. They are usually combined, but they don't have to be, and therefore await needs to be explicit.
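
Under the structured concurrency pitch, that separation is spelled with async let (a sketch; `fetchValue` is hypothetical):

```swift
func fetchValue() async -> Int { 42 }

func demo() async -> Int {
  async let v = fetchValue()  // the call starts here, with no await
  // ... unrelated work can proceed while fetchValue() is in flight ...
  return await v              // awaiting the result is a separate, explicit step
}
```

If await were implicit on every call, the line that starts the work and the line that waits for it would be indistinguishable.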

3 Likes

If you really want to avoid the proliferation of await, and given that omitting it is already meaningful, then we'd need something else that means "don't await" to accomplish the "call and await later" case. defer is already taken... suspend? postpone?

(I do still like await the way it is, but I also wanted to fully explore this line of thinking)

1 Like

async