ABI of throwing

Currently, Swift adds a hidden byref error parameter to propagate thrown errors:

public func foo() throws {
  throw FooError.error
}

define void @_TF4test3fooFzT_T_(%swift.refcounted* nocapture readnone, %swift.error** nocapture) #0 {
entry:
  %2 = tail call { %swift.error*, %swift.opaque* } @swift_allocError(/* snip */)
  %3 = extractvalue { %swift.error*, %swift.opaque* } %2, 0
  store %swift.error* %3, %swift.error** %1, align 8
  ret void
}

This means that call sites for throwing functions must always check if an exception occurred. This makes it essentially equivalent to returning an error code in addition to the function's actual return type.

On the other hand, there are exception handling mechanisms where the execution time cost in the success case is zero, and the error case is expensive. When you throw, the runtime walks through the return addresses on the stack to find out if there's an associated catch block that can handle the current exception. Apple uses this mechanism <https://github.com/apple/swift/blob/master/docs/ErrorHandlingRationale.rst#id29> (with the Itanium C++ ABI) for C++ and Objective-C exceptions, at least on x86_64.

Other compiler engineers, like Microsoft's Joe Duffy <http://joeduffyblog.com/2016/02/07/the-error-model/>, have determined that there actually is a non-trivial cost associated to branching for error codes. In exchange for faster error cases, you get slower success cases. This is mildly unfortunate for throwing functions that overwhelmingly succeed.

As Fernando Rodríguez reports in another thread, you have many options to signal errors right now (I took the liberty to add mechanisms that he didn't cover):

  • trapping
  • returning nil
  • returning an enum that contains a success case and a bunch of error cases (which is really just a generalization of "returning nil")
  • throwing

With the current implementation, it seems to me that the main difference between throwing and returning an enum is that catch works even when you don't know what you're catching (but I really hope that we can get typed throws for Swift 4, because unless you actually don't know what you're catching, this feels like an anti-feature). However, if throwing and returning an enum had different-enough performance characteristics, the guidance could become:

  • return an enum value if you expect that the function will fail often or if recovery is expected to be cheap;
  • throw if you expect that the function will rarely fail or if recovery is expected to be expensive for at least one failure reason (for example, if you'd have to re-establish a connection after some network error, or if you'd have to start over some UI process because the user picked a file that was deleted before it could be opened).
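
To make the contrast concrete, here's a minimal sketch of the two styles (ParseError, ParseOutcome, and the parsing functions are made-up names, not anything from the standard library):

enum ParseError: Error {
    case empty
    case notANumber(String)
}

// Enum style: the failure is part of the return value and the caller switches on it.
enum ParseOutcome {
    case success(Int)
    case failure(ParseError)
}

func parseValue(_ text: String) -> ParseOutcome {
    if text.isEmpty { return .failure(.empty) }
    guard let value = Int(text) else { return .failure(.notANumber(text)) }
    return .success(value)
}

// Throwing style: the failure propagates implicitly until something catches it.
func parseValueOrThrow(_ text: String) throws -> Int {
    if text.isEmpty { throw ParseError.empty }
    guard let value = Int(text) else { throw ParseError.notANumber(text) }
    return value
}

switch parseValue("42") {
case .success(let value): print(value)
case .failure(let error): print("failed: \(error)")
}

do {
    print(try parseValueOrThrow("42"))
} catch {
    print("failed: \(error)")
}

The shapes are obviously close; the interesting difference is how far the failure can travel before someone has to deal with it, and (the point of this thread) what each one costs at the call site.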

Additionally, using the native ABI to throw means that you can throw across language boundaries, which might be useful in the possible but distant future in which Swift interops with C++. Even though catching from the other language will probably be tedious, that would already be useful in language sandwiches to unwind correctly (like in Swift that calls C++ that calls Swift, where the topmost Swift code throws).

I don't really know what to expect in terms of discussion, especially since it may boil down to "we're experts in this field and you're just peasants" or "the cost of changing this isn't worth the benefit". Still, I'd like some more insight into why Swift exceptions don't use the same mechanism as C++ exceptions and Objective-C exceptions. The error handling rationale document is very terse <https://github.com/apple/swift/blob/master/docs/ErrorHandlingRationale.rst#id62> on the implementation design, especially given the length of the rest of the document:

Error propagation for the kinds of explicit, typed errors that I've been focusing on should be handled by implicit manual propagation. It would be good to bias the implementation somewhat towards the non-error path, perhaps by moving error paths to the ends of functions and so on, and perhaps even by processing cleanups with an interpretive approach instead of directly inlining that code, but we should not bias so heavily as to seriously compromise performance. In other words, we should not use table-based unwinding.

I find the rationale somewhat lacking. I can't pretend that I've measured the impact or frequency of returning error objects in Objective-C or Swift, and given my access to source code, I probably couldn't do a comprehensive study. However, as linked above, someone did for Microsoft platforms (for Microsoft-platform-style errors) and found that there is an impact. The way it's phrased here, it feels like this was chosen as a rule of thumb.

Throwing and unwind tables are all over the place in a lot of languages that have exceptions (C++, Java, C#). They throw for a lot of the same reasons that Objective-C frameworks return errors, and people usually seem content with the performance. Since Swift is co-opting the exception terminology, I think that developers reasonably expect that exceptions will have about the same performance cost as in these other languages.

For binary size concerns, since Swift functions have to annotate whether they throw or not, unless I'm mistaken, there only need to be exception handler lookup tables for functions that call functions that throw. Java and C# compilers can't really decide that because it's assumed that any call could throw. (C++ has `noexcept` and could do this for the subset of functions that only call `noexcept` functions, but the design requires you to be conscious of what can't throw instead of what can.)

Finally, that error handling rationale doesn't really give any strong reason to use one of the two more verbose error handling solutions (throwing vs returning a complex enum value) over the other.

Félix

This means that call sites for throwing functions must always check if an exception occurred. This makes it essentially equivalent to returning an error code in addition to the function's actual return type.

Right.

On the other hand, there are exception handling mechanisms where the execution time cost in the success case is zero, and the error case is expensive. When you throw, the runtime walks through the return addresses on the stack to find out if there's an associated catch block that can handle the current exception. Apple uses this mechanism <https://github.com/apple/swift/blob/master/docs/ErrorHandlingRationale.rst#id29> (with the Itanium C++ ABI) for C++ and Objective-C exceptions, at least on x86_64.

Other compiler engineers, like Microsoft's Joe Duffy <http://joeduffyblog.com/2016/02/07/the-error-model/>, have determined that there actually is a non-trivial cost associated to branching for error codes. In exchange for faster error cases, you get slower success cases. This is mildly unfortunate for throwing functions that overwhelmingly succeed.

Well sure. However, if you're comparing against Objective-C and C++, you have to account for an important structural difference. The compiler for those languages has to assume that *any function* (to a first order approximation) can throw. In Swift, the language strongly disincentivizes developers from marking functions “throws” when they cannot actually throw (by requiring calls to be marked with ‘try’).
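
A small sketch of what that looks like in practice (loadConfig, defaultConfig, and ConfigError are invented names): only the call to the 'throws' function needs 'try' and an enclosing 'do'/'catch' (or a throwing context), so both the reader and the compiler know exactly which call sites can fail.

enum ConfigError: Error { case missing }

func loadConfig(at path: String) throws -> [String: String] {
    guard !path.isEmpty else { throw ConfigError.missing }
    return ["verbose": "true"]   // pretend this came from the file at 'path'
}

func defaultConfig() -> [String: String] {
    return [:]                   // cannot throw, so callers need no handling code at all
}

func start() {
    let fallback = defaultConfig()          // no 'try', no catch
    do {
        let config = try loadConfig(at: "settings.json")   // the only failable call is marked
        print(config)
    } catch {
        print("falling back to \(fallback)")
    }
}

start()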

“Zero cost” EH is also *extremely* expensive in the case where an error is actually thrown in normal use cases. This makes it completely inappropriate for use in APIs where errors are expected in edge cases (e.g. file not found errors).

I don't really know what to expect in terms of discussion, especially since it may boil down to "we're experts in this field and you're just peasants"

I’m not sure why you think the Swift team would say something that derogatory. I hope there is no specific action that has led to this belief. If there is, then please let me know.

or "the cost of changing this isn't worth the benefit". Still, I'd like some more insight into why Swift exceptions don't use the same mechanism as C++ exceptions and Objective-C exceptions. The error handling rationale document is very terse <https://github.com/apple/swift/blob/master/docs/ErrorHandlingRationale.rst#id62&gt; on the implementation design, especially given the length of the rest of the document.

This is simply because it is of interest to fewer people, and no document can anticipate the interest of all readers. Ask specific questions and we’ll provide specific answers.

However, as linked above, someone did for Microsoft platforms (for Microsoft-platform-style errors) and found that there is an impact.

C++ and Swift are completely different languages in this respect, so the analysis doesn’t translate over.

Throwing and unwind tables are all over the place in a lot of languages that have exceptions (C++, Java, C#).

Yes, and many C++ projects build with -fno-exceptions because of the huge code and metadata bloat associated with them. This is one of many mistakes in C++/Java/etc that we do not want to repeat with Swift. Additionally, unlike Java, Swift doesn’t generally have the benefit of a JIT compiler that can lazily create these structures on demand.

-Chris

···

On Aug 6, 2016, at 7:25 PM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

Currently, Swift adds a hidden byref error parameter to propagate thrown errors:

public func foo() throws {
  throw FooError.error
}

define void @_TF4test3fooFzT_T_(%swift.refcounted* nocapture readnone, %swift.error** nocapture) #0 {
entry:
  %2 = tail call { %swift.error*, %swift.opaque* } @swift_allocError(/* snip */)
  %3 = extractvalue { %swift.error*, %swift.opaque* } %2, 0
  store %swift.error* %3, %swift.error** %1, align 8
  ret void
}

This means that call sites for throwing functions must always check if an exception occurred. This makes it essentially equivalent to returning an error code in addition to the function's actual return type.

Note that we don't currently implement the error handling ABI as we eventually envision it. The plan is for LLVM to eventually lower that %swift.error** parameter to a normally callee-preserved register, which is set to zero by the caller before the call. That way, nonthrowing and 'rethrows' functions can cheaply be used where throwing functions are expected, since a nonthrowing callee will just preserve the zero the caller put in the register. And since ARM64 has a handy 'branch if nonzero' instruction, this means that a throwing call would only cost two instructions on the success path:

  movz wError, #0
  bl _function_that_may_throw
  cbnz wError, catch
  ; happy path continues

Non-taken branches are practically free with modern predictors, and (as Chris noted in his reply) there's no need for the compiler to emit massive unwind metadata or the runtime to interpret that metadata, so the impact on the success path is small and the error path is still only a branch away. As Chris also noted, we only want people using 'throw' in places where errors are expected as part of normal operation, such as file IO or network failures, so we don't *want* to overly pessimize failure branches.
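
As a side note, here's a rough Swift-level sketch of the convertibility mentioned above (transformFirst and Oops are made-up names): a non-throwing closure can be passed where a throwing one is expected, and a 'rethrows' function only throws if its argument actually does.

enum Oops: Error { case negative }

func transformFirst<T>(_ values: [Int], _ transform: (Int) throws -> T) rethrows -> T? {
    guard let first = values.first else { return nil }
    return try transform(first)   // only a real throw site when 'transform' can throw
}

let doubled = transformFirst([1, 2, 3]) { $0 * 2 }   // non-throwing closure: no 'try' needed
print(doubled as Any)

do {
    let checked = try transformFirst([-1]) { (n: Int) -> Int in
        if n < 0 { throw Oops.negative }
        return n
    }
    print(checked as Any)
} catch {
    print("caught: \(error)")
}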

-Joe

···

On Aug 6, 2016, at 7:25 PM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

On the other hand, there are exception handling mechanisms where the execution time cost in the success case is zero, and the error case is expensive. When you throw, the runtime walks through the return addresses on the stack to find out if there's an associated catch block that can handle the current exception. Apple uses this mechanism (with the Itanium C++ ABI) for C++ and Objective-C exceptions, at least on x86_64.

Other compiler engineers, like Microsoft's Joe Duffy, have determined that there actually is a non-trivial cost associated to branching for error codes. In exchange for faster error cases, you get slower success cases. This is mildly unfortunate for throwing functions that overwhelmingly succeed.

As Fernando Rodríguez reports in another thread, you have many options to signal errors right now (I took the liberty to add mechanisms that he didn't cover):

  • trapping
  • returning nil
  • returning an enum that contains a success case and a bunch of error cases (which is really just a generalization of "returning nil")
  • throwing

With the current implementation, it seems to me that the main difference between throwing and returning an enum is that catch works even when you don't know what you're catching (but I really hope that we can get typed throws for Swift 4, because unless you actually don't know what you're catching, this feels like an anti-feature). However, if throwing and returning an enum had different-enough performance characteristics, the guidance could become:

  • return an enum value if you expect that the function will fail often or if recovery is expected to be cheap;
  • throw if you expect that the function will rarely fail or if recovery is expected to be expensive for at least one failure reason (for example, if you'd have to re-establish a connection after some network error, or if you'd have to start over some UI process because the user picked a file that was deleted before it could be opened).

Additionally, using the native ABI to throw means that you can throw across language boundaries, which might be useful in the possible but distant future in which Swift interops with C++. Even though catching from the other language will probably be tedious, that would already be useful in language sandwiches to unwind correctly (like in Swift that calls C++ that calls Swift, where the topmost Swift code throws).

I don't really know what to expect in terms of discussion, especially since it may boil down to "we're experts in this field and you're just peasants" or "the cost of changing this isn't worth the benefit". Still, I'd like some more insight into why Swift exceptions don't use the same mechanism as C++ exceptions and Objective-C exceptions. The error handling rationale document is very terse on the implementation design, especially given the length of the rest of the document:

Error propagation for the kinds of explicit, typed errors that I've been focusing on should be handled by implicit manual propagation. It would be good to bias the implementation somewhat towards the non-error path, perhaps by moving error paths to the ends of functions and so on, and perhaps even by processing cleanups with an interpretive approach instead of directly inlining that code, but we should not bias so heavily as to seriously compromise performance. In other words, we should not use table-based unwinding.

I find the rationale somewhat lacking. I can't pretend that I've measured the impact or frequency of returning error objects in Objective-C or Swift, and given my access to source code, I probably couldn't do a comprehensive study. However, as linked above, someone did for Microsoft platforms (for Microsoft-platform-style errors) and found that there is an impact. The way it's phrased here, it feels like this was chosen as a rule of thumb.

Throwing and unwind tables are all over the place in a lot of languages that have exceptions (C++, Java, C#). They throw for a lot of the same reasons that Objective-C frameworks return errors, and people usually seem content with the performance. Since Swift is co-opting the exception terminology, I think that developers reasonably expect that exceptions will have about the same performance cost as in these other languages.

For binary size concerns, since Swift functions have to annotate whether they throw or not, unless I'm mistaken, there only need to be exception handler lookup tables for functions that call functions that throw. Java and C# compilers can't really decide that because it's assumed that any call could throw. (C++ has `noexcept` and could do this for the subset of functions that only call `noexcept` functions, but the design requires you to be conscious of what can't throw instead of what can.)

Finally, that error handling rationale doesn't really give any strong reason to use one of the two more verbose error handling solutions (throwing vs returning a complex enum value) over the other.

I believe the language in question was a native-compiled C# variant, not C++.

However, I suspect the numbers from Midori's experiment may not hold up in Swift. Midori used a generational mark-and-sweep garbage collector, so it didn't need to write implicit `finally` blocks to release objects owned by stack frames. Swift would. That could easily eat up the promised 7% code size savings, and the reduced ability to jump past frames could similarly damage the speed improvements.

I'm not saying I have the numbers to prove that it does; I don't. But given our different constraints, there are good reasons to doubt we'd see the same results.
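
To illustrate the cleanup point with a rough sketch (Resource, ProcessingError, and process are invented): the release of 'resource' and the 'defer' body below have to run on the error path as well as the success path, so that code exists no matter which propagation mechanism Swift picks; the unwinding strategy only changes how control reaches it.

final class Resource {
    let name: String
    init(name: String) { self.name = name; print("open \(name)") }
    deinit { print("close \(name)") }                  // ARC release on every exit path
}

enum ProcessingError: Error { case badInput }

func process(_ input: String) throws -> Int {
    let resource = Resource(name: "scratch")           // owned by this frame
    defer { print("flush \(resource.name)") }          // must also run when we throw
    guard let value = Int(input) else { throw ProcessingError.badInput }
    return value
}

do { _ = try process("not a number") } catch { print("failed: \(error)") }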

···

On Aug 7, 2016, at 9:36 PM, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

However, as linked above, someone did for Microsoft platforms (for Microsoft-platform-style errors) and found that there is an impact.

C++ and Swift are completely different languages in this respect, so the analysis doesn’t translate over.

--
Brent Royal-Gordon
Architechies

“Zero cost” EH is also *extremely* expensive in the case where an error is actually thrown in normal use cases. This makes it completely inappropriate for use in APIs where errors are expected in edge cases (e.g. file not found errors).

Anecdote: I work with a web service that gets several million hits a day. Management loves to use the percentage of succeeding web requests as a measure of overall health. The problem with that metric is that when a web request fails, clients fall into an unhealthy state and stop issuing requests for a while. Therefore, one failing request prevents maybe twenty more that would all have failed if the client hadn't bailed out, but these don't show up in the statistics. This makes us look much better than we actually are.

If I had any amount of experience with DTrace, I'd write a script that logs syscall errors to try and see how the programs that I use react to failures. I'm almost certain that when one thing stops working, most programs back out of a much bigger process and don't retry right away. When a program fails to open a file, it's also failing to read/write to it, or whatever else people normally do after they open files. These things are also expensive, and they're rarely the type of things that you need to (or even just can) retry in a tight loop. My perception is that the immediate cost of failing, even with expensive throwing, is generally dwarfed by the immediate cost of succeeding, so we're not necessarily losing out on much.

And if that isn't the case, there are alternatives to throwing that people are already embracing, to the point where error handling practices seem fractured.

I don't really know what to expect in terms of discussion, especially since it may boil down to "we're experts in this field and you're just peasants"

I’m not sure why you think the Swift team would say something that derogatory. I hope there is no specific action that has led to this belief. If there is, then please let me know.

Of course not. All of you have been very nice and patient with us peasants, at least as far as "us" includes me. :) This was meant as a light-hearted reflection on discussing intimate parts of the language, where my best perspective is probably well-understood desktop/server development, whereas the core team has to see that but also needs a high focus on other things that don't even cross my mind (or at least, that's the heroic picture I have of you guys).

For instance, my "expensive" stops at "takes a while". Your "expensive" might mean "takes a while and drains the very finite energy reserves that we have on this tiny device" or something still more expansive. These differences are not always immediately obvious.

However, as linked above, someone did for Microsoft platforms (for Microsoft-platform-style errors) and found that there is an impact.

C++ and Swift are completely different languages in this respect, so the analysis doesn’t translate over.

The analysis was (probably?) done over C++ and HRESULTs but with the intention of applying it to another language (Midori), and it most likely validated the approach of other languages (essentially everything .NET-based). Several findings of the Midori team are being exported to Microsoft's new APIs, notably the async everywhere and exceptions everywhere paradigms, and these APIs are callable from both so-called managed programs (GCed) and unmanaged programs (ref-counted).

Swift operations don't tend to throw very much, which is a net positive, but it seems to me that comparing the impact of Swift throws with another language's throws is relatively fair. C# isn't shy of FileNotFoundExceptions, for instance.

Yes, and many C++ projects build with -fno-exceptions because of the huge code and metadata bloat associated with them. This is one of many mistakes in C++/Java/etc that we do not want to repeat with Swift. Additionally, unlike Java, Swift doesn’t generally have the benefit of a JIT compiler that can lazily create these structures on demand.

Yes. I believe that you are familiar with a couple of such C++ projects. :)

As we've seemingly agreed, Swift only needs metadata and code for functions that call functions that throw. Also, I understand that we don't need EH tables with the current approach, but my understanding is that the cleanup code needs to exist regardless of how control is transferred to it. (I have no notion of how big EH table entries have to be, so I won't attempt any comparison with the size of the equivalent dispatch code. I'm pretty sure that it's unfavorable.)

All in all, I guess that this is an area of Swift that goes against trends that are overall satisfying in my domain. Of course, I'm writing all of this because I like Swift and I want an ABI that will make it the best choice (or close) for the things that I want to do with it. I'm a little concerned that what's best for a watch isn't necessarily what's best for a server and that what's best for a compiler back-end isn't necessarily what's best for a UI program, but Swift tries to position itself for all of these.

Félix

However, as linked above, someone did for Microsoft platforms (for Microsoft-platform-style errors) and found that there is an impact.

C++ and Swift are completely different languages in this respect, so the analysis doesn’t translate over.

I believe the language in question was a native-compiled C# variant, not C++.

However, I suspect the numbers from Midori's experiment may not hold up in Swift.

Ah, so that's where the study was from. There was actually a great blog series by one of the main engineers about Project Midori and the tales of its safety work :). The reaction to it and the pushback in the Windows community make me think that they were either theoretical baboons or geniuses, as sometimes quoting that project can be so polarising :).

···

Sent from my iPhone
On 9 Aug 2016, at 08:27, Brent Royal-Gordon via swift-evolution <swift-evolution@swift.org> wrote:

On Aug 7, 2016, at 9:36 PM, Chris Lattner via swift-evolution <swift-evolution@swift.org> wrote:

Midori used a generational mark-and-sweep garbage collector, so it didn't need to write implicit `finally` blocks to release objects owned by stack frames. Swift would. That could easily eat up the promised 7% code size savings, and the reduced ability to jump past frames could similarly damage the speed improvements.

I'm not saying I have the numbers to prove that it does; I don't. But given our different constraints, there are good reasons to doubt we'd see the same results.

--
Brent Royal-Gordon
Architechies

I think you may be missing Chris's point here.

Exception ABIs trade off between two different cases: when the callee throws and when it doesn't. (There are multiple dimensions of trade-off here, but let's just talk about cycle-count performance.) Suppose that a compiler can implement a call to have cost C if it just "un-implements" exceptions, the way that a C++ compiler does when they're disabled. If we hand-wave a bit, we can pretend that all the costs are local and just say that any particular ABI will add cost N to calls that don't throw and cost Y to calls that do. Therefore, if calls throw with probability P, ABI 1 will be faster than ABI 2 if:
   Y_1 * P + N_1 * (1 - P) < Y_2 * P + N_2 * (1 - P)

So what is P? Well, there's a really important difference between programming languages.

In C++ or C#, you have to compute P as a proportion of every call made by the program. (Technically, C++ has a way to annotate that a function doesn't throw, and it's possible under very specific circumstances for a C++ or C# implementation to prove that even without an annotation; but for the most part, every call must be assumed to be able to throw.) Even if exceptions were idiomatically used in C++ for error reporting the way they are in Java and C#, the number of calls to such "failable" functions would still be completely negligible compared to the number of calls to functions that literally cannot throw unless (maybe!) the system runs out of memory. Therefore, P is tiny — maybe one in a trillion, or one in a million in C# if the programmer hasn't yet discovered the non-throwing APIs for testing file existence. At that kind of ratio, it becomes imperative to do basically anything you can to move costs out of N.

But in Swift, arbitrary functions can't throw. When computing P, the denominator only contains calls to functions that really can report some sort of ordinary semantic failure. (Unless you're in something like a rethrows function, but most of those are pretty easy to specialize for non-throwing argument functions.) So P is a lot higher just to begin with.
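
To put rough (and entirely made-up) numbers on that: suppose ABI 1 checks a flag and adds N_1 = 2 cycles to every non-throwing call and Y_1 = 10 cycles to a throwing one, while zero-cost ABI 2 adds N_2 = 0 and Y_2 = 10,000. At P = 1/1,000,000, ABI 1 costs about 2 cycles per call on average and ABI 2 about 0.01, so the tables win easily; at P = 1/100, ABI 1 costs about 2.1 cycles per call and ABI 2 about 100, so the flag check wins by a wide margin. With these toy numbers the crossover is around P ≈ 1/5,000, which is why the proportion of calls that can throw at all matters so much.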

Furthermore, there are knock-on effects here. Error-handling is a really nice way to solve certain kinds of language problem. (Aside: I keep running into people writing things like JSON deserializers who for some reason insist on making their lives unnecessarily difficult by manually messing around with Optional/Either results or writing their own monad + combinator libraries or what not. Folks, there's an error monad built into the language, and it is designed exactly for this kind of error-propagation problem; please just use it.) But we know from experience that the expense (and other problems) of exception-handling in other languages drives people towards other, much more awkward mechanisms when they expect P to be higher, even if "higher" is still just 1 in 100 or so. That's awful; to us, that's a total language failure.
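
For what it's worth, here's a tiny sketch of the throwing style being described (DecodeError, the field helpers, and User are all invented for the example): every field access either yields a value or throws, and intermediate layers propagate failures with a plain 'try' instead of threading Optionals or a hand-rolled Either through every signature.

enum DecodeError: Error {
    case missingField(String)
    case wrongType(field: String, expected: String)
}

func stringField(_ field: String, in json: [String: Any]) throws -> String {
    guard let raw = json[field] else { throw DecodeError.missingField(field) }
    guard let value = raw as? String else {
        throw DecodeError.wrongType(field: field, expected: "String")
    }
    return value
}

func intField(_ field: String, in json: [String: Any]) throws -> Int {
    guard let raw = json[field] else { throw DecodeError.missingField(field) }
    guard let value = raw as? Int else {
        throw DecodeError.wrongType(field: field, expected: "Int")
    }
    return value
}

struct User {
    let name: String
    let age: Int

    init(json: [String: Any]) throws {
        name = try stringField("name", in: json)   // failures propagate with plain 'try'
        age = try intField("age", in: json)
    }
}

do {
    let user = try User(json: ["name": "Ada", "age": 36])
    print(user.name, user.age)
} catch {
    print("decoding failed: \(error)")
}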

So the shorter summary of the longer performance argument is that (1) we think that our language choices already make P high enough that the zero-cost trade-offs are questionable and (2) those trade-offs, while completely correct for other languages, are known to severely distort the ways that programmers use exceptions in those languages, leading to worse code and more bugs. So that's why we aren't using zero-cost exceptions in Swift.

John.

···

On Aug 9, 2016, at 8:19 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

“Zero cost” EH is also *extremely* expensive in the case where an error is actually thrown in normal use cases. This makes it completely inappropriate for use in APIs where errors are expected in edge cases (e.g. file not found errors).

Anecdote: I work with a web service that gets several million hits a day. Management loves to use the percentage of succeeding web requests as a measure of overall health. The problem with that metric is that when a web request fails, clients fall into an unhealthy state and stop issuing requests for a while. Therefore, one failing request prevents maybe twenty more that would all have failed if the client hadn't bailed out, but these don't show up in the statistics. This makes us look much better than we actually are.

If I had any amount of experience with DTrace, I'd write a script that logs syscall errors to try and see how the programs that I use react to failures. I'm almost certain that when one thing stops working, most programs back out of a much bigger process and don't retry right away. When a program fails to open a file, it's also failing to read/write to it, or whatever else people normally do after they open files. These things are also expensive, and they're rarely the type of things that you need to (or even just can) retry in a tight loop. My perception is that the immediate cost of failing, even with expensive throwing, is generally dwarfed by the immediate cost of succeeding, so we're not necessarily losing out on much.

And if that isn't the case, there are alternatives to throwing that people are already embracing, to the point where error handling practices seem fractured.

I don't really know what to expect in terms of discussion, especially since it may boil down to "we're experts in this field and you're just peasants"

I’m not sure why you think the Swift team would say something that derogatory. I hope there is no specific action that has led to this belief. If there is, then please let me know.

Of course not. All of you have been very nice and patient with us peasants, at least as far as "us" includes me. :) This was meant as a light-hearted reflection on discussing intimate parts of the language, where my best perspective is probably well-understood desktop/server development, whereas the core team has to see that but also needs a high focus on other things that don't even cross my mind (or at least, that's the heroic picture I have of you guys).

For instance, my "expensive" stops at "takes a while". Your "expensive" might mean "takes a while and drains the very finite energy reserves that we have on this tiny device" or something still more expansive. These differences are not always immediately obvious.

However, as linked above, someone did for Microsoft platforms (for Microsoft-platform-style errors) and found that there is an impact.

C++ and Swift are completely different languages in this respect, so the analysis doesn’t translate over.

The analysis was (probably?) done over C++ and HRESULTs but with the intention of applying it to another language (Midori), and it most likely validated the approach of other languages (essentially everything .NET-based). Several findings of the Midori team are being exported to Microsoft's new APIs, notably the async everywhere and exceptions everywhere paradigms, and these APIs are callable from both so-called managed programs (GCed) and unmanaged programs (ref-counted).

Swift operations don't tend to throw very much, which is a net positive, but it seems to me that comparing the impact of Swift throws with another language's throws is relatively fair. C# isn't shy of FileNotFoundExceptions, for instance.

“Zero cost” EH is also *extremely* expensive in the case where an error is actually thrown in normal use cases. This makes it completely inappropriate for use in APIs where errors are expected in edge cases (e.g. file not found errors).

Anecdote: I work with a web service that gets several million hits a day. Management loves to use the percentage of succeeding web requests as a measure of overall health. The problem with that metric is that when a web request fails, clients fall into an unhealthy state and stop issuing requests for a while. Therefore, one failing request prevents maybe twenty more that would all have failed if the client hadn't bailed out, but these don't show up in the statistics. This makes us look much better than we actually are.

If I had any amount of experience with DTrace, I'd write a script that logs syscall errors to try and see how the programs that I use react to failures. I'm almost certain that when one thing stops working, most programs back out of a much bigger process and don't retry right away. When a program fails to open a file, it's also failing to read/write to it, or whatever else people normally do after they open files. These things are also expensive, and they're rarely the type of things that you need to (or even just can) retry in a tight loop. My perception is that the immediate cost of failing, even with expensive throwing, is generally dwarfed by the immediate cost of succeeding, so we're not necessarily losing out on much.

And if that isn't the case, there are alternatives to throwing that people are already embracing, to the point where error handling practices seem fractured.

I don't really know what to expect in terms of discussion, especially since it may boil down to "we're experts in this field and you're just peasants"

I’m not sure why you think the Swift team would say something that derogatory. I hope there is no specific action that has led to this belief. If there is, then please let me know.

Of course not. All of you have been very nice and patient with us peasants, at least as far as "us" includes me. :)

Sorry for derailing the actual error handling discussion. I just wanted to say this:

Don't think less of yourself or devalue your experience and opinions because you don't write a compiler or a programming language for a living. I don't do either of those two things either.

A diverse set of perspectives and experiences will only benefit the design of the language and its libraries.

As an example, later when it's time to discuss concurrency, different contexts and perspectives will have different priorities. For some it will be about responsive UIs, for others it will be parallelism, fault tolerance, distributed programming, and more.

If your perspective isn't represented in those discussions then that could mean that valuable information isn't taken into the appropriate consideration.

Even if certain topics can get heated at times, someone's _idea_ or point of view can be discussed, reasoned about, and turned down without any of it being directed at that person.

So, just like me, try not to get discouraged by all the other smart people on this mailing list who have different experience and know so much about things that you and I don't. :)

That said, back to the discussion about error handling ;)

···

This was meant as a light-hearted reflection on discussing intimate parts of the language, where my best perspective is probably well-understood desktop/server development, whereas the core team has to see that but also needs a high focus on other things that don't even cross my mind (or at least, that's the heroic picture I have of you guys).

For instance, my "expensive" stops at "takes a while". Your "expensive" might mean "takes a while and drains the very finite energy reserves that we have on this tiny device" or something still more expansive. These differences are not always immediately obvious.

However, as linked above, someone did for Microsoft platforms (for Microsoft-platform-style errors) and found that there is an impact.

C++ and Swift are completely different languages in this respect, so the analysis doesn’t translate over.

The analysis was (probably?) done over C++ and HRESULTs but with the intention of applying it to another language (Midori), and it most likely validated the approach of other languages (essentially everything .NET-based). Several findings of the Midori team are being exported to Microsoft's new APIs, notably the async everywhere and exceptions everywhere paradigms, and these APIs are callable from both so-called managed programs (GCed) and unmanaged programs (ref-counted).

Swift operations don't tend to throw very much, which is a net positive, but it seems to me that comparing the impact of Swift throws with another language's throws is relatively fair. C# isn't shy of FileNotFoundExceptions, for instance.

Yes, and many C++ projects build with -fno-exceptions because of the huge code and metadata bloat associated with them. This is one of many mistakes in C++/Java/etc that we do not want to repeat with Swift. Additionally, unlike Java, Swift doesn’t generally have the benefit of a JIT compiler that can lazily create these structures on demand.

Yes. I believe that you are familiar with a couple of such C++ projects. :)

As we've seemingly agreed, Swift only needs metadata and code for functions that call functions that throw. Also, I understand that we don't need EH tables with the current approach, but my understanding is that the cleanup code needs to exist regardless of how control is transferred to it. (I have no notion of how big EH table entries have to be, so I won't attempt any comparison with the size of the equivalent dispatch code. I'm pretty sure that it's unfavorable.)

All in all, I guess that this is an area of Swift that goes against trends that are overall satisfying in my domain. Of course, I'm writing all of this because I like Swift and I want an ABI that will make it the best choice (or close) for the things that I want to do with it. I'm a little concerned that what's best for a watch isn't necessarily what's best for a server and that what's best for a compiler back-end isn't necessarily what's best for a UI program, but Swift tries to position itself for all of these.

Félix

No, I fully understand this. My point is that this doesn't seem to accurately represent the cost of exceptions.

In a JSON parser, since the topic has been brought up, calls don't fail independently with probability P. You have a run of calls that succeed and *at most one* call that fails, because once you hit a failure, you stop parsing. This heavily biases calls in favor of succeeding, which is what I tried to illustrate with my anecdote.

I haven't attempted statistics in a while, but that looks like a geometric distribution to me: if each call fails with probability P, you expect about 1/P successful calls before the first failure. That would give something like:

N_1 * (1/P) + Y_1 < N_2 * (1/P) + Y_2

in which the N term completely dominates Y, especially as P gets smaller.

Félix

···

On Aug 9, 2016, at 4:22 PM, John McCall <rjmccall@apple.com> wrote:

On Aug 9, 2016, at 8:19 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

“Zero cost” EH is also *extremely* expensive in the case where an error is actually thrown in normal use cases. This makes it completely inappropriate for use in APIs where errors are expected in edge cases (e.g. file not found errors).

Anecdote: I work with a web service that gets several million hits a day. Management loves to use the percentage of succeeding web requests as a measure of overall health. The problem with that metric is that when a web request fails, clients fall into an unhealthy state and stop issuing requests for a while. Therefore, one failing request prevents maybe twenty more that would all have failed if the client hadn't bailed out, but these don't show up in the statistics. This makes us look much better than we actually are.

If I had any amount of experience with DTrace, I'd write a script that logs syscall errors to try and see how the programs that I use react to failures. I'm almost certain that when one thing stops working, most programs back out of a much bigger process and don't retry right away. When a program fails to open a file, it's also failing to read/write to it, or whatever else people normally do after they open files. These things are also expensive, and they're rarely the type of things that you need to (or even just can) retry in a tight loop. My perception is that the immediate cost of failing, even with expensive throwing, is generally dwarfed by the immediate cost of succeeding, so we're not necessarily losing out on much.

And if that isn't the case, there are alternatives to throwing that people are already embracing, to the point where error handling practices seem fractured.

I don't really know what to expect in terms of discussion, especially since it may boil down to "we're experts in this field and you're just peasants"

I’m not sure why you think the Swift team would say something that derogatory. I hope there is no specific action that has led to this belief. If there is, then please let me know.

Of course not. All of you have been very nice and patient with us peasants, at least as far as "us" includes me. :) This was meant as a light-hearted reflection on discussing intimate parts of the language, where my best perspective is probably well-understood desktop/server development, whereas the core team has to see that but also needs a high focus on other things that don't even cross my mind (or at least, that's the heroic picture I have of you guys).

For instance, my "expensive" stops at "takes a while". Your "expensive" might mean "takes a while and drains the very finite energy reserves that we have on this tiny device" or something still more expansive. These differences are not always immediately obvious.

However, as linked above, someone did for Microsoft platforms (for Microsoft-platform-style errors) and found that there is an impact.

C++ and Swift are completely different languages in this respect, so the analysis doesn’t translate over.

The analysis was (probably?) done over C++ and HRESULTs but with the intention of applying it to another language (Midori), and it most likely validated the approach of other languages (essentially everything .NET-based). Several findings of the Midori team are being exported to Microsoft's new APIs, notably the async everywhere and exceptions everywhere paradigms, and these APIs are callable from both so-called managed programs (GCed) and unmanaged programs (ref-counted).

Swift operations don't tend to throw very much, which is a net positive, but it seems to me that comparing the impact of Swift throws with another language's throws is relatively fair. C# isn't shy of FileNotFoundExceptions, for instance.

I think you may be missing Chris's point here.

Exception ABIs trade off between two different cases: when the callee throws and when it doesn't. (There are multiple dimensions of trade-off here, but let's just talk about cycle-count performance.) Suppose that a compiler can implement a call to have cost C if it just "un-implements" exceptions, the way that a C++ compiler does when they're disabled. If we hand-wave a bit, we can pretend that all the costs are local and just say that any particular ABI will add cost N to calls that don't throw and cost Y to calls that do. Therefore, if calls throw with probability P, ABI 1 will be faster than ABI 2 if:
   Y_1 * P + N_1 * (1 - P) < Y_2 * P + N_2 * (1 - P)

So what is P? Well, there's a really important difference between programming languages.

In C++ or C#, you have to compute P as a proportion of every call made by the program. (Technically, C++ has a way to annotate that a function doesn't throw, and it's possible under very specific circumstances for a C++ or C# implementation to prove that even without an annotation; but for the most part, every call must be assumed to be able to throw.) Even if exceptions were idiomatically used in C++ for error reporting the way they are in Java and C#, the number of calls to such "failable" functions would still be completely negligible compared to the number of calls to functions that literally cannot throw unless (maybe!) the system runs out of memory. Therefore, P is tiny — maybe one in a trillion, or one in a million in C# if the programmer hasn't yet discovered the non-throwing APIs for testing file existence. At that kind of ratio, it becomes imperative to do basically anything you can to move costs out of N.

But in Swift, arbitrary functions can't throw. When computing P, the denominator only contains calls to functions that really can report some sort of ordinary semantic failure. (Unless you're in something like a rethrows function, but most of those are pretty easy to specialize for non-throwing argument functions.) So P is a lot higher just to begin with.

Furthermore, there are knock-on effects here. Error-handling is a really nice way to solve certain kinds of language problem. (Aside: I keep running into people writing things like JSON deserializers who for some reason insist on making their lives unnecessarily difficult by manually messing around with Optional/Either results or writing their own monad + combinator libraries or what not. Folks, there's an error monad built into the language, and it is designed exactly for this kind of error-propagation problem; please just use it.) But we know from experience that the expense (and other problems) of exception-handling in other languages drives people towards other, much more awkward mechanisms when they expect P to be higher, even if "higher" is still just 1 in 100 or so. That's awful; to us, that's a total language failure.

So the shorter summary of the longer performance argument is that (1) we think that our language choices already make P high enough that the zero-cost trade-offs are questionable and (2) those trade-offs, while completely correct for other languages, are known to severely distort the ways that programmers use exceptions in those languages, leading to worse code and more bugs. So that's why we aren't using zero-cost exceptions in Swift.

John.

I laughed! I've had the same thoughts.

···

Sent from my iPhone

On 10 Aug 2016, at 01:22, John McCall via swift-evolution <swift-evolution@swift.org> wrote:

Aside: I keep running into people writing things like JSON deserializers who for some reason insist on making their lives unnecessarily difficult by manually messing around with Optional/Either results or writing their own monad + combinator libraries or what not. Folks, there's an error monad built into the language, and it is designed exactly for this kind of error-propagation problem; please just use it.

No, I fully understand this. My point is that this doesn't seem to accurately represent the cost of exceptions.

In a JSON parser, since the topic has been brought up, calls don't fail independently with probability P. You have a run of calls that succeed and *at most one* call that fails, because once you hit a failure, you stop parsing. This heavily biases calls in favor of succeeding, which is what I tried to illustrate with my anecdote.

This is true of JSON deserialization, where typically you wouldn't do something like prospectively deserialize a value one way and then try something else if that fails. But it's not true of, say, a parser for a more ambiguous language, or for any number of other applications in which non-terminal failures are more common.

As Joe said, our intended ABI here is to get this down to a single branch-on-nonzero-register instruction immediately after the return, which is an overhead we're pretty comfortable with.

John.

···

On Aug 9, 2016, at 7:00 PM, Félix Cloutier <felixcca@yahoo.ca> wrote:

I haven't attempted statistics in a while, but that looks like a geometric distribution to me: if each call fails with probability P, you expect about 1/P successful calls before the first failure. That would give something like:

N_1 * (1/P) + Y_1 < N_2 * (1/P) + Y_2

in which the N term completely dominates Y, especially as P gets smaller.

Félix

On Aug 9, 2016, at 4:22 PM, John McCall <rjmccall@apple.com> wrote:

On Aug 9, 2016, at 8:19 AM, Félix Cloutier via swift-evolution <swift-evolution@swift.org> wrote:

“Zero cost” EH is also *extremely* expensive in the case where an error is actually thrown in normal use cases. This makes it completely inappropriate for use in APIs where errors are expected in edge cases (e.g. file not found errors).

Anecdote: I work with a web service that gets several million hits a day. Management loves to use the percentage of succeeding web requests as a measure of overall health. The problem with that metric is that when a web request fails, clients fall into an unhealthy state and stop issuing requests for a while. Therefore, one failing request prevents maybe twenty more that would all have failed if the client hadn't bailed out, but these don't show up in the statistics. This makes us look much better than we actually are.

If I had any amount of experience with DTrace, I'd write a script that logs syscall errors to try and see how the programs that I use react to failures. I'm almost certain that when one thing stops working, most programs back out of a much bigger process and don't retry right away. When a program fails to open a file, it's also failing to read/write to it, or whatever else people normally do after they open files. These things are also expensive, and they're rarely the type of things that you need to (or even just can) retry in a tight loop. My perception is that the immediate cost of failing, even with expensive throwing, is generally dwarfed by the immediate cost of succeeding, so we're not necessarily losing out on much.

And if that isn't the case, there are alternatives to throwing that people are already embracing, to the point where error handling practices seem fractured.

I don't really know what to expect in terms of discussion, especially since it may boil down to "we're experts in this field and you're just peasants"

I’m not sure why you think the Swift team would say something that derogatory. I hope there is no specific action that has led to this belief. If there is, then please let me know.

Of course not. All of you have been very nice and patient with us peasants, at least as far as "us" includes me. :) This was meant as a light-hearted reflection on discussing intimate parts of the language, where my best perspective is probably well-understood desktop/server development, whereas the core team has to see that but also needs a high focus on other things that don't even cross my mind (or at least, that's the heroic picture I have of you guys).

For instance, my "expensive" stops at "takes a while". Your "expensive" might mean "takes a while and drains the very finite energy reserves that we have on this tiny device" or something still more expansive. These differences are not always immediately obvious.

However, as linked above, someone did for Microsoft platforms (for Microsoft-platform-style errors) and found that there is an impact.

C++ and Swift are completely different languages in this respect, so the analysis doesn’t translate over.

The analysis was (probably?) done over C++ and HRESULTs but with the intention of applying it to another language (Midori), and it most likely validated the approach of other languages (essentially everything .NET-based). Several findings of the Midori team are being exported to Microsoft's new APIs, notably the async everywhere and exceptions everywhere paradigms, and these APIs are callable from both so-called managed programs (GCed) and unmanaged programs (ref-counted).

Swift operations don't tend to throw very much, which is a net positive, but it seems to me that comparing the impact of Swift throws with another language's throws is relatively fair. C# isn't shy of FileNotFoundExceptions, for instance.

I think you may be missing Chris's point here.

Exception ABIs trade off between two different cases: when the callee throws and when it doesn't. (There are multiple dimensions of trade-off here, but let's just talk about cycle-count performance.) Suppose that a compiler can implement a call to have cost C if it just "un-implements" exceptions, the way that a C++ compiler does when they're disabled. If we hand-wave a bit, we can pretend that all the costs are local and just say that any particular ABI will add cost N to calls that don't throw and cost Y to calls that do. Therefore, if calls throw with probability P, ABI 1 will be faster than ABI 2 if:
   Y_1 * P + N_1 * (1 - P) < Y_2 * P + N_2 * (1 - P)

So what is P? Well, there's a really important difference between programming languages.

In C++ or C#, you have to compute P as a proportion of every call made by the program. (Technically, C++ has a way to annotate that a function doesn't throw, and it's possible under very specific circumstances for a C++ or C# implementation to prove that even without an annotation; but for the most part, every call must be assumed to be able to throw.) Even if exceptions were idiomatically used in C++ for error reporting the way they are in Java and C#, the number of calls to such "failable" functions would still be completely negligible compared to the number of calls to functions that literally cannot throw unless (maybe!) the system runs out of memory. Therefore, P is tiny — maybe one in a trillion, or one in a million in C# if the programmer hasn't yet discovered the non-throwing APIs for testing file existence. At that kind of ratio, it becomes imperative to do basically anything you can to move costs out of N.

But in Swift, arbitrary functions can't throw. When computing P, the denominator only contains calls to functions that really can report some sort of ordinary semantic failure. (Unless you're in something like a rethrows function, but most of those are pretty easy to specialize for non-throwing argument functions.) So P is a lot higher just to begin with.

Furthermore, there are knock-on effects here. Error-handling is a really nice way to solve certain kinds of language problem. (Aside: I keep running into people writing things like JSON deserializers who for some reason insist on making their lives unnecessarily difficult by manually messing around with Optional/Either results or writing their own monad + combinator libraries or what not. Folks, there's an error monad built into the language, and it is designed exactly for this kind of error-propagation problem; please just use it.) But we know from experience that the expense (and other problems) of exception-handling in other languages drives people towards other, much more awkward mechanisms when they expect P to be higher, even if "higher" is still just 1 in 100 or so. That's awful; to us, that's a total language failure.

So the shorter summary of the longer performance argument is that (1) we think that our language choices already make P high enough that the zero-cost trade-offs are questionable and (2) those trade-offs, while completely correct for other languages, are known to severely distort the ways that programmers use exceptions in those languages, leading to worse code and more bugs. So that's why we aren't using zero-cost exceptions in Swift.

John.