Throws? and throws!

I really hate to bring Java up, but I really do think it got at least one thing right with its error system, namely that one subset of errors (`RuntimeException` and its subclasses) isn't enforced by the compiler but can optionally be caught.

Under that model, all of the following would then be valid:

// Explicitly declared to throw, but doesn't have to be caught
func runtimeThrowingFunc1() throws {
  throw SubclassOfRuntimeException()
}

// Not explicitly declared to throw, and doesn't have to be caught
func runtimeThrowingFunc2() {
  throw SubclassOfRuntimeException()
}

// Doesn't have to be caught...
runtimeThrowingFunc1()
do {
  // But you can if you want
  try runtimeThrowingFunc1()
} catch is SubclassOfRuntimeException {
} catch is RuntimeException {
}

// Doesn't have to be caught...
runtimeThrowingFunc2()
do {
  // But you can if you want
  try runtimeThrowingFunc2()
} catch is SubclassOfRuntimeException {
} catch is RuntimeException {
}
I'm not explicitly advocating for this model (it is based on Java's exception system, after all), but I think it's a place to start at the very least.

-thislooksfun (tlf)

···

On Jan 16, 2017, at 7:59 PM, Kenny Leung via swift-evolution <swift-evolution@swift.org> wrote:

It would also enable the testing of fatal conditions, which would be great.

-Kenny

On Jan 16, 2017, at 5:25 PM, Chris Lattner via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Jan 16, 2017, at 3:57 PM, David Waite via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

My interpretation is that he was advocating a future where a precondition’s failure killed less than the entire process. Instead, shut down some smaller portion like a thread, actor, or container like .Net's app domains (which for those more familiar with Javascript could be loosely compared with Web Workers).

Today - if you wanted a Swift server where overflowing addition didn’t interrupt your service for multiple users, you would need to use something like a pre-fork model (with each request handled by a separate swift process)

That's the difference between CLI and desktop apps where the process is providing services for a single user, and a server where it may be providing a service for thousands or millions of users.

Agreed, I’d also really like to see this some day. It seems like a natural outgrowth of the concurrency model, if it goes the direction of actors. If you’re interested, I speculated on this direction in this talk:
http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf

-Chris


I don’t entirely agree for two reasons:

1. If a runtime error is thrown and caught, there is no way to guarantee the logical consistency of the state of the running process because who knows which stack frames were unwound without cleaning up properly. There is no way to safely catch a run time exception and recover.

2. People abuse RuntimeException to simplify their error handling code: “if I throw a RuntimeException I don’t need a zillion catch clauses or throws declarations”. Furthermore, if a library uses RuntimeExceptions where it should be using Exceptions there is no way to know if its API has changed except by reading the (hopefully up to date) documentation.

Problem 2 makes me particularly bitter because JEE programmers seem to have learned that allowing code to throw null pointer exceptions has no real consequences for them, so they become very cavalier about doing their null checks. The user sees a 500 error page, the sysadmin gets a 200-line stack trace in the log, but the system carries on. If you are lucky enough to have the source code to diagnose the problem, it usually turns out that the exception was thrown on a line with eight chained method calls. When you do track the problem down, it turns out you forgot a line in the properties file or something similar, but the programmer couldn’t be bothered to help you out because it was easier just to let the null pointer exception happen.

I like Swift’s error handling because programming errors (like force unwrapping nil) are punished mercilessly by process termination and errors caused by external factors cannot be completely ignored. You have to at least put an empty catch block somewhere.

···

On 17 Jan 2017, at 02:38, thislooksfun via swift-evolution <swift-evolution@swift.org> wrote:

I really hate to bring Java up, but I really do think it got at least one thing right with its error system, namely that one subset of error (namely `RuntimeException`), wouldn't be enforced by the compiler, but could optionally be caught.

I totally support the idea of emergency shutdown measures
(e.g. save document for recovery),

In general, though, when a precondition is violated, it means your
program state is compromised in an arbitrarily bad way. Unfortunately,
that applies equally across process boundaries as it does across thread
boundaries, if there's some kind of static guarantee of safety as might
be provided by actors. This means you need a way to make decisions
about which kinds of precondition violations should be considered
recoverable as long as you're willing to abandon the job, and which
really do need to be fatal for the whole process... and I don't know if
anyone's really ever figured that problem out. It'd be cool if Swift
could solve it.

···

on Mon Jan 16 2017, Chris Lattner <swift-evolution@swift.org> wrote:

On Jan 16, 2017, at 3:57 PM, David Waite via swift-evolution > <swift-evolution@swift.org> wrote:

My interpretation is that he was advocating a future where a
precondition’s failure killed less than the entire process. Instead,
shut down some smaller portion like a thread, actor, or container

like .Net's app domains (which for those more familiar with
Javascript could be loosely compared with Web Workers).

Today - if you wanted a Swift server where overflowing addition
didn’t interrupt your service for multiple users, you would need to
use something like a pre-fork model (with each request handled by a
separate swift process)

That's the difference between CLI and desktop apps where the process
is providing services for a single user, and a server where it may
be providing a service for thousands or millions of users.

Agreed, I’d also really like to see this some day. It seems like a
natural outgrowth of the concurrency model, if it goes the direction
of actors.

If you’re interested, I speculated on this direction in this talk:
http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf
<http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf>

--
-Dave

Swift prefers that potential runtime crash points be visible in the code. You can ignore a thrown error and crash instead, but the code will say `try!`. You can force-unwrap an Optional and crash if it is nil, but the code will say `!`.

Allowing `try` to be omitted would obscure those crash points from humans reading the code. It would no longer be possible to read call sites and be able to distinguish which ones might crash due to an uncaught error.

(There are exceptions to this rule. Ordinary arithmetic and array access are checked at runtime, and the default syntax is one that may crash.)
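To make that concrete, here is a minimal sketch of the visible-crash-point convention (loadConfig and the settings dictionary are made-up names, not real APIs):

let settings: [String: Int] = ["port": 8080]
let port = settings["port"]!       // `!` marks the call site that would crash if the key were missing

func loadConfig() throws -> String { return "ok" }
let config = try! loadConfig()     // `try!` marks the call site that would crash if an error were thrown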

···

On Jan 12, 2017, at 4:46 PM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

On Thu, Jan 12, 2017 at 6:27 PM, Jonathan Hull <jhull@gbis.com <mailto:jhull@gbis.com>> wrote:

Also, ‘try’ is still required to explicitly mark a potential error propagation point, which is what it was designed to do. You don’t have ‘try’ with the variants because it is by default no longer a propagation point (unless you make it one explicitly with ’try’).

If this is quite safe and more convenient, why then shouldn't it be the behavior for `throws`? (That is, why not just allow people to call throwing functions without `try` and crash if the error isn't caught? It'd be a purely additive proposal that's backwards compatible for all currently compiling code.)

--
Greg Parker gparker@apple.com <mailto:gparker@apple.com> Runtime Wrangler

FWIW, that was meant to be an argument a fortiori (well, ad absurdum, I
suppose). If `throws!` is a good idea (whose purpose is to permit omission
of `try!` at call sites), then how much nicer would it be if we could just
do that with `throws`. It is, after all, technically source compatible. But
it is not, I agree, such a good idea with `throws`, so I'm skeptical of its
worth with the increased burden of new syntax.

···

On Thu, Jan 12, 2017 at 7:34 PM, Greg Parker <gparker@apple.com> wrote:

On Jan 12, 2017, at 4:46 PM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

On Thu, Jan 12, 2017 at 6:27 PM, Jonathan Hull <jhull@gbis.com> wrote:

Also, ‘try’ is still required to explicitly mark a potential error
propagation point, which is what it was designed to do. You don’t have
‘try’ with the variants because it is by default no longer a propagation
point (unless you make it one explicitly with ’try’).

If this is quite safe and more convenient, why then shouldn't it be the
behavior for `throws`? (That is, why not just allow people to call throwing
functions without `try` and crash if the error isn't caught? It'd be a
purely additive proposal that's backwards compatible for all currently
compiling code.)

Swift prefers that potential runtime crash points be visible in the code.
You can ignore a thrown error and crash instead, but the code will say
`try!`. You can force-unwrap an Optional and crash if it is nil, but the
code will say `!`.

Allowing `try` to be omitted would obscure those crash points from humans
reading the code. It would no longer be possible to read call sites and be
able to distinguish which ones might crash due to an uncaught error.

(There are exceptions to this rule. Ordinary arithmetic and array access
are checked at runtime, and the default syntax is one that may crash.)

--

Greg Parker gparker@apple.com Runtime Wrangler

Also, ‘try’ is still required to explicitly mark a potential error propagation point, which is what it was designed to do. You don’t have ‘try’ with the variants because it is by default no longer a propagation point (unless you make it one explicitly with ’try’).

If this is quite safe and more convenient, why then shouldn't it be the behavior for `throws`? (That is, why not just allow people to call throwing functions without `try` and crash if the error isn't caught? It'd be a purely additive proposal that's backwards compatible for all currently compiling code.)

Swift prefers that potential runtime crash points be visible in the code. You can ignore a thrown error and crash instead, but the code will say `try!`. You can force-unwrap an Optional and crash if it is nil, but the code will say `!`.

Allowing `try` to be omitted would obscure those crash points from humans reading the code. It would no longer be possible to read call sites and be able to distinguish which ones might crash due to an uncaught error.

(There are exceptions to this rule. Ordinary arithmetic and array access are checked at runtime, and the default syntax is one that may crash.)

Indirectly dividing by zero [1] // the compiler won’t let you directly divide by zero
Overflow or underflow [2]
Array index out of range

Aside from those, are there any other runtime “exceptions”?

[1] https://en.wikipedia.org/wiki/Undefined_(mathematics)#In_arithmetic <https://en.wikipedia.org/wiki/Undefined_(mathematics)#In_arithmetic>

// [2] example:
let number = Int16.max
print(number + 1) // crash: Illegal instruction (arithmetic overflow)
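For completeness, hedged sketches of the other two cases in that list (the divide function and the values are made up; the trapping lines are left commented out):

func divide(_ a: Int, by b: Int) -> Int {
    return a / b                 // traps at runtime when b == 0; the compiler only rejects literal zero divisors
}
// _ = divide(10, by: 0)         // uncommenting this traps: division by zero

let values = [1, 2, 3]
// print(values[5])              // uncommenting this traps: index out of range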

···

On Jan 12, 2017, at 5:34 PM, Greg Parker via swift-evolution <swift-evolution@swift.org> wrote:

On Jan 12, 2017, at 4:46 PM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Thu, Jan 12, 2017 at 6:27 PM, Jonathan Hull <jhull@gbis.com <mailto:jhull@gbis.com>> wrote:

--
Greg Parker gparker@apple.com <mailto:gparker@apple.com> Runtime Wrangler


My intended framing of this does not seem to be coming across in my arguments. I am not thinking of this as a way to avoid typing ‘try!’ or ‘try?’. This is not intended to replace any of the current uses of ‘throws’. Rather, it is intended to replace trapping and nil-returning functions where converting it to throw would be burdensome in the most common use cases, but still desirable in less common use cases. In my mind, it is only enabling the author to provide extra information and flexibility, compared to the current behavior.

For example, let’s say I have a failable initializer, which could fail for 2 or 3 different reasons, and that the vast majority of use-cases I only care whether it succeeded or not (which is why nil-returning was chosen)… but there may be a rare case or two where I really would prefer to probe deeper (and changing it to a throwing initializer would inhibit the majority cases). Then using ’throws?’ allows the primary usage to remain unchanged, while allowing users to opt-in to throwing behavior when desired.
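A hedged sketch of what that might look like with the proposed spelling (not valid Swift today; the Image type and its failure reasons are made up for illustration):

struct Image {
    // 'throws?' is the proposed syntax: callers get Optional behavior by default.
    init(contentsOf path: String) throws? {
        // ...could fail because the file is missing, unreadable, or corrupt...
    }
}

let thumbnail = Image(contentsOf: "photo.png")         // common case: behaves like a failable initializer
do {
    let original = try Image(contentsOf: "photo.png")  // rare case: opt in to the underlying error
} catch {
    print("failed to load:", error)
}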

Right now I end up making multiple functions, which are identical except for throw vs nil-return, and must now be kept in sync. I’ll admit it isn’t terribly common, but it has come up enough that I think it would still be useful.

The other argument I will make is one of symmetry. We have 3 different types of error handling in Swift: throwing, optional-returns, and trapping. There is already some ability to convert between these:

If you have a throwing function:
  ‘try?’ allows you to convert to optional-return
  ‘try!’ allows you to convert to trapping

If you have an optional-return:
  ‘!’ allows you to convert to trapping
  you are unable to convert to throwing (because it requires extra info which isn’t available)

If you have a trapping function, you are unable to convert to either.

With ‘throws?’ you have an optional return which you can convert to throwing with ‘try’

With ‘throws!’ you have a trapping function where:
  ‘try?’ allows you to convert to optional-return
  ‘try’ allows you to convert to throwing

Thus, ‘throws?’ and ‘throws!’ allow you to provide optional-return and trapping functions where extra information is provided so that it is possible to convert to throwing when desired. In cases where this conversion is not appropriate, the author would simply continue to use the current methods.
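A minimal sketch of that symmetry, using a made-up parse function (the ‘throws?’/‘throws!’ lines are the proposed syntax from this thread, not valid Swift today):

enum ParseError: Error { case empty, notANumber }
func parse(_ s: String) throws -> Int {
    guard !s.isEmpty else { throw ParseError.empty }
    guard let n = Int(s) else { throw ParseError.notANumber }
    return n
}

let a = try? parse("42")    // throwing converted to optional-return
let b = try! parse("42")    // throwing converted to trapping
let c = Int("42")!          // optional-return converted to trapping

// Proposed additions, sketched as comments only:
// func parseOrNil(_ s: String) throws? -> Int   // optional-return by default; 'try' opts back into throwing
// func parseOrDie(_ s: String) throws! -> Int   // trapping by default; 'try'/'try?' opt back in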

Basically it is useful in designs where optional-return or trapping were ultimately chosen, but there was also a strong case to be made for making it a throwing function. I think the fears of people using it instead of ‘throws’ are unfounded because they already have the ability to use optionals or trapping… this just mitigates some of the losses from those choices.

Does that make more sense?

Thanks,
Jon

···

On Jan 12, 2017, at 5:34 PM, Greg Parker <gparker@apple.com> wrote:

On Jan 12, 2017, at 4:46 PM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Thu, Jan 12, 2017 at 6:27 PM, Jonathan Hull <jhull@gbis.com <mailto:jhull@gbis.com>> wrote:

Also, ‘try’ is still required to explicitly mark a potential error propagation point, which is what it was designed to do. You don’t have ‘try’ with the variants because it is by default no longer a propagation point (unless you make it one explicitly with ’try’).

If this is quite safe and more convenient, why then shouldn't it be the behavior for `throws`? (That is, why not just allow people to call throwing functions without `try` and crash if the error isn't caught? It'd be a purely additive proposal that's backwards compatible for all currently compiling code.)

Swift prefers that potential runtime crash points be visible in the code. You can ignore a thrown error and crash instead, but the code will say `try!`. You can force-unwrap an Optional and crash if it is nil, but the code will say `!`.

Allowing `try` to be omitted would obscure those crash points from humans reading the code. It would no longer be possible to read call sites and be able to distinguish which ones might crash due to an uncaught error.

(There are exceptions to this rule. Ordinary arithmetic and array access are checked at runtime, and the default syntax is one that may crash.)

--
Greg Parker gparker@apple.com <mailto:gparker@apple.com> Runtime Wrangler

I really hate to bring Java up, but I really do think it got at least one thing right with its error system, namely that one subset of errors (`RuntimeException` and its subclasses) isn't enforced by the compiler but can optionally be caught.

I don’t entirely agree for two reasons:

1. If a runtime error is thrown and caught, there is no way to guarantee the logical consistency of the state of the running process because who knows which stack frames were unwound without cleaning up properly. There is no way to safely catch a run time exception and recover.

From a Java perspective, clearly that's not true.

It may well be true for the current implementation of Swift, and there's questions about how to clean up objects with reference counts outstanding (since there's no garbage collector). It isn't necessarily the case that it isn't possible, but it does require some additional stack unwinding; and the implementation of that may be too cumbersome/impractical/undesirable to occur.

2. People abuse RuntimeException to simplify their error handling code: “if I throw a RuntimeException I don’t need a zillion catch clauses or throws declarations”. Furthermore, if a library uses RuntimeExceptions where it should be using Exceptions there is no way to know if its API has changed except by reading the (hopefully up to date) documentation.

Problem 2 makes me particularly bitter because JEE programmers seem to have learned that allowing code to throw null pointer exceptions has no real consequences for them, so they become very cavalier about doing their null checks. The user sees a 500 error page, the sysadmin gets a 200-line stack trace in the log, but the system carries on. If you are lucky enough to have the source code to diagnose the problem, it usually turns out that the exception was thrown on a line with eight chained method calls. When you do track the problem down, it turns out you forgot a line in the properties file or something similar, but the programmer couldn’t be bothered to help you out because it was easier just to let the null pointer exception happen.

That's a pretty poor example, and in any case, catching the individual failure would allow the system to continue processing subsequent requests, which is generally what's wanted. When you're working on large systems and with large data sets, there are always problematic items like this which have to be diagnosed sufficiently in order to be retried, or even handled manually. It has very little to do with the language and more to do with the quality of the data, which isn't something you always have control over.

I like Swift’s error handling because programming errors (like force unwrapping nil) are punished mercilessly by process termination and errors caused by external factors cannot be completely ignored. You have to at least put an empty catch block somewhere.

This is one of the significant problems in Swift at the moment for server-side logic. It may make sense to do this from a single-user application, but writing a server which is designed to handle hundreds or thousands of simultaneous clients can suffer from the fact that one bad client request can take out everyone's connection. In the server working group and in existing tools like Kitura/Vapor/Perfect etc. it's a non-trivial problem to solve, other than using a CGI like model where each request is handled by a single worker process that can terminate independently of the other requests in flight.

Alex

···

On 17 Jan 2017, at 11:10, Jeremy Pereira via swift-evolution <swift-evolution@swift.org> wrote:

On 17 Jan 2017, at 02:38, thislooksfun via swift-evolution <swift-evolution@swift.org> wrote:

Bringing it back towards the initial post, what if there was a separation between true needs-to-take-down-the-entire-system trapping and things like out-of-bounds and overflow errors which could stop at thread/actor bounds (or in some cases even be recovered)?

The latter were the ones I was targeting with my proposal. They live in this grey area, because honestly, they should be throwing errors if not for the performance overhead and usability issues. My solution was to give the compiler a way to know that this was the desired behavior and optimize the throwing away unless it was explicitly requested.

I guess another option would be to introduce a new concept for this grey type of error. Maybe instead of ‘fatalError’ you have something with a different name saying “this should only take down the current actor”… and then you add a well defined process for cleanup.

I would still really like to see the ability to turn this type of thing into normal throwing error handling, so maybe something like ‘fatalThrow’ which takes the same information as ‘throw’, so that it can be converted to a standard throw by the caller, but otherwise traps and takes down the actor. That would make certain types of algorithms much simpler for me.
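A purely hypothetical sketch of that ‘fatalThrow’ idea (none of this syntax exists; the element(at:in:) function and IndexError are made up for illustration):

enum IndexError: Error { case outOfBounds(Int) }

func element(at index: Int, in array: [Int]) -> Int {
    guard array.indices.contains(index) else {
        fatalThrow(IndexError.outOfBounds(index))   // traps (taking down the actor) unless the caller opts in
    }
    return array[index]
}

let x = element(at: 0, in: [1, 2, 3])        // as today: an out-of-bounds index would trap
let y = try? element(at: 9, in: [1, 2, 3])   // opt-in: the trap becomes a catchable error, so y is nil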

Thanks,
Jon

···

On Jan 17, 2017, at 11:49 AM, Dave Abrahams via swift-evolution <swift-evolution@swift.org> wrote:

on Mon Jan 16 2017, Chris Lattner <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Jan 16, 2017, at 3:57 PM, David Waite via swift-evolution >> <swift-evolution@swift.org> wrote:

My interpretation is that he was advocating a future where a
precondition’s failure killed less than the entire process. Instead,
shut down some smaller portion like a thread, actor, or container

like .Net's app domains (which for those more familiar with
Javascript could be loosely compared with Web Workers).

Today - if you wanted a Swift server where overflowing addition
didn’t interrupt your service for multiple users, you would need to
use something like a pre-fork model (with each request handled by a
separate swift process)

That's the difference between CLI and desktop apps where the process
is providing services for a single user, and a server where it may
be providing a service for thousands or millions of users.

Agreed, I’d also really like to see this some day. It seems like a
natural outgrowth of the concurrency model, if it goes the direction
of actors.

If you’re interested, I speculated on this direction in this talk:
http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf
<http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf>

I totally support the idea of emergency shutdown measures
(e.g. save document for recovery),

In general, though, when a precondition is violated, it means your
program state is compromised in an arbitrarily bad way. Unfortunately,
that applies equally across process boundaries as it does across thread
boundaries, if there's some kind of static guarantee of safety as might
be provided by actors. This means you need a way to make decisions
about which kinds of precondition violations should be considered
recoverable as long as you're willing to abandon the job, and which
really do need to be fatal for the whole process... and I don't know if
anyone's really ever figured that problem out. It'd be cool if Swift
could solve it.

--
-Dave


My intended framing of this does not seem to be coming across in my
arguments. I am not thinking of this as a way to avoid typing ‘try!’ or
‘try?’. This is not intended to replace any of the current uses of
‘throws’. Rather, it is intended to replace trapping and nil-returning
functions where converting it to throw would be burdensome in the most
common use cases, but still desirable in less common use cases. In my
mind, it is only enabling the author to provide extra information and
flexibility, compared to the current behavior.

For example, let’s say I have a failable initializer, which could fail for
2 or 3 different reasons, and that the vast majority of use-cases I only
care whether it succeeded or not (which is why nil-returning was chosen)…
but there may be a rare case or two where I really would prefer to probe
deeper (and changing it to a throwing initializer would inhibit the
majority cases). Then using ’throws?’ allows the primary usage to remain
unchanged, while allowing users to opt-in to throwing behavior when desired.

Right now I end up making multiple functions, which are identical except
for throw vs nil-return, and must now be kept in sync. I’ll admit it isn’t
terribly common, but it has come up enough that I think it would still be
useful.

As you say, I think this is a pretty niche use case. When you are in
control of the code, it's trivial to write a second function that wraps the
throwing function, returning an optional value on error. The only thing
you'd need to keep in sync would be the declaration, not the function body,
and that isn't truly onerous on the rare occasion when this is at issue.

The other argument I will make is one of symmetry. We have 3 different
types of error handling in Swift: throwing, optional-returns, and trapping.

As the Error Handling Rationale document has pointed out, these three
different types of error handling are meant for different _kinds_ of error.
The idea is that ideally the choice of what kind of error handling to use
shouldn't be down to taste or other arbitrary criteria, but should reflect
whether we're dealing with a recoverable error (throws), simple domain
error (return nil), or logical error (trap). That much can be determined at
the point of declaration. At the use site, there are tools to allow the end
user to handle these errors in a variety of ways, but there is a logic
behind allowing conversions between some and not all combinations:

* A logical error is meant to be unrecoverable and thus cannot be converted
to either nil or throw. To call a function that traps is to assert that the
function's preconditions are met. If it's a possibility that the
preconditions cannot be met, it should be handled before calling the
function. A trap represents a programming mistake that should be fixed by
changing the code so as not to trap. There are adequate solutions to the
few instances where an error that currently traps might not always have
to be fatal: in the case of array indices, for instance, there have been
proposals to allow more lenient subscripting that doesn't trap, at the cost
of extra overhead--of course, you can already implement this for yourself
in an extension.

* A simple domain error fails in only one obvious way and doesn't need an
error; the end user can always decide that a failure should be handled by
trapping using `!`--in essence, the user is asserting that the occurrence
of a simple domain error at that use site is a logical error. It shouldn't
be useful to convert nil to an error, because a simple domain error should
be able to fail in only one way; if the function fails in more than one
way, the function should throw, as it's no longer a simple domain error.

* A recoverable error can fail in one or more ways, and how you recover may
depend on how it failed; a user can always decide that they'll always
recover in the same way by using `try?`, or they can assert that it's a
logical error to fail at all using `try!`. The choice is up to the user.

As far as I can tell, `throws?` and `throws!` do not change these choices;
they simply say that a recoverable error should be handled by default as a
simple domain error or a logical error, which in the Swift error handling
model should be up to the author who's using the function and not the
author who's declaring it.
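To make the three kinds concrete, here is a minimal sketch (the functions and error cases are made up for illustration):

// Recoverable error: can fail in meaningfully distinct ways, so it throws.
enum AgeError: Error { case notANumber, outOfRange }
func parseAge(_ s: String) throws -> Int {
    guard let n = Int(s) else { throw AgeError.notANumber }
    guard (0...150).contains(n) else { throw AgeError.outOfRange }
    return n
}

// Simple domain error: fails in exactly one obvious way, so it returns nil.
let maybeNumber = Int("forty-two")           // nil

// Logical error: a violated precondition traps instead of returning.
func nthPrime(_ n: Int) -> Int {
    precondition(n > 0, "n must be positive")
    return [2, 3, 5, 7, 11][n - 1]           // placeholder body; itself traps past the small table
}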

There is already some ability to convert between these:

If you have a throwing function:
‘try?’ allows you to convert to optional-return
‘try!’ allows you to convert to trapping

If you have an optional-return:
‘!’ allows you to convert to trapping
you are unable to convert to throwing (because it requires extra info
which isn’t available)

If you have a trapping function, you are unable to convert to either.

With ‘throws?’ you have an optional return which you can convert to
throwing with ‘try’

With ‘throws!’ you have a trapping function where:
‘try?’ allows you to convert to optional-return
‘try’ allows you to convert to throwing

Thus, ‘throws?’ and ‘throws!’ allow you to provide optional-return and
trapping functions where extra information is provided so that it is
possible to convert to throwing when desired. In cases where this
conversion is not appropriate, the author would simply continue to use the
current methods.

Basically it is useful in designs where optional-return or trapping were
ultimately chosen, but there was also a strong case to be made for making
it a throwing function.

This is totally the opposite use case from that outlined above. Here, you
don't control the code and the original author decided to return an
optional value or to trap. In essence, you're saying that the original
author made a mistake, and what the author considered to be an
unrecoverable error should be recoverable. However, you won't be able to
squeeze useful errors out of it unless you write additional diagnostic
logic yourself. This is already possible to do in an extension, where you
can add a throwing function that checks the arguments before forwarding to
the failable or trapping function. As far as I can tell, `throws!` doesn't
provide you with any more tools to do so.

I think the fears of people using it instead of ‘throws’ are unfounded

because they already have the ability to use optionals or trapping… this
just mitigates some of the losses from those choices.

Does that make more sense?

Maybe I'm misunderstanding something. An author that writes a function that
throws offers the greatest number of choices to their end users for how to
handle errors. You're saying that in designing libraries you choose not to
use `throws` because you don't want to burden your users with `try?` or
`try!`, which as you say allows users to handle these errors in any way
they choose, even though your functions fail in more than one non-trivial
way. This represents a fundamental disagreement with the Swift error
handling rationale, and again the disagreement boils down to: are the four
letters in `try!` a burden? I would just think of it as making every
throwing function at most four letters longer in name.

Put another way, the Swift error handling design says that at the point of
declaration, the choice of `throws` vs. returning nil should be based on
how many ways there are to fail (or more accurately, how many meaningfully
distinct ways there are to recover from failure), not how often the user
cares about that information. If there are two meaningfully distinct ways
to recover from failure in your function, but users will likely choose to
recover from both failures in the same way 99.9% of the time, still choose
`throws`. If there is only one way to recover, choose to return nil. If
there are none, choose to trap.

Put another way, going back to your original statement of motivation:

There are some cases where it would be nice to throw errors, but errors are

rarely expected in most use cases, so the overhead of ‘try’, etc… would
make things unusable.

I disagree with this statement. The overhead of `try` essentially never
tips the balance between unusable and usable, for the same reason that
making a function name three or four letters longer essentially never tips
the balance between usable and unusable.

Thus fatalError or optionals are used instead.

In the Swift error handling model, the frequency with which a user might
have to write `try!` or `try?` should play no role in the author's choice
of throwing vs. returning nil vs. fatalError.

For example, operators like ‘+’ could never throw because adding ’try’
everywhere would make arithmetic unbearable.

As we discussed above, AFAICT, addition traps for performance reasons, as
Swift aspires to be usable for systems programming.

Even if that weren't the case, it would never throw because there's only
one meaningful way in which addition can fail; thus, if anything, it'd be a
failable operation. This would probably not be terrible (other than for
performance), as nil values could be propagated to the end of any
calculation, at which point a user would write `!` or handle the issue in a
more sophisticated way.

(As a digression, for FP values, NaN offers yet another way of signaling an
error, which due to IEEE conformance Swift is obliged to keep distinct;
however, as can be evidenced by the fact that the NaN payload is pretty
much never used, it can be thought of as a counterpart to nil as opposed to
Error.)

And finally, even if an operator function could fail in multiple ways
(we're really getting to very hypothetical hypotheticals here), writing
`try!` all the time might look silly and non-Swift users might then mock
the language, but I dispute the contention that it would make things
"unbearable."

Thanks,

···

On Sat, Jan 14, 2017 at 8:03 PM, Jonathan Hull <jhull@gbis.com> wrote:

Jon

On Jan 12, 2017, at 5:34 PM, Greg Parker <gparker@apple.com> wrote:

On Jan 12, 2017, at 4:46 PM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org> wrote:

On Thu, Jan 12, 2017 at 6:27 PM, Jonathan Hull <jhull@gbis.com> wrote:

Also, ‘try’ is still required to explicitly mark a potential error
propagation point, which is what it was designed to do. You don’t have
‘try’ with the variants because it is by default no longer a propagation
point (unless you make it one explicitly with ’try’).

If this is quite safe and more convenient, why then shouldn't it be the
behavior for `throws`? (That is, why not just allow people to call throwing
functions without `try` and crash if the error isn't caught? It'd be a
purely additive proposal that's backwards compatible for all currently
compiling code.)

Swift prefers that potential runtime crash points be visible in the code.
You can ignore a thrown error and crash instead, but the code will say
`try!`. You can force-unwrap an Optional and crash if it is nil, but the
code will say `!`.

Allowing `try` to be omitted would obscure those crash points from humans
reading the code. It would no longer be possible to read call sites and be
able to distinguish which ones might crash due to an uncaught error.

(There are exceptions to this rule. Ordinary arithmetic and array access
are checked at runtime, and the default syntax is one that may crash.)

--
Greg Parker gparker@apple.com Runtime Wrangler

My intended framing of this does not seem to be coming across in my arguments. I am not thinking of this as a way to avoid typing ‘try!’ or ‘try?’. This is not intended to replace any of the current uses of ‘throws’. Rather, it is intended to replace trapping and nil-returning functions where converting it to throw would be burdensome in the most common use cases, but still desirable in less common use cases. In my mind, it is only enabling the author to provide extra information and flexibility, compared to the current behavior.

I'm more or less neutral towards the proposal, but to express my perception, one part seems similar to the use of "!" in variable declarations (like IB does):
It just makes force unwrapping (or, here: assuming that no error happened) the default, but leaves all other options intact.

But AFAICS, force-unwrapped variables are considered a bad practice that should be avoided (nearly) wherever possible…
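For reference, a small playground-style sketch of that comparison (the variable and value are made up):

var name: String! = nil
name = "Swift"
print(name.count)              // implicit force-unwrap by default; would trap if name were still nil
if let n = name { print(n) }   // but the other Optional-handling options remain available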

The "try?-replacement" could be more useful for me: In my codebase, I have several throwing functions paired with computed properties (returning an Optional of the same type) that directly map to them, and at the call site, I usually don't care about the error.
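Something like this pairing, as a hedged sketch with made-up names:

struct Document {
    // Throwing form, for the rare caller who wants the reason for failure.
    func parsedTitle() throws -> String {
        // ...may throw for several distinct reasons...
        return "Untitled"
    }

    // Hand-written optional twin that has to be kept in sync by hand.
    var title: String? { return try? parsedTitle() }
}

let doc = Document()
let t = doc.title              // at the call site the error is usually ignored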

I'm not sure if it's useful enough to justify new syntactic sugar for it, though — especially as I think it's not very intuitive.

I really hate to bring Java up, but I really do think it got at least one thing right with its error system, namely that one subset of errors (`RuntimeException` and its subclasses) isn't enforced by the compiler but can optionally be caught.

I don’t entirely agree for two reasons:

1. If a runtime error is thrown and caught, there is no way to guarantee the logical consistency of the state of the running process because who knows which stack frames were unwound without cleaning up properly. There is no way to safely catch a run time exception and recover.

From a Java perspective, clearly that's not true.

It absolutely is true.

     void someMethod()
     {
         acquireSomeLock();
         doSomethingThatIsNotDeclaredToThrowAnException(); // a RuntimeException can escape from here...
         releaseSomeLock();                                // ...and then this line never runs, leaking the lock
     }

It may well be true for the current implementation of Swift, and there's questions about how to clean up objects with reference counts outstanding (since there's no garbage collector). It isn't necessarily the case that it isn't possible, but it does require some additional stack unwinding; and the implementation of that may be too cumbersome/impractical/undesirable to occur.

Not all resources are reference counts or memory allocations.

2. People abuse RuntimeException to simplify their error handling code: “if I throw a RuntimeException I don’t need a zillion catch clauses or throws declarations”. Furthermore, if a library uses RuntimeExceptions where it should be using Exceptions there is no way to know if its API has changed except by reading the (hopefully up to date) documentation.

Problem 2 makes me particularly bitter because JEE programmers seem to have learned that allowing code to throw null pointer exceptions has no real consequences for them, so they become very cavalier about doing their null checks. The user sees a 500 error page, the sysadmin gets a 200-line stack trace in the log, but the system carries on. If you are lucky enough to have the source code to diagnose the problem, it usually turns out that the exception was thrown on a line with eight chained method calls. When you do track the problem down, it turns out you forgot a line in the properties file or something similar, but the programmer couldn’t be bothered to help you out because it was easier just to let the null pointer exception happen.

That's a pretty poor example, and in any case, catching the individual failure would allow the system to continue processing subsequent requests, which is generally what's wanted. When you're working on large systems and with large data sets, there are always problematic items like this which have to be diagnosed sufficiently in order to be retried, or even handled manually. It has very little to do with the language and more to do with the quality of the data, which isn't something you always have control over.

No, it’s not a poor example. I’ve seen it happen with real software in real production scenarios.

I like Swift’s error handling because programming errors (like force unwrapping nil) are punished mercilessly by process termination and errors caused by external factors cannot be completely ignored. You have to at least put an empty catch block somewhere.

This is one of the significant problems in Swift at the moment for server-side logic. It may make sense to do this from a single-user application, but writing a server which is designed to handle hundreds or thousands of simultaneous clients can suffer from the fact that one bad client request can take out everyone's connection. In the server working group and in existing tools like Kitura/Vapor/Perfect etc. it's a non-trivial problem to solve, other than using a CGI like model where each request is handled by a single worker process that can terminate independently of the other requests in flight.

I agree it’s a non-trivial problem to resolve. I’m saying that the Java “solution” of RuntimeExceptions doesn’t resolve it.

···

On 17 Jan 2017, at 11:28, Alex Blewitt <alblue@apple.com> wrote:

On 17 Jan 2017, at 11:10, Jeremy Pereira via swift-evolution <swift-evolution@swift.org> wrote:

On 17 Jan 2017, at 02:38, thislooksfun via swift-evolution <swift-evolution@swift.org> wrote:

Alex

Experience from ARC in Objective-C++ is that exception-safe reference counting costs a lot of code size and execution time. The threat of exceptions prevents many reference count optimizations.

Adding reference count operations to the unwind tables (e.g. DWARF) instead of using handler code to fix refcounts might help. Limiting the exception threat by using no-throw by default might help.

···

On Jan 17, 2017, at 3:28 AM, Alex Blewitt via swift-evolution <swift-evolution@swift.org> wrote:

On 17 Jan 2017, at 11:10, Jeremy Pereira via swift-evolution <swift-evolution@swift.org> wrote:

On 17 Jan 2017, at 02:38, thislooksfun via swift-evolution <swift-evolution@swift.org> wrote:

I really hate to bring Java up, but I really do think it got at least one thing right with its error system, namely that one subset of errors (`RuntimeException` and its subclasses) isn't enforced by the compiler but can optionally be caught.

I don’t entirely agree for two reasons:

1. If a runtime error is thrown and caught, there is no way to guarantee the logical consistency of the state of the running process because who knows which stack frames were unwound without cleaning up properly. There is no way to safely catch a run time exception and recover.

From a Java perspective, clearly that's not true.

It may well be true for the current implementation of Swift, and there's questions about how to clean up objects with reference counts outstanding (since there's no garbage collector). It isn't necessarily the case that it isn't possible, but it does require some additional stack unwinding; and the implementation of that may be too cumbersome/impractical/undesirable to occur.

--
Greg Parker gparker@apple.com <mailto:gparker@apple.com> Runtime Wrangler

As one example of an algorithm which would be helped, I recently had a project which had to do arithmetic on random values. Figuring out if an operation is going to overflow/underflow without overflowing/underflowing requires a bunch of lines of code which obscure the main logic. It would be much easier to just do the operation, then catch the error if it happens and apply the special case for that. The code would be much more readable.
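As a rough sketch of the difference being described (the values are arbitrary; this uses the standard library's overflow-reporting addition rather than the proposed syntax — addingReportingOverflow in current Swift, spelled Int.addWithOverflow in the Swift 3 era):

let a = Int.max - 3
let b = 10

// Pre-checking by hand obscures the main logic:
if (b > 0) ? (a > Int.max - b) : (a < Int.min - b) {
    print("would overflow; apply the special case")
} else {
    print("sum:", a + b)
}

// The reporting form is shorter, but still interrupts the arithmetic:
let (sum, didOverflow) = a.addingReportingOverflow(b)
if didOverflow {
    print("overflowed; apply the special case")
} else {
    print("sum:", sum)
}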

The same arguments could be made of user entered values, which are hard to reason about before hand. Being able to move forward as if they are not going to cause a problem and then catching any problems which do happen (so that the appropriate error can be shown… perhaps modelessly) is much clearer and also much more friendly than crashing their program (potentially taking any unsaved data with it).

Thanks,
Jon

···

On Jan 17, 2017, at 2:45 PM, Jonathan Hull via swift-evolution <swift-evolution@swift.org> wrote:

Bringing it back towards the initial post, what if there was a separation between true needs-to-take-down-the-entire-system trapping and things like out-of-bounds and overflow errors which could stop at thread/actor bounds (or in some cases even be recovered)?

The latter were the ones I was targeting with my proposal. They live in this grey area, because honestly, they should be throwing errors if not for the performance overhead and usability issues. My solution was to give the compiler a way to know that this was the desired behavior and optimize the throwing away unless it was explicitly requested.

I guess another option would be to introduce a new concept for this grey type of error. Maybe instead of ‘fatalError’ you have something with a different name saying “this should only take down the current actor”… and then you add a well defined process for cleanup.

I would still really like to see the ability to turn this type of thing into normal throwing error handling, so maybe something like ‘fatalThrow’ which takes the same information as ‘throw’, so that it can be converted to a standard throw by the caller, but otherwise traps and takes down the actor. That would make certain types of algorithms much simpler for me.

Thanks,
Jon

On Jan 17, 2017, at 11:49 AM, Dave Abrahams via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

on Mon Jan 16 2017, Chris Lattner <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Jan 16, 2017, at 3:57 PM, David Waite via swift-evolution >>> <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

My interpretation is that he was advocating a future where a
precondition’s failure killed less than the entire process. Instead,
shut down some smaller portion like a thread, actor, or container

like .Net's app domains (which for those more familiar with
Javascript could be loosely compared with Web Workers).

Today - if you wanted a Swift server where overflowing addition
didn’t interrupt your service for multiple users, you would need to
use something like a pre-fork model (with each request handled by a
separate swift process)

That's the difference between CLI and desktop apps where the process
is providing services for a single user, and a server where it may
be providing a service for thousands or millions of users.

Agreed, I’d also really like to see this some day. It seems like a
natural outgrowth of the concurrency model, if it goes the direction
of actors.

If you’re interested, I speculated on this direction in this talk:
http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf
<http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf>

I totally support the idea of emergency shutdown measures
(e.g. save document for recovery),

In general, though, when a precondition is violated, it means your
program state is compromised in an arbitrarily bad way. Unfortunately,
that applies equally across process boundaries as it does across thread
boundaries, if there's some kind of static guarantee of safety as might
be provided by actors. This means you need a way to make decisions
about which kinds of precondition violations should be considered
recoverable as long as you're willing to abandon the job, and which
really do need to be fatal for the whole process... and I don't know if
anyone's really ever figured that problem out. It'd be cool if Swift
could solve it.

--
-Dave


Bringing it back towards the initial post, what if there was a
separation between true needs-to-take-down-the-entire-system trapping and
things like out-of-bounds and overflow errors which could stop at
thread/actor bounds (or in some cases even be recovered)?

The latter were the ones I was targeting with my proposal. They live
in this grey area, because honestly, they should be throwing errors if
not for the performance overhead and usability issues.

I fundamentally disagree with that statement. There is value in
declaring certain program behaviors illegal, and in general for things
like out-of-bounds access and overflow no sensible recovery (where
“recovery” means something that would allow the program to continue
reliably) is possible.

···

on Tue Jan 17 2017, Jonathan Hull <jhull-AT-gbis.com> wrote:

My solution was to give the compiler a way to know that this was the
desired behavior and optimize the throwing away unless it was
explicitly requested.

I guess another option would be to introduce a new concept for this
grey type of error. Maybe instead of ‘fatalError’ you have something
with a different name saying “this should only take down the current
actor”… and then you add a well defined process for cleanup.

I would still really like to see the ability to turn this type of
thing into normal throwing error handling, so maybe something like
‘fatalThrow’ which takes the same information as ‘throw’, so that it
can be converted to a standard throw by the caller, but otherwise
traps and takes down the actor. That would make certain types of
algorithms much simpler for me.

Thanks,
Jon

On Jan 17, 2017, at 11:49 AM, Dave Abrahams via swift-evolution <swift-evolution@swift.org> wrote:

on Mon Jan 16 2017, Chris Lattner <swift-evolution@swift.org >> <mailto:swift-evolution@swift.org>> wrote:

On Jan 16, 2017, at 3:57 PM, David Waite via swift-evolution >>> <swift-evolution@swift.org> wrote:

My interpretation is that he was advocating a future where a
precondition’s failure killed less than the entire process. Instead,
shut down some smaller portion like a thread, actor, or container

like .Net's app domains (which for those more familiar with
Javascript could be loosely compared with Web Workers).

Today - if you wanted a Swift server where overflowing addition
didn’t interrupt your service for multiple users, you would need to
use something like a pre-fork model (with each request handled by a
separate swift process)

That's the difference between CLI and desktop apps where the process
is providing services for a single user, and a server where it may
be providing a service for thousands or millions of users.

Agreed, I’d also really like to see this some day. It seems like a
natural outgrowth of the concurrency model, if it goes the direction
of actors.

If you’re interested, I speculated on this direction in this talk:
http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf
<http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf>

I totally support the idea of emergency shutdown measures
(e.g. save document for recovery),

In general, though, when a precondition is violated, it means your
program state is compromised in an arbitrarily bad way. Unfortunately,
that applies equally across process boundaries as it does across thread
boundaries, if there's some kind of static guarantee of safety as might
be provided by actors. This means you need a way to make decisions
about which kinds of precondition violations should be considered
recoverable as long as you're willing to abandon the job, and which
really do need to be fatal for the whole process... and I don't know if
anyone's really ever figured that problem out. It'd be cool if Swift
could solve it.

--
-Dave


--
-Dave

I am a bit ambivalent on this: on the one hand I think that “catch all, bring down thread only” stimulates careless programming, and on the other hand it did save my bacon on more than one occasion.

In a perfect world we should do without this kind of feature.

In the real world we need it to meet deadlines imposed by people who don’t understand software design….

Regards,
Rien

Site: http://balancingrock.nl
Blog: http://swiftrien.blogspot.com
Github: http://github.com/Swiftrien
Project: http://swiftfire.nl

···

On 17 Jan 2017, at 23:45, Jonathan Hull via swift-evolution <swift-evolution@swift.org> wrote:

Bringing it back towards the initial post, what if there was a separation between true needs-to-take-down-the-entire-system trapping and things like out-of-bounds and overflow errors which could stop at thread/actor bounds (or in some cases even be recovered)?

The latter were the ones I was targeting with my proposal. They live in this grey area, because honestly, they should be throwing errors if not for the performance overhead and usability issues. My solution was to give the compiler a way to know that this was the desired behavior and optimize the throwing away unless it was explicitly requested.

I guess another option would be to introduce a new concept for this grey type of error. Maybe instead of ‘fatalError’ you have something with a different name saying “this should only take down the current actor”… and then you add a well defined process for cleanup.

I would still really like to see the ability to turn this type of thing into normal throwing error handling, so maybe something like ‘fatalThrow’ which takes the same information as ‘throw’, so that it can be converted to a standard throw by the caller, but otherwise traps and takes down the actor. That would make certain types of algorithms much simpler for me.

Thanks,
Jon

On Jan 17, 2017, at 11:49 AM, Dave Abrahams via swift-evolution <swift-evolution@swift.org> wrote:

on Mon Jan 16 2017, Chris Lattner <swift-evolution@swift.org> wrote:

On Jan 16, 2017, at 3:57 PM, David Waite via swift-evolution >>> <swift-evolution@swift.org> wrote:

My interpretation is that he was advocating a future where a
precondition’s failure killed less than the entire process. Instead,
shut down some smaller portion like a thread, actor, or container

like .Net's app domains (which for those more familiar with
Javascript could be loosely compared with Web Workers).

Today - if you wanted a Swift server where overflowing addition
didn’t interrupt your service for multiple users, you would need to
use something like a pre-fork model (with each request handled by a
separate swift process)

That's the difference between CLI and desktop apps where the process
is providing services for a single user, and a server where it may
be providing a service for thousands or millions of users.

Agreed, I’d also really like to see this some day. It seems like a
natural outgrowth of the concurrency model, if it goes the direction
of actors.

If you’re interested, I speculated on this direction in this talk:
http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf
<http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf>

I totally support the idea of emergency shutdown measures
(e.g. save document for recovery),

In general, though, when a precondition is violated, it means your
program state is compromised in an arbitrarily bad way. Unfortunately,
that applies equally across process boundaries as it does across thread
boundaries, if there's some kind of static guarantee of safety as might
be provided by actors. This means you need a way to make decisions
about which kinds of precondition violations should be considered
recoverable as long as you're willing to abandon the job, and which
really do need to be fatal for the whole process... and I don't know if
anyone's really ever figured that problem out. It'd be cool if Swift
could solve it.

--
-Dave


And finally, even if an operator function could fail in multiple ways (we're really getting to very hypothetical hypotheticals here), writing `try!` all the time might look silly and non-Swift users might then mock the language, but I dispute the contention that it would make things "unbearable.”

The whole point of ‘try’/’try!’ is to make the user consider how to handle the error cases. If it gets used everywhere, then you have a boy who cried wolf situation where it is seen as noise and ignored… which definitely affects usability. (Take Windows' error dialogs as an example of this phenomenon).

···

On Jan 14, 2017, at 7:29 PM, Xiaodi Wu <xiaodi.wu@gmail.com> wrote:

On Sat, Jan 14, 2017 at 8:03 PM, Jonathan Hull <jhull@gbis.com <mailto:jhull@gbis.com>> wrote:
My intended framing of this does not seem to be coming across in my arguments. I am not thinking of this as a way to avoid typing ‘try!’ or ‘try?’. This is not intended to replace any of the current uses of ‘throws’. Rather, it is intended to replace trapping and nil-returning functions where converting it to throw would be burdensome in the most common use cases, but still desirable in less common use cases. In my mind, it is only enabling the author to provide extra information and flexibility, compared to the current behavior.

For example, let’s say I have a failable initializer, which could fail for 2 or 3 different reasons, and that in the vast majority of use cases I only care whether it succeeded or not (which is why nil-returning was chosen)… but there may be a rare case or two where I really would prefer to probe deeper (and changing it to a throwing initializer would inhibit the majority of cases). Then using ’throws?’ allows the primary usage to remain unchanged, while allowing users to opt in to throwing behavior when desired.

Right now I end up making multiple functions, which are identical except for throw vs nil-return, and must now be kept in sync. I’ll admit it isn’t terribly common, but it has come up enough that I think it would still be useful.

As you say, I think this is a pretty niche use case. When you are in control of the code, it's trivial to write a second function that wraps the throwing function, returning an optional value on error. The only thing you'd need to keep in sync would be the declaration, not the function body, and that isn't truly onerous on the rare occasion when this is at issue.
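A minimal sketch of that wrapper pattern in current Swift; the function names and error cases here are invented purely for illustration:

enum ParseError: Error { case empty, malformed }

// The throwing version is the single source of truth and carries the reason.
func parseRecord(_ text: String) throws -> [String] {
  guard !text.isEmpty else { throw ParseError.empty }
  let fields = text.split(separator: ",").map { String($0) }
  guard fields.count >= 2 else { throw ParseError.malformed }
  return fields
}

// The optional-returning convenience is a one-line wrapper; only this
// declaration has to be kept in sync, never a second copy of the body.
func parseRecordIfValid(_ text: String) -> [String]? {
  return try? parseRecord(text)
}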

The other argument I will make is one of symmetry. We have 3 different types of error handling in Swift: throwing, optional-returns, and trapping.

As the Error Handling Rationale document has pointed out, these three different types of error handling are meant for different _kinds_ of error. The idea is that ideally the choice of what kind of error handling to use shouldn't be down to taste or other arbitrary criteria, but should reflect whether we're dealing with a recoverable error (throws), simple domain error (return nil), or logical error (trap). That much can be determined at the point of declaration. At the use site, there are tools to allow the end user to handle these errors in a variety of ways, but there is a logic behind allowing conversions between some and not all combinations:

* A logical error is meant to be unrecoverable and thus cannot be converted to either nil or throw. To call a function that traps is to assert that the function's preconditions are met. If it's a possibility that the preconditions cannot be met, it should be handled before calling the function. A trap represents a programming mistake that should be fixed by changing the code so as not to trap. There are adequate solutions to the few instances where an error that currently traps might not always have to be fatal: in the case of array indices, for instance, there have been proposals to allow more lenient subscripting that doesn't trap, at the cost of extra overhead--of course, you can already implement this for yourself in an extension.

* A simple domain error fails in only one obvious way and doesn't need an error; the end user can always decide that a failure should be handled by trapping using `!`--in essence, the user is asserting that the occurrence of a simple domain error at that use site is a logical error. It shouldn't be useful to convert nil to an error, because a simple domain error should be able to fail in only one way; if the function fails in more than one way, the function should throw, as it's no longer a simple domain error.

* A recoverable error can fail in one or more ways, and how you recover may depend on how it failed; a user can always decide that they'll always recover in the same way by using `try?`, or they can assert that it's a logical error to fail at all using `try!`. The choice is up to the user.

As far as I can tell, `throws?` and `throws!` do not change these choices; they simply say that a recoverable error should be handled by default as a simple domain error or a logical error, which in the Swift error handling model should be up to the author who's using the function and not the author who's declaring it.
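A hedged sketch of what those three declaration-site choices look like; the type and its members are invented only to illustrate the distinction:

struct Account {
  private(set) var balance: Int = 0

  // Recoverable error: more than one meaningfully distinct failure, so throw.
  enum WithdrawalError: Error { case insufficientFunds, accountFrozen }
  mutating func withdraw(_ amount: Int) throws {
    guard amount <= balance else { throw WithdrawalError.insufficientFunds }
    balance -= amount
  }

  // Simple domain error: exactly one obvious way to fail, so return nil.
  init?(identifier: String) {
    guard !identifier.isEmpty else { return nil }   // real lookup elided
  }

  // Logical error: violating the precondition is a programming mistake, so trap.
  mutating func deposit(_ amount: Int) {
    precondition(amount > 0, "deposits must be positive")
    balance += amount
  }
}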

There is already some ability to convert between these:

If you have a throwing function:
  ‘try?’ allows you to convert to optional-return
  ‘try!’ allows you to convert to trapping

If you have an optional-return:
  ‘!’ allows you to convert to trapping
  you are unable to convert to throwing (because it requires extra info which isn’t available)

If you have a trapping function, you are unable to convert to either.
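The same conversions, spelled out in code; `fetch` and `lookup` are hypothetical stand-ins for any throwing and any optional-returning function:

struct NotFound: Error {}

func fetch() throws -> Int { throw NotFound() }   // hypothetical throwing function
func lookup() -> Int? { return 42 }               // hypothetical optional-returning function

let asOptional = try? fetch()    // throwing -> optional-return
// let orCrash = try! fetch()    // throwing -> trapping (would trap right here)
let forced = lookup()!           // optional -> trapping (traps if the result is nil)
// optional -> throwing: not expressible, because there is no error to rethrow
// trapping -> anything: not expressible at the call site at all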

With ‘throws?’ you have an optional return which you can convert to throwing with ‘try’

With ‘throws!’ you have a trapping function where:
  ‘try?’ allows you to convert to optional-return
  ‘try’ allows you to convert to throwing

Thus, ‘throws?’ and ‘throws!’ allow you to provide optional-return and trapping functions where extra information is provided so that it is possible to convert to throwing when desired. In cases where this conversion is not appropriate, the author would simply continue to use the current methods.

Basically it is useful in designs where optional-return or trapping were ultimately chosen, but there was also a strong case to be made for making it a throwing function.

This is totally the opposite use case from that outlined above. Here, you don't control the code and the original author decided to return an optional value or to trap. In essence, you're saying that the original author made a mistake, and what the author considered to be an unrecoverable error should be recoverable. However, you won't be able to squeeze useful errors out of it unless you write additional diagnostic logic yourself. This is already possible to do in an extension, where you can add a throwing function that checks the arguments before forwarding to the failable or trapping function. As far as I can tell, `throws!` doesn't provide you with any more tools to do so.
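For concreteness, a sketch of that extension approach, wrapping the standard library's failable `Int.init?(_:)` in a throwing initializer; the error type and argument label are invented:

struct NotANumber: Error { let text: String }

extension Int {
  // Supply the error yourself and forward to the existing failable initializer.
  init(parsing text: String) throws {
    guard let value = Int(text) else { throw NotANumber(text: text) }
    self = value
  }
}

let n = try? Int(parsing: "12a")   // nil here, but a caller could also catch NotANumber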

I think the fears of people using it instead of ‘throws’ are unfounded because they already have the ability to use optionals or trapping… this just mitigates some of the losses from those choices.

Does that make more sense?

Maybe I'm misunderstanding something. An author that writes a function that throws offers the greatest number of choices to their end users for how to handle errors. You're saying that in designing libraries you choose not to use `throws` because you don't want to burden your users with `try?` or `try!`, which as you say allows users to handle these errors in any way they choose, even though your functions fail in more than one non-trivial way. This represents a fundamental disagreement with the Swift error handling rationale, and again the disagreement boils down to: are the four letters in `try!` a burden? I would just think of it as making every throwing function at most four letters longer in name.

Put another way, the Swift error handling design says that at the point of declaration, the choice of `throws` vs. returning nil should be based on how many ways there are to fail (or more accurately, how many meaningfully distinct ways there are to recover from failure), not how often the user cares about that information. If there are two meaningfully distinct ways to recover from failure in your function, but users will likely choose to recover from both failures in the same way 99.9% of the time, still choose `throws`. If there is only one way to recover, choose to return nil. If there are none, choose to trap.

Put another way, going back to your original statement of motivation:

There are some cases where it would be nice to throw errors, but errors are rarely expected in most use cases, so the overhead of ‘try’, etc… would make things unusable.

I disagree with this statement. The overhead of `try` essentially never tips the balance between unusable and usable, for the same reason that making a function name three or four letters longer essentially never tips the balance between usable and unusable.

Thus fatalError or optionals are used instead.

In the Swift error handling model, the frequency with which a user might have to write `try!` or `try?` should play no role in the author's choice of throwing vs. returning nil vs. fatalError.

For example, operators like ‘+’ could never throw because adding ’try’ everywhere would make arithmetic unbearable.

As we discussed above, AFAICT, addition traps for performance reasons, as Swift aspires to be usable for systems programming.

Even if that weren't the case, it would never throw because there's only one meaningful way in which addition can fail; thus, if anything, it'd be a failable operation. This would probably not be terrible (other than for performance), as nil values could be propagated to the end of any calculation, at which point a user would write `!` or handle the issue in a more sophisticated way.
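For what it's worth, a failable (rather than throwing) addition can already be sketched today without any language change; the helper name is invented, and the standard library also offers overflow-reporting operations:

// Returns nil instead of trapping when the sum would overflow.
func addingOrNil(_ a: Int, _ b: Int) -> Int? {
  if b > 0 && a > Int.max - b { return nil }   // would overflow upward
  if b < 0 && a < Int.min - b { return nil }   // would overflow downward
  return a + b
}

let sum = addingOrNil(Int.max, 1)   // nil, where a plain `+` would trap on overflow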

(As a digression, for FP values, NaN offers yet another way of signaling an error, which due to IEEE conformance Swift is obliged to keep distinct; however, as can be evidenced by the fact that the NaN payload is pretty much never used, it can be thought of as a counterpart to nil as opposed to Error.)

And finally, even if an operator function could fail in multiple ways (we're really getting to very hypothetical hypotheticals here), writing `try!` all the time might look silly and non-Swift users might then mock the language, but I dispute the contention that it would make things "unbearable."

Thanks,
Jon

On Jan 12, 2017, at 5:34 PM, Greg Parker <gparker@apple.com <mailto:gparker@apple.com>> wrote:

On Jan 12, 2017, at 4:46 PM, Xiaodi Wu via swift-evolution <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:

On Thu, Jan 12, 2017 at 6:27 PM, Jonathan Hull <jhull@gbis.com <mailto:jhull@gbis.com>> wrote:

Also, ‘try’ is still required to explicitly mark a potential error propagation point, which is what it was designed to do. You don’t have ‘try’ with the variants because it is by default no longer a propagation point (unless you make it one explicitly with ’try’).

If this is quite safe and more convenient, why then shouldn't it be the behavior for `throws`? (That is, why not just allow people to call throwing functions without `try` and crash if the error isn't caught? It'd be a purely additive proposal that's backwards compatible for all currently compiling code.)

Swift prefers that potential runtime crash points be visible in the code. You can ignore a thrown error and crash instead, but the code will say `try!`. You can force-unwrap an Optional and crash if it is nil, but the code will say `!`.

Allowing `try` to be omitted would obscure those crash points from humans reading the code. It would no longer be possible to read call sites and be able to distinguish which ones might crash due to an uncaught error.

(There are exceptions to this rule. Ordinary arithmetic and array access are checked at runtime, and the default syntax is one that may crash.)

--
Greg Parker gparker@apple.com <mailto:gparker@apple.com> Runtime Wrangler
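A small runnable sketch of the distinction Greg describes; `parse` is an invented throwing function:

func parse(_ s: String) throws -> Int {
  struct ParseFailure: Error {}
  guard let v = Int(s) else { throw ParseFailure() }
  return v
}

let maybe: Int? = 42
let values = [1, 2, 3]

let a = try! parse("7")   // visible crash point: `try!` says "trap if this throws"
let b = maybe!            // visible crash point: `!` says "trap if this is nil"
let c = values[0]         // the exception: `values[10]` would trap with no marker
let d = c + 1             // likewise: overflowing `+` traps with no marker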

And finally, even if an operator function could fail in multiple ways
(we're really getting to very hypothetical hypotheticals here), writing
`try!` all the time might look silly and non-Swift users might then mock
the language, but I dispute the contention that it would make things
"unbearable.”

The whole point of ‘try’/’try!’ is to make the user consider how to handle
the error cases. If it gets used everywhere, then you have a boy who cried
wolf situation where it is seen as noise and ignored… which definitely
affects usability. (Take Windows' error dialogs as an example of this
phenomenon).

In a hypothetical world where + was throwing, that would be a fair point,
and it would be something to balance against Greg's argument that `try!`
and `!` have value because they show all potential crash points at the
point of use. However, as this is very much a hypothetical, the more
salient point here is that there _aren't_ so many things that have multiple
meaningfully distinct ways of recovering from error.

In the present version of Swift, practical experience shows that, if
anything, people pay a great amount of attention (maybe too much) to
statements with `!`, even going so far as to forbid it in their house
style (definitely too extreme, IMO)! Meanwhile, `try?` simply can't be
ignored because the type system makes you unwrap the result at some point
down the line.
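For instance (with an invented function name):

func currentTemperature() throws -> Double { return 21.5 }   // hypothetical

let t = try? currentTemperature()   // t is Double?, not Double
// print(t + 1)                     // won't compile: the optional can't be ignored
print((t ?? 0) + 1)                 // it has to be unwrapped or defaulted eventually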

···


I really hate to bring Java up, but I really do think it got at least one thing right with its error system, namely that one subset of error (namely `RuntimeException`), wouldn't be enforced by the compiler, but could optionally be caught.

I don’t entirely agree for two reasons:

1. If a runtime error is thrown and caught, there is no way to guarantee the logical consistency of the state of the running process because who knows which stack frames were unwound without cleaning up properly. There is no way to safely catch a run time exception and recover.

From a Java perspective, clearly that's not true.

It absolutely is true.

    void someMethod()
    {
        acquireSomeLock();
        doSomethingThatIsNotDeclaredToThrowAnException();
        releaseSomeLock();
    }

The general way to perform locking in Java is to use a synchronized block around a particular resource (those are cleaned up by the Java VM when unwinding the stack), or to use a finally block to do the same. You can certainly write incorrect code in Java; the correct code would look like:

void someMethod() {
  acquireSomeLock();      // acquire before the try, so the finally block
  try {                   // only ever releases a lock that is actually held
    doSomethingThatIsNotDeclaredToThrowAnException();
  } finally {
    releaseSomeLock();
  }
}

It's more likely to be written as:

void someMethod() {
  synchronized(lockObject) {
    doSomethingThatIsNotDeclaredToThrowAnException();
  }
}

and where 'lockObject' is 'this', then the shorter form is:

synchronized void someMethod() {
   doSomethingThatIsNotDeclaredToThrowAnException();
}
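For comparison, the nearest Swift spelling of the finally-based cleanup is `defer`; the point of contention is that a `defer` block runs when an error is thrown, but nothing runs if the call traps instead. The lock type here is Foundation's NSLock; the function names are invented:

import Foundation

func doSomethingThatCanThrow() throws {}   // placeholder

func someMethod(lock: NSLock) throws {
  lock.lock()
  defer { lock.unlock() }          // runs on normal return and on thrown errors...
  try doSomethingThatCanThrow()    // ...but not if this call traps the process
}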

It may well be true for the current implementation of Swift, and there are questions about how to clean up objects with reference counts outstanding (since there's no garbage collector). It isn't necessarily the case that it isn't possible, but it does require some additional stack unwinding, and the implementation of that may be too cumbersome/impractical/undesirable to occur.

Not all resources are reference counts or memory allocations.

True; file descriptors are one such example. Again, the language has the ability to recover instead of blowing up when that occurs:

try (FileInputStream in = new FileInputStream("theFile")) {
  // do stuff with input stream
} // input stream is closed

Compilers and tools can generate warnings for such cases, in the same way that other languages can warn about not freeing memory.

2. People abuse RuntimeException to simplify their error handling code: “if I throw a RuntimeException I don’t need a zillion catch clauses or throws declarations”. Furthermore, if a library uses RuntimeExceptions where it should be using Exceptions there is no way to know if its API has changed except by reading the (hopefully up to date) documentation.

Problem 2 makes me particularly bitter because JEE programmers seem to have learned that allowing code to throw null pointer exceptions has no real consequences for them, so they become very cavalier about doing their null checks. The user sees a 500 error page, the sysadmin gets a 200-line stack trace in the log, but the system carries on. If you are lucky enough to have the source code to diagnose the problem, it usually turns out that the exception was thrown on a line with eight chained method calls. When you do track the problem down, it turns out you forgot a line in the properties file or something similar, but the programmer couldn’t be bothered to help you out because it was easier just to let the null pointer exception happen.

That's a pretty poor example, and in any case, isolating the individual failure allows the system to continue processing subsequent requests, which is generally what's wanted. When you're working on large systems and with large data sets, there are almost always problematic items like this, which have to be diagnosed sufficiently to be retried, or even handled manually. It has very little to do with the language and more to do with the quality of the data, which isn't something you always have control over.

No it’s not a poor example. I’ve seen it happen with real software in real production scenarios.

And if we're talking anecdata, I've seen large production systems where a particular message couldn't be processed due to some incorrect data formatting (in some cases, an invalid sequence of bytes being passed in and treated as UTF-8). The general approach is not to crash and hand the message back to be processed again later, but to set it aside in a manual-resolution queue and keep going.

I like Swift’s error handling because programming errors (like force unwrapping nil) are punished mercilessly by process termination and errors caused by external factors cannot be completely ignored. You have to at least put an empty catch block somewhere.

This is one of the significant problems in Swift at the moment for server-side logic. It may make sense to do this in a single-user application, but a server designed to handle hundreds or thousands of simultaneous clients suffers from the fact that one bad client request can take out everyone's connection. In the server working group and in existing tools like Kitura/Vapor/Perfect etc. it's a non-trivial problem to solve, other than using a CGI-like model where each request is handled by a single worker process that can terminate independently of the other requests in flight.

I agree it’s a non-trivial problem to resolve. I’m saying that the Java “solution” of RuntimeExceptions doesn’t resolve it.

Except you've not said /why/ the Java 'solution' of RuntimeExceptions doesn't resolve it. You've just used a couple of examples to prove it's possible to write bad Java code; but you can write bad code in any language.

Alex

···

On 17 Jan 2017, at 11:46, Jeremy Pereira <jeremy.j.pereira@googlemail.com> wrote:

On 17 Jan 2017, at 11:28, Alex Blewitt <alblue@apple.com> wrote:

On 17 Jan 2017, at 11:10, Jeremy Pereira via swift-evolution <swift-evolution@swift.org> wrote:

On 17 Jan 2017, at 02:38, thislooksfun via swift-evolution <swift-evolution@swift.org> wrote:

Bringing it back towards the initial post, what if there was a
separation between true needs-to-take-down-the-entire-system trapping and
things like out-of-bounds and overflow errors, which could stop at
thread/actor boundaries (or in some cases even be recovered from)?

The latter were the ones I was targeting with my proposal. They live
in this grey area, because honestly, they should be throwing errors if
not for the performance overhead and usability issues.

I fundamentally disagree with that statement. There is value in
declaring certain program behaviors illegal, and in general for things
like out-of-bounds access and overflow no sensible recovery (where
“recovery” means something that would allow the program to continue
reliably) is possible.

I think we do fundamentally disagree. I know I come from a very different background (Human-Computer Interaction & Human Factors) than most people here, and I am kind of the odd man out, but I have never understood this viewpoint for anything but the most severe cases where the system itself is in danger of being compromised (certainly not for an index out of bounds). In my mind “fail fast” is great for iterating in development builds, but once you are deploying, the user’s needs should come ahead of the programmer’s.

Shouldn’t a system be as robust as possible and try to minimize the fallout from any failure point? I don’t consider crashing and losing all of the user’s data minimal. It used to be that something like dividing by zero took down the entire machine. Now we mimic that by crashing the application, even though it isn’t strictly necessary. Wouldn’t it be even better if we only took down the current operation, notified the user about what happened and continue on?

Swift does a great job of using forcing functions (like optionals) to make some errors impossible, and this is literally the opposite of that. This requires the programmer to remember to add a check that the number is within certain bounds, but there is no reminder for them to do that. The failure is silent (i.e. there isn’t a ‘!’ or ’try' to mark that it is a possibility), at runtime, under certain conditions and not others. It is a recipe for bugs which cause a crash for the user.

If we wanted fool-proof arrays, then the subscript would return an optional, forcing the programmer to think about and deal with the possibility of out-of-bounds failure (that is how we handle dictionary lookups after all). If I remember correctly, we don’t do that because of performance. Instead we ask the programmer to remember to check against the array count first, just like we used to ask the programmer to remember to check for nil pointers in C++.
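The often-mentioned extension is only a few lines; the `safe` label here is an arbitrary choice:

extension Collection {
  // Returns nil instead of trapping when the index is out of bounds.
  subscript(safe index: Index) -> Element? {
    return indices.contains(index) ? self[index] : nil
  }
}

let xs = [10, 20, 30]
let a = xs[safe: 1]   // Optional(20)
let b = xs[safe: 5]   // nil, where xs[5] would trap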

The idea that a programmer can or should be perfect is a lovely fantasy… My CS teachers always talked about how it would encourage bad programming if there wasn’t punishment for making these types of mistakes.

I don’t see the value in punishing the user for the programmer’s mistake, and I have rarely seen a case where sensible recovery wouldn’t be possible (given Swift’s error handling). In most real-world applications, you would just end up cancelling whatever operation was happening, reverting to the state before it, and notifying the user of the problem. The programming languages which get into trouble are the ones which treat everything as valid, so they just keep barreling on, overwriting data or something like that. We don’t have that problem though, since we have sensible error handling that can be used to fail an operation and get things back to a previous state (or better yet, avoiding overwriting the state until after the operation has succeeded). We should aim for robustness, and crashing isn’t robust.

Please note: I am not saying we allow memory access out of bounds. We are still triggering an error state when these things happen (either returning nil or throwing an error), we’re just not crashing the entire program because of it.

···

On Jan 17, 2017, at 7:13 PM, Dave Abrahams <dabrahams@apple.com> wrote:
on Tue Jan 17 2017, Jonathan Hull <jhull-AT-gbis.com> wrote:

My solution was to give the compiler a way to know that this was the
desired behavior and optimize the throwing away unless it was
explicitly requested.

I guess another option would be to introduce a new concept for this
grey type of error. Maybe instead of ‘fatalError’ you have something
with a different name saying “this should only take down the current
actor”… and then you add a well defined process for cleanup.

I would still really like to see the ability to turn this type of
thing into normal throwing error handling, so maybe something like
‘fatalThrow’ which takes the same information as ‘throw’, so that it
can be converted to a standard throw by the caller, but otherwise
traps and takes down the actor. That would make certain types of
algorithms much simpler for me.
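Without compiler support, the closest approximation expressible today is just a trap that carries the error value; the part that would need language help is letting a caller opt back into normal `throw` semantics. A sketch, using Jonathan's hypothetical `fatalThrow` name:

func fatalThrow(_ error: Error) -> Never {
  // Today this can only embed the error in the trap message; converting it
  // back into an ordinary thrown error at the call site is the missing piece.
  fatalError("Unrecoverable error: \(error)")
}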

Thanks,
Jon

