There's a spectrum, depending on how conservative you want to be and how easy you want implementation to be. The most conservative strategy encourages “precise try” (by which I mean placing try as close as possible to the actual error-propagation point) in any code that does mutation. That would address part of the problem, making it convenient to use a try {} block in the code that uses lots of closures, but it would still discourage one in the code that [de-]serializes.
[Note that many strategies are going to encourage precise try in code that operates on class instances, since those APIs can mutate without anything declared in the type system.]
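To make the distinction concrete, here's a minimal sketch in today's Swift (the names and error type are mine): precise try marks each expression that can propagate, so a reader sees every early-exit point. In closure-heavy code you end up writing try over and over, which is exactly what a block form would relieve.

```swift
enum ParseError: Error { case badField }

func parseInt(_ s: String) throws -> Int {
    guard let n = Int(s) else { throw ParseError.badField }
    return n
}

// Precise try: each potentially-throwing expression is marked
// individually, at the exact point an error can propagate.
func parsePair(_ fields: [String]) throws -> (Int, Int) {
    guard fields.count == 2 else { throw ParseError.badField }
    let a = try parseInt(fields[0])   // propagation point 1
    let b = try parseInt(fields[1])   // propagation point 2
    return (a, b)
}
```

A try {} block covering both calls would be more convenient to write, but it hides which of the two calls can actually exit early.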
But there are ways to be less conservative, reducing the number of false-positive warnings, that would still be effective. For example, you can limit the encouragement of precise try:
- to those places where throwing can happen between mutations (a throw that happens before or after all mutations is harmless to invariants). That would add no risk but is a bit harder to implement.
- to those places that use non-public mutating API (adds a little risk; types that aren't well-encapsulated may expose invariant-breaking API publicly). Interpreted the right way, this covers [de-]serialization code.
- to those places that use mutating API that's less accessible than the type(s) it operates on. Adds a bit more risk than the last bullet, I suppose.
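To illustrate the first bullet with a hypothetical type of my own: a throw that lands between two mutations can strand the value with its invariant broken, whereas a throw before the first mutation (or after the last) cannot.

```swift
struct Interval {
    var lo: Int
    var hi: Int   // invariant: lo <= hi

    enum WidthError: Error { case tooWide }

    // The throwing check sits *between* the two mutations, so a throw
    // here leaves lo already moved but hi untouched: invariant broken.
    mutating func widen(by delta: Int, maxWidth: Int) throws {
        lo -= delta
        if (hi + delta) - lo > maxWidth { throw WidthError.tooWide }
        hi += delta
    }
}
```

If the check were hoisted above the first mutation, the same throw would be harmless to the invariant, which is why the warning only needs to fire for throws that can interleave with mutations.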
[later: in this post I added some rules that actually eliminate nearly all of the risk: forbid catch and defer inside these try blocks]
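A small illustration of why that rule helps (again a made-up type of mine, shown with an ordinary defer in current Swift): a defer or catch placed between the mutations runs on the error path, so it can observe the partially-mutated state. Forbidding them inside the block means nothing can look at the value until the error has propagated all the way out.

```swift
enum E: Error { case boom }

struct Pair {
    var lo = 0
    var hi = 1   // invariant: lo <= hi

    // A defer registered between the two mutations runs even when the
    // throw exits early, so it observes the broken invariant.
    mutating func bump(shouldThrow: Bool, observed: inout [Int]) throws {
        lo += 5                          // invariant temporarily broken
        defer { observed.append(lo) }    // runs on the error path too
        if shouldThrow { throw E.boom }
        hi += 5                          // invariant restored
    }
}
```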