you either get a result or an error and in the error case
Not all situations are formalized in the type system. Recently I wrote a function like UIView.layoutDifference(...) -> LayoutDifference, a pure function that studies the receiver's state to produce a layout, or more specifically the update between the present and the desired layout.
Layout systems have underspecified cases, such as views with no constraints, and specifying these is a matter of taste. We could throw, assert, return nil, return an empty difference, return a difference that moves the view to a zero frame, introduce some enum result type, etc. One popular layout engine thought it was a good idea to allow inconsistent frames and call these "ambiguous layout". Let's call these underspecified cases "silly situations" and gloss whatever way we decide to handle them as a "silly result".
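To make those options concrete, here is a minimal sketch of the enum-result style of handling. Every name in it (LayoutDifference, LayoutOutcome, underconstrained) is my own invention for illustration, not an API from this post.

```swift
import UIKit

// Hypothetical types, invented for illustration only.
struct LayoutDifference {
    var frameChanges: [UIView: CGRect]   // desired frame per affected view
    var isEmpty: Bool { frameChanges.isEmpty }
}

// The "enum result type" option: the silly situation becomes an explicit case,
// so the caller has to decide what an underconstrained view means to them.
enum LayoutOutcome {
    case difference(LayoutDifference)
    case underconstrained
}
```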
What happens is, due to some out-of-scope cause X, the function that makes constraints hit a silly situation and produced a silly result. Then our view has silly constraints and got a silly layout, two silly layouts produced a silly layout difference, the silly difference started a silly animation, and now I'm reading a bug report about that. So I have to navigate through this jungle all the way back to X.
Scarred by this experience, I resolve to make every silly result an assert, so it's easier to find the source of a problem. But it turns out that some callers create silly situations for sensible reasons and don't want to assert. So it's not that there's an objective answer that avoids difficulties of this kind.
Anyway, with this sort of debugging it is useful to have a fixed enumeration of keywords that can return silly results to callers, so that you can glance at a function and informally prove why it might have returned nil or whatever you got.
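A toy illustration of that "glance and informally prove" idea, with hypothetical names: if nil can only escape through a small set of marked spots, a quick scan of the body tells you which one the caller hit.

```swift
import UIKit

// Hypothetical helper: nil can only come from the marked guard below, so a
// caller who got nil knows the view had no constraints at all.
func firstConstraintPriority(of view: UIView) -> Float? {
    guard let constraint = view.constraints.first else {
        return nil   // the only source of nil in this function
    }
    return constraint.priority.rawValue
}
```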
If you frame that problem as a broken invariant, it gives you a lot of power, both to think about that code and to think about the rest of your program.
Nailing down which function in a jungle had its invariant violated can be a powerful method, but it's context-dependent. At some point, if there are too many riddles on Slack like "if an empty layout difference is applied in a forest, was the view ever really laid out?", management tells me to get back to work.
Do you mind sharing your problem domain with us?
Sure, although it may be off-topic. The situation I had in mind regarding performance (different from the layout example above) involves CPU/GPU code. For a given function in my library, there's a risk I may need to move it to the other processor, or to have it on both.
The obvious strategy is to cross-compile, but this has all the problems one normally has writing cross-platform code, and more, because the architectures are more different. And if it turns out I didn't actually need both versions, I've suffered a lot for nothing.
Alternatively: Swift can range from a very high-level modern idiom to a very low-level "maybe if I add semicolons, clang will compile it" portable idiom. I can slide between these idioms depending on how the porting risk feels at that moment, and sometimes this sliding makes the code fast enough without the need to port for real. And if I do need to port, I have a working function to start with. So this is often a practical approach.
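A toy contrast of the two idioms (my own example, not from this post): the same function in the high-level style and in the low-level portable style that translates almost token for token into C.

```swift
// High-level idiom: concise, but hides allocation and control flow.
func sumOfSquares(_ values: [Float]) -> Float {
    values.map { $0 * $0 }.reduce(0, +)
}

// Low-level "portable" idiom: a straight-line loop (semicolons included, in
// the spirit of the joke above) that maps directly onto C or a kernel.
func sumOfSquaresPortable(_ values: [Float]) -> Float {
    var total: Float = 0;
    var i = 0;
    while i < values.count {
        total += values[i] * values[i];
        i += 1;
    }
    return total;
}
```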
As far as nailing down particular branch counts, we use an abstract machine which is some blend of both platforms. It's not going to be fully accurate, but it encodes a useful idea of the worst case. Examples like && and || are indeed more expensive on the abstract machine than they are for Swift, so we really do watch out for them when writing Swift with this idiom.
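For concreteness, here is one way that concern might show up in code; the eagerAnd helper is my own sketch, not something from this post. On the abstract machine, && counts as a branch because it may skip its right-hand side, so evaluating both operands up front keeps the count flat.

```swift
// Short-circuiting form: `x < hi` only runs if `x >= lo`, which the abstract
// machine counts as a branch.
func inBoundsBranching(_ x: Int, _ lo: Int, _ hi: Int) -> Bool {
    x >= lo && x < hi
}

// Hypothetical eager conjunction: both comparisons are evaluated at the call
// site, so the only remaining work is combining two already-computed Bools.
func eagerAnd(_ lhs: Bool, _ rhs: Bool) -> Bool {
    lhs && rhs
}

func inBoundsEager(_ x: Int, _ lo: Int, _ hi: Int) -> Bool {
    eagerAnd(x >= lo, x < hi)
}
```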
Overflow is an interesting case, since it's cheap on CPU but prohibitive on GPU. Ultimately we write explicit overflow handling as if Swift did nothing, and then use Swift's behavior like a sanitizer. More rarely we turn on -Ounchecked.
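A minimal sketch of how I read that pattern (hypothetical function, standard library calls only): the overflow check is written out by hand, so the code ports to a platform with no trapping arithmetic, while Swift's trapping + still catches any path the check misses during testing.

```swift
// Explicit handling, written as if the language gave us nothing.
func saturatingAdd(_ a: Int32, _ b: Int32) -> Int32 {
    let (sum, overflowed) = a.addingReportingOverflow(b)
    if overflowed {
        return b > 0 ? Int32.max : Int32.min   // clamp instead of trapping
    }
    return sum
}
// Any remaining plain `+` on untrusted values still traps in checked builds,
// which is the "sanitizer" role; -Ounchecked removes even that check.
```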
Other Swift features that might hide control flow are generally not portable, so we avoid them for this idiom. Occasionally, though, we port them; I actually found this thread because I am porting try.
The distillation of all this for try is that it's expensive for the abstract machine and we want to easily count it. I can totally see why this is a weird concern in the context of app code, though.
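To spell out why try is the thing we count (standard Swift semantics; the example types are mine): every try call site is a potential early exit to the propagation path, so each one adds a branch on the abstract machine.

```swift
import Foundation

struct Model: Decodable {
    var id: Int
}

func decodeAll(_ blobs: [Data]) throws -> [Model] {
    var result: [Model] = []
    for blob in blobs {
        // One `try` per iteration = one potential early exit per iteration.
        result.append(try JSONDecoder().decode(Model.self, from: blob))
    }
    return result
}
```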
I'm very inclined toward static safety and don't believe potential crashes or traps should ever be hidden behind implicit syntax. That's not what we're talking about here, though;
Sorry, I used a loaded analogy. A better one is the requirement to write self in a closure: it's annoying, but it also prevents bugs. Like we did with self, there is probably some way to relax the rules on writing try. But I think we need advocates for both the 'annoyed' and the 'useful' positions on try to find a good balance.
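For readers outside the Swift world, here is the self analogy in code (standard Swift behavior; the example class is mine): inside an escaping closure, member access must spell out self, which makes the capture, and any accidental retain cycle, visible at the point of use.

```swift
import Dispatch

final class Counter {
    var value = 0

    func incrementLater(on queue: DispatchQueue) {
        queue.async {
            // Writing `value += 1` alone would not compile here; the explicit
            // `self` is the annoying-but-bug-preventing requirement above.
            self.value += 1
        }
    }
}
```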
do try seems like a good balance; it is not disruptive for the cases I described. At the same time, I don't know whether it really addresses the burden of try, simply because I don't feel that burden very acutely.