That base is already covered by "throws." The language prevents exceptions from being ignored by requiring "throws" on throwing functions. Either a function catches all the errors that may be thrown into its body, or it has to declare that it throws. You don't need try for that.
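To make that concrete, here's a minimal sketch (parse, readValue, and friends are hypothetical names): the compiler already forces every caller either to handle the error or to declare throws itself.

    struct ParseError: Error {}

    func parse(_ text: String) throws -> Int {
        guard let value = Int(text) else { throw ParseError() }
        return value
    }

    // Option 1: declare `throws` and let the error propagate.
    func readValue(from text: String) throws -> Int {
        return try parse(text)
    }

    // Option 2: catch everything that can be thrown into the body.
    func readValueOrZero(from text: String) -> Int {
        do { return try parse(text) } catch { return 0 }
    }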
Overcompensating by forcing try into so many places really shows that a point I've been making has been missed: the exact control flow simply doesn't matter in a great many cases. If there's no mutation, or if the mutation is only of local variables whose lifetime will end when the function throws, or if the mutation is only of local variables of some caller whose lifetime will end when it throws, or… the list goes on (see my original posting). In fact, the idea that you "can see the control flow" is an illusion in all these cases: what you think you're nailing down by making the control flow visible can easily be scrambled by an optimizer without observable effect on the program's meaning.
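For instance (a sketch reusing the hypothetical parse above): nothing observable is mutated before any of the throwing calls, so the exact point at which an error propagates out of this function cannot affect the program's meaning.

    func total(of items: [String]) throws -> Int {
        var sum = 0                  // local; its lifetime ends if we throw
        for item in items {
            sum += try parse(item)   // whichever iteration throws, no caller
        }                            // can observe the partial sum
        return sum
    }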
This failure of the C++/Java model is one of the reasons that the C++ exception-safe programming model is such a binary (all-or-nothing) thing…
I have no idea what you might mean by that, honestly.
…the back-pressure on "exception handling" logic is intentional, and is one of the things that is intended to help reduce the number of pervasively throwing methods in APIs.
By "back-pressure" I think you are referring to the idea that writing try is odious, and the theory that people will therefore try to avoid creating APIs that throw. The premise here is that the language should have this error-handling feature but that, somehow, at the same time, we need to make it painful because we want to discourage its use.
Well, I don't buy it as a language design strategy. First, it's punitive in a way that's inconsistent with the character of the rest of the language; thank goodness we haven't taken this kind of approach elsewhere, or Swift would be much less enjoyable to use. Second, I don't buy the idea that "oh, but the caller will have to try, so I'd better not" ever enters the thought process of an API designer deciding whether or how to report an error. The one thing that will come up is "the caller is almost sure to want to handle the failure right there, rather than reporting it up the chain," which is typical for things like the lowest-level networking operations, which the caller is likely to retry. But again, that disincentive is already provided by the fact that the caller will have to catch, which involves more ceremony than simply checking for nil or looking at a result enum. Last of all, the thought of one try is simply not painful enough to exert any significant "back-pressure." Remember, I'm not bringing this up because writing try is so horrible for the programmer, but because of what it does to the language, its source code base, and its community of users in aggregate, when it happens over and over in places where it can't make a difference.
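For a sense of the relative ceremony (the fetchOrThrow/fetchOrNil signatures, render, and retry are hypothetical):

    // Hypothetical signatures:
    //   func fetchOrThrow(_ url: URL) throws -> Page
    //   func fetchOrNil(_ url: URL) -> Page?

    // Throwing API: the caller must catch.
    do {
        render(try fetchOrThrow(url))
    } catch {
        retry(url)
    }

    // Optional-returning API: a plain nil check.
    if let page = fetchOrNil(url) {
        render(page)
    } else {
        retry(url)
    }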
Your characterization of marking being a historical artifact (whose "ship has sailed") isn't really fair IMO: many of us are very happy with it for the majority case, and believe that the original Swift 2.0 design decisions have worked out well in practice.
? I never said it was a historical artifact, and I don't understand how fairness comes into it. I do sincerely apologize if I've somehow offended, but when I say "that ship has sailed" I'm not saying anything about marking; I was talking about some of my earlier proposals. I'm merely saying it might have been viable to consider them once upon a time, but the language is too mature at this point to take such a significant turn.
I also personally believe that async marking is a promising (but unproven) direction to eliminate a wide range of deadlock conditions in concurrent programs when applied to the actors model. I'm not aware of any other model that achieves the same thing.
Interesting; I'd like to hear more about that in detail, if you don't mind. It does seem at odds with some of my understanding, though: AFAIK the proposers have not declared an intention to change actors from the unconditionally re-entrant model originally pitched, and IIUC that provably eliminates deadlocks.
You make a good point about the separation of API and implementation. I guess we have no other precedent for a choice like that, so it would be hard to justify.
That said, I agree that you're on to something here and I agree with you that async will exacerbate an existing issue. There are a couple of ways to address this. One is to reduce the keyword soup by introducing a keyword that combines try and await into one (similarly throws and async), but I am convinced that we should land the base async proposal and gain usage experience with it before layering on syntactic sugar.
I don't think that scales. What happens when we add an impure effect (or whatever the next effect dimension is)?
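A sketch of the scaling concern, using URLSession's async data(from:) method; the combined spellings in the comments are purely hypothetical:

    import Foundation

    func load(_ url: URL, with session: URLSession) async throws -> Data {
        // Today: two orthogonal markers at every such call site.
        let (data, _) = try await session.data(from: url)
        return data
    }

    // A hypothetical combined keyword covers exactly one combination:
    //   let (data, _) = tryAwait session.data(from: url)
    // Add a third effect dimension and the set of combined keywords
    // multiplies: tryAwait, tryX, awaitX, tryAwaitX, ...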
The other side of this is the existing point you're observing: we have existing use cases with try that are so unnecessarily verbose that they obfuscate the logic they contain. For my part, I look towards solutions that locally scope the "suppress try" behavior that you're seeking: while you pick some big cases where it appears to be aligned with functions, often this is aligned with regions of functions that are implemented in terms of throwing (and also, in the future, async) logic. That said, the whole scope of the declaration isn't necessarily implicated in this.
No, not necessarily, but that's been the problem with the design approach to try all along. Because there are occasional places where being alerted to the source of error propagation can be helpful, we've ignored the broad fact that in the vast majority of cases, it is irrelevant. And I maintain this is not good for programmers. If you look at what's happened with your rewritten encode example below, it gives the impression that it's somehow significant that no error can propagate from the first statement, but AFAICT there is no world in which that helps anyone think about the semantics of this function, any more than putting a non-throwing let inside the try do {...} block would make it worse. I respect your inclination to do something more tightly scoped and conservative, but I hope I've explained why I have the opposite inclination.
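For reference, the rewritten example's region-scoped shape is roughly this (a sketch of the general form; the try do spelling is one hypothetical way to scope it):

    func encode(to encoder: Encoder) throws {
        var output = encoder.unkeyedContainer()
        try do {
            output.encode(self.a)
            output.encode(self.b)
            output.encode(self.c)
        }
    }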
OK, I appreciate your willingness to consider the possibilities. But let's compare that with the code you'd write today:
func encode(to encoder: Encoder) throws {
    var output = encoder.unkeyedContainer()
    try output.encode(self.a)
    try output.encode(self.b)
    try output.encode(self.c)
}
I think you'll agree the extra level of nesting in your example is a significant syntactic cost, which makes it hard to argue that there's much improvement.
But if you'll allow me to run with your idea, I think there are two things we can do to improve it that will give us both what we want:
- Eliminate the need for do and allow try { ... } to mark an entire block as throwing:
func encode(to encoder: Encoder) throws
{
    var output = encoder.unkeyedContainer()
    try {
        output.encode(self.a)
        output.encode(self.b)
        output.encode(self.c)
    }
}
- Allow that at the top level of the function:
func encode(to encoder: Encoder) throws
try {
    var output = encoder.unkeyedContainer()
    output.encode(self.a)
    output.encode(self.b)
    output.encode(self.c)
}
Thanks for engaging,
Dave