FWIW, in cases where the ‘self is gone’ situation really Should Never Happen, I’ll usually combine `[weak self]` with an appropriate `assertionFailure`, which IMO is the best of both worlds to some extent: highly visible and easy to debug when testing, but not catastrophic if it slips through to production.
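To make that concrete, a minimal sketch of the combination (the `SessionManager` type, the delay, and the message are all invented for illustration):

```swift
import Foundation

final class SessionManager {
    var onExpiry: (() -> Void)?

    func scheduleExpiryCheck() {
        DispatchQueue.main.asyncAfter(deadline: .now() + 60) { [weak self] in
            guard let self = self else {
                // "Should Never Happen": loud under test, harmless in release builds.
                assertionFailure("SessionManager deallocated before its expiry check ran")
                return
            }
            self.onExpiry?()
        }
    }
}
```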
I still see the proposal as valuable. For example:
```swift
import Combine
import Foundation

class Model {
    func printDate(_ date: Date) {
        print(date)
    }

    var sub: AnyCancellable?

    init() {
        sub = Timer
            .publish(every: 1, on: RunLoop.main, in: .default)
            .autoconnect()
            .sink { [weak self] in
                self?.printDate($0)
            }
    }
}
```
Would it be safe to change that to `unowned`?
Or in cases such as this:
```swift
import Combine

class MyViewController {
    func bind(_ publisher: AnyPublisher<Int, Never>) -> AnyCancellable? {
        return publisher
            .sink(receiveValue: { [weak self] in
                self?.displayValue($0)
            })
    }

    func displayValue(_ value: Int) {
        // Update the view with the new value.
    }
}
```
If the view was dismissed, there's no reason to perform any display operation on it.
Also, as the proposal states, the closure won't be called at all, so maybe it gives the compiler an additional opportunity to optimize away all of those `weak self` dance closures that immediately return.
Examples taken, for convenience, from Using self, weak, and unowned in Combine.
Not to be a wet blanket, but I have to give this a -1. I get what the proposer is getting at, and I get why a lot of people like this in theory, but it feels like one of many examples of features that aren’t really helpful to programming.
We as programmers often tend towards code that looks clean. We try to abstract things away to make a bit of code appear simple, especially during and after writing said code. But that doesn’t necessarily make the code better. Optimising for writability or perceived cleanliness is often a mistake, as it comes at the expense of learnability, readability, and debuggability.
I understand the appeal. `guard let self = self else {}` is boilerplate, but it is good boilerplate. As others have said, the important part is what happens in the `else`. Even in simple cases, you may want more than just a simple `return`. It’s also far easier to debug if things are going wrong. There’s also the point about implicit `self.`, which appeals to those who use that feature, though I’d argue implicit `self.` is the poster child for the “optimising for writability & perceived cleanliness over learnability, readability, and debuggability” issue (as anyone who has tried to learn about some Swift code without syntax highlighting can attest!)
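For instance, here's a rough sketch of an `else` branch that does more than a bare `return` (all the types and names below are invented): the caller still gets an answer even when the owner is gone.

```swift
import Foundation

struct Item {}
struct LoadCancelled: Error {}

final class ItemListViewModel {
    private(set) var items: [Item] = []

    /// Hypothetical async fetch; stands in for a real network call.
    private func fetchItems(completion: @escaping (Result<[Item], Error>) -> Void) {
        DispatchQueue.global().async {
            completion(.success([Item(), Item()]))
        }
    }

    func reload(completion: @escaping (Result<[Item], Error>) -> Void) {
        fetchItems { [weak self] result in
            guard let self = self else {
                // The screen is gone; answer the caller instead of a bare `return`.
                completion(.failure(LoadCancelled()))
                return
            }
            self.items = (try? result.get()) ?? []
            completion(result)
        }
    }
}
```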
The intention is good, but I think this falls foul of too many genuinely important things in favour of some superficially important things. It makes things easier in the short term, but does it help long-term maintainability? I’d argue no. Eliminating boilerplate that has no purpose and is used all over is good (see when `@property` syntax was added to Obj-C). Eliminating boilerplate that is arguably there to make you think, and only in a narrow set of cases at that (I’d argue this has to support all return types to be considered), isn’t really worth adding syntax to the language.
Your self in the here and now may hate it, but your future self will likely thank you that it’s not there.
I'm mostly a lurker around here, but this one is something that I've been giving a lot of thought to, so I thought I might actually weigh in.
We can talk ourselves in circles all day about the philosophical implications of optionals — and people regularly do — but I think there will always be a little bit of a divide between the people who view the inconvenience of optionals as a barrier to clear code and the people who see that inconvenience as a significant language feature that forces developers to explicitly account for edge cases. @jrose makes some good points about why this pattern is often the beginning of other troubles, and did a perfect job of characterizing why I often think the semantics of something like `unowned` are underused in these contexts, but addressing those concerns is really above my proverbial paygrade. So I'll focus on the hypothetical of this feature existing.
I think the more important thing here is the question of how this will impact debugging, especially for the beginning Swift developer. In my observation, `[weak self]` is already a knee-jerk default choice for many developers starting out, even when a closure has no chance of causing a retain cycle (something that can cause interesting bugs if an object has to register that a certain piece of work was finished, for example), because the safety is reassuring. In that regard, `[guard self]` would become a habitual behavior for many Swift users virtually overnight, and unfortunately, as currently proposed, it seems to me like this could create a very beginner-hostile situation during debugging.
Probably the most common reason you'd lose `self` in a closure like this is some kind of race. This means that for most beginning developers, the first time they're not seeing their closure work correctly may be one of the most difficult bugs they've ever had to troubleshoot. Add on top of that the fact that it would not really be possible to breakpoint inside the closure and see that this implicit `guard` statement is the source of one's problem.
Maybe the solution to this situation is not necessarily a silent failure from a `guard`, but an assertion? In development settings, this would give you very clear visibility into potential issues as they crop up, and it would indeed be incredibly difficult to ignore that there's now a near-invisible bug in the app, but in production, users wouldn't see the issue. Perhaps an `[assert self]` or `[expect self]` might be a way to preserve this functionality while still retaining some kind of visibility into the mechanics of what's going on?
This is an interesting direction. It seems reasonable to have a capture specifier that behaves in this way:
```swift
button.tapHandler = { [assert weak self] in
    // ...
}

// is desugared into:

button.tapHandler = { [weak self] in
    guard let self = self else {
        assertionFailure("Weakly captured variable 'self' no longer exists.")
        return
    }
    // ...
}
```
I'm also coming around to @jrose's argument that we should consider including `weak` in any spelling for this feature (since `guard` or `assert` on their own don't necessarily imply `weak`, unless you've memorized this).
Some related precedent is how `unowned` has `unowned(safe)` (the default) and `unowned(unsafe)` (here be dragons!) variants -- so strictly following this precedent would give us `weak(assert)`. But I almost feel like `unowned(unsafe)` is trying to be visually abrasive to dissuade folks from using it.
Any thoughts on `[assert weak self]`?
Alternatively, would `[guard weak self]` be a suitable way to spell a capture specifier that behaved in this way (a `guard` with an `assertionFailure`)?
IMO, hiding an `assertionFailure` behind a `guard` keyword is unintuitive. I like `assert weak` or `weak(assert)` better.
Thinking about this more, it seems like having `[assert weak self]` would almost certainly improve safety and correctness over the current status quo.
The status quo today, for many people, is to add a `guard let self = self else { return }` or use `self?.`. Both of these widely prominent defaults silently ignore cases where `self` no longer exists. This is often ok, but it is not always ok -- so silently ignoring this case can be harmful.
If `[assert weak self]` existed, I suspect it would become the status quo default for `Void` closures, since it has better ergonomics (it avoids some boilerplate and enables implicit self). This would mean the status quo in this case would no longer be silently ignoring potential failures, which would be unambiguously better with respect to correctness.
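To illustrate the "often ok, but not always ok" point, a hedged sketch (the `Uploader` type is invented) where the silent `return` drops a completion handler on the floor, leaving the caller waiting forever:

```swift
import Foundation

final class Uploader {
    private let queue = DispatchQueue(label: "upload")

    func upload(completion: @escaping (Bool) -> Void) {
        queue.async { [weak self] in
            guard let self = self else { return } // silently drops `completion`
            self.performUpload()
            completion(true) // never reached if the Uploader died first
        }
    }

    private func performUpload() { /* ... */ }
}
```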
Other than differences in the diagnostics (which, granted, is no small thing), what is the difference between this and `[unowned self]`, without the flexibility of choosing where to assert?
"Doesn't crash in production" is a pretty important consideration -- this is the main reason why many folks in the app development community prefer avoiding `unowned self`.
What do you mean?
Nice proposal, +1 for the boilerplate cleanup. I agree, code is read more often than it is written, and removing boilerplate makes it more readable (for me).
What about:
```swift
{ [with self, with button] in
    ...
}
```
?
This is real cool @cal and would certainly make it more convenient to write safe Swift code in a very common use case.
Given that the capture is weak, I feel like `[guard weak self]` is more clear than `[guard self]`. I also wonder if the "chaining" approach (i.e. adding `guard` to an existing `weak` capture) is more generalizable to other use cases throughout the language.
It’s been a hot second, but isn’t it the case that a runtime trap occurs at the first attempt to use `unowned self`, such that anything before that in the closure is still executed? By contrast, your proposed assert would always occur at the start of the closure?

"Doesn't crash in production" is a pretty important consideration -- this is the main reason why many folks in the app development community prefer avoiding
unowned self
.
I have to agree with others that this is an anti-goal, and a commonly raised one.
I do think sometimes when I and others push back on it by reminding folks that crashing is safe, we make it seem like we want our apps to crash left and right for any trivial scenario. Of course not. It makes lots of sense not to want to crash!
But here’s the trade-off that Swift offers—and consistently so—that guides the user to write tangibly better code: If you don’t want to crash, then you state explicitly the alternative instead of crashing. Either handle the unhappy path yourself or don’t, the choice is yours; the language rules even help out by making it so that you’ll (usually?) have to choose explicitly not to handle the unhappy path rather than just forgetting. The guarantee (and it’s a feature, not a bug) is that, if you don’t, it’ll be handled for you by halting execution.
So, yes, while it’s a big deal and a positive thing for an app not to crash in production, you’ve elided the rest of the motivation here, which is to not crash and also not have to spell out the alternative instead of crashing (i.e., `return`). But that’s just not Swift’s deal. Nor, I’d venture to say, should it be.
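To make that bargain concrete, a small sketch using only standard-library behavior (the dictionary is illustrative): you either handle the unhappy path, pick a fallback explicitly, or explicitly accept the trap.

```swift
let cache: [String: Int] = ["answer": 42]

// Handle the unhappy path yourself:
if let value = cache["answer"] {
    print(value)
} else {
    print("no cached value")
}

// Or explicitly choose a fallback:
let fallback = cache["missing"] ?? 0
print(fallback)

// Or explicitly choose *not* to handle it, and accept the trap:
let trusted = cache["answer"]! // crashes if the key is absent
print(trusted)
```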
well said.
being somewhere between -1 and +1/2 on the feature itself, i'd risk assuming that the proponents of the feature just consider the short form notation `[guard weak self]` to be equivalent to some full form which could look like `[guard weak self else return]` or something similar, in the case of returning nil or a default value. so it could be argued that they are still specifying the unhappy-path behaviour, just using a shortcut notation to spell it out. similar to how we can write "expression" instead of "return expression", or $0 instead of an explicit parameter name in closures.
i wrote above about a related but slightly different possibility: extend the "guard let self = self else { ... }" syntax to allow implicit "self" usage afterwards. is it worth exploring?

If one uses escaping closures mainly as an alternative to delegates in UI code (as event handlers) then the majority of the closures require weak captures to break reference cycles.
yep
I am against this pitch (reasons at the end of this post), but I don't understand the claim that `[weak self]` is an anti-pattern: I'd like to see better arguments for it; otherwise I could simply reverse it by saying that the claim "`[weak self]` is an anti-pattern" is itself an anti-pattern.
I'm against the pitch because I don't think this should be sugared, but I'll explore the matter a little bit more.
Every time you have an escaping closure strongly owned by `self` where `self` is referenced, you have 3 options to manage the closure's lifecycle:
- allow for a retain cycle, and manually dispose of the closure at a certain point in time;
- let the closure's lifecycle depend on `self`'s lifecycle, and break the retain cycle with `[weak self]`;
- do the same thing, but with `[unowned self]`.
From a semantic standpoint, using `[unowned self]` is identical to using `[weak self]` while also force unwrapping `self!` in the closure each time it's referenced. Also, `[unowned self]` will remove the requirement to explicitly write `self` in an escaping closure, hiding the potential crash points. Because force unwrapping makes the risk clear at the call site, I'd argue that one should never use `[unowned self]` unless there are specific reasons for it that are unrelated to semantics (performance?). I'll repeat it: in order to have clearer code, where the risk is clearly spelled out at the call site (one of the cornerstones of Swift design), one should always use `[weak self]` and `self!`, unless there are reasons unrelated to semantics. In other words, `[unowned self]` is the worst possible default - in fact, I think it should be used only in very specific cases, where performance matters - and I've seen my fair share of Swift projects where inexperienced developers read "`[weak self]` is an anti-pattern" in some blog post or forum thread, and profoundly damaged their codebases. From the user's standpoint, even retain cycles are a better default, because they are unlikely to cause problems for them (they're still a problem, of course).
But then again, because an app should not crash in production, the use of force unwrapping should be limited to cases where the compiler is unable to understand and predict a behavior that the developer is 100% sure about. For example, if there were no `if/guard let` binding, then this code would be perfectly fine:
```swift
let list = [1, 2, 3]
guard list.first != nil else { return }
let firstElement = list.first!
```
But then, consider the following code:
```swift
final class Foo {
    var bar: () -> Void = {}

    init() {
        bar = { [weak self] in
            self!.baz()
        }
    }

    func execute() {
        bar()
    }

    private func baz() {
        print("all good")
    }
}
```
In writing `[weak self]` and `self!.baz()`, the behavior that the compiler doesn't understand, but that the developer does, is the fact that the closure assigned to `bar` will never outlive `self`, or in other words, that the closure is `escaping` but will not "escape" `Foo`: but will it, though? Or rather, can the developer truly guarantee this? That code certainly doesn't, because `bar` is `internal`, and can be referenced from outside `Foo`:
```swift
let foo = Foo()
let escaped = foo.bar // if `foo` dies, but `escaped` lives and is called, the app will crash
```
We could make the `bar` property `private`, so it can't be referenced from outside, but this will not guarantee that the closure will not escape `Foo`, because it could be returned from a function:
```swift
final class Foo {
    private var bar: () -> Void = {}

    ...

    func execute() -> () -> Void {
        bar()
        return bar // the closure escapes Foo
    }

    ...
}
```
If you're thinking "I'm too good a software developer to make this mistake", I can tell you that you're probably wrong: the reason why this code is fine
```swift
let list = [1, 2, 3]
guard list.first != nil else { return }
let firstElement = list.first!
```
is that the condition that guarantees a sound force unwrap is on the immediately preceding line, in fully synchronous code. The moment control jumps around and the context becomes asynchronous, our mental capacity becomes insufficient, and that's where the compiler is supposed to help (of course I'm assuming that a Swift developer will appreciate the idea of leveraging a static type system to guarantee correctness of code while working on a codebase that's supposed to be scalable and maintainable).
A hacky option to avoid basic escaping of the `bar` closure could be this:
```swift
final class Foo {
    private struct Token {}
    private var bar: (Token) -> Void = { _ in }

    init() {
        bar = { [weak self] _ in
            self!.baz()
        }
    }

    func execute() -> () -> Void {
        bar(.init())
        return bar // this will not compile, and requires more elaborate code to actually make the closure escape Foo
    }

    private func baz() {
        print("all good")
    }
}
```
So the problem seems to be twofold:
- the Swift language is not expressive enough to express the intent that the `bar` closure should not escape `Foo`, but...
- ...the case is not simple enough for a developer to be reasonably sure that the code is correct and will always be correct, indefinitely.
The actual solution, in my opinion, would be to equip the language with more kinds of ownership, in order to express, at the language level, this kind of need.
For example, if the language had a `uniquelyOwned` ownership declaration, which is like `strong` but where the corresponding instance can't be moved around and only exists for the duration of its owner, we wouldn't need any capture list, nor to break the retain cycle:
```swift
final class Foo {
    private uniquelyOwned var bar: () -> Void = {}

    init() {
        bar = {
            // no need for weak or unowned
            // no retain cycle is created
            baz()
        }
    }

    func execute() -> () -> Void {
        bar()
        return bar // won't compile because the closure would escape Foo
    }

    private func baz() {
        print("all good")
    }
}
```
But right now we don't have anything like this in Swift, so we need to find a good solution in order to work with what we have.
Consider the following case:
- we have an escaping closure and we need to break a retain cycle (actually need to; I frequently find cases where it's not really needed);
- we assume that using `unowned` in this case is a bad idea (for the reasons I mentioned above);
- we care about the fact that `self` should not be `nil` when the closure is called (which maybe isn't always the case; sometimes we can actually ignore `nil` and move on, without considering it a relevant error);
well, in this case I'd argue that clearly spelling out `guard let self = self else { ... }` and clearly putting something meaningful in the `else` branch is the best course of action, even if we crash in the `else` branch, possibly with a meaningful message that gives some context.
I personally very strongly think that one should never crash in production for the sole purpose of analytics: it's always bad for the user. If the question is "given that we're crashing, should we crash earlier or later?", then I would argue that "earlier" is better, but this is a different question from "should we crash or not?", to which I would simply answer "we should not" (if possible) in almost all cases, and certainly in all cases related to business logic, memory ownership, et cetera. But this is another matter: what matters is that the `[weak self]` -> `guard let self` "dance" is explicit about what is happening, and requires handling of the `else` case: even if one just `return`s, at least it is clearly spelled out in code, and could be spotted as a red flag by some other developer, during a code review for example.
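For example, a rough sketch (the types and message are invented) of an `else` branch that crashes early, but with context rather than silently:

```swift
final class PaymentGateway {
    var onConfirmation: (() -> Void)?
}

final class CheckoutFlow {
    var onPaymentConfirmed: (() -> Void)?

    func bindConfirmation(to gateway: PaymentGateway) {
        gateway.onConfirmation = { [weak self] in
            guard let self = self else {
                // Crash early, with a meaningful message, instead of a bare `return`:
                preconditionFailure("CheckoutFlow deallocated before payment confirmation arrived")
            }
            self.onPaymentConfirmed?()
        }
    }
}
```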

I don't understand the claim that `[weak self]` is an anti-pattern: I'd like to have better arguments for this, otherwise I could simply reverse it by saying that the claim "`[weak self]` is an anti-pattern" is an anti-pattern.
From Wikipedia:
An anti-pattern is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive.
The first recurring problem is that developers don't understand the lifetime of objects in their program, leading to situations where 2 objects depend on each other, or a retain cycle. The common response is to break the superficial dependency (by downgrading a strong reference to a weak/unowned one), without considering the logical dependency - what happens to your application state if this object is no longer alive to respond to the event you were notifying it of?
Unowned references will fail early and loudly in such circumstances. That's what I personally prefer, because the alternative is that my application enters an unspecified state. Perhaps some energy-intensive operation doesn't know that it should stop, and drains the user's battery or cellular data allowance, or a payment dialog seems not to respond and the user taps a few more times out of frustration and is double-billed. In my opinion, those things are worse than an application crash -- and they might happen; I don't know! My application entered an unspecified state. All bets are off at that point.
The second recurring problem is that developers consider crashing to be worse than their applications entering an unspecified state. Crashes are more obvious, seem to lead more easily to poor reviews, and are something that non-technical business people can judge the developer's performance on. Everyone understands what a crash is, and why it can leave customers dissatisfied. If you think this isn't an issue, try looking around the r/swift or r/iOSDevelopment subreddits and count how many developers you see panicking because their App has a bug, who completely lack any confidence in their own ability, and think they are being judged by their bosses and will lose their job within days if they don't solve the issue.
Given the combination of these two recurring problems, the common response is to reach for weak references by default. They break the superficial dependency, meaning no retain cycles, but also don't crash, meaning no hard metrics to throw at the developer. It seems like weak references allow you to skip reasoning about object lifetimes without the clear and obvious crash which occurs when you reach a logical inconsistency. It is usually ineffective because it doesn't solve the root problem - that the developer doesn't understand their program, and it can be highly counterproductive because it often hides issues which the developer really should be addressing.
All of this is why I think SwiftUI's approach is strictly better. It makes application state the focus, you describe how that state is presented, and it automatically ensures that what appears on screen always matches that state. It makes it a lot harder for those logical inconsistencies to arise in the first place.
Of course, this isn't the only use of weak references. They are not inherently bad, and certainly do have their uses. But the idea that you should reach for weak references by default when developing mobile applications is, I'm afraid, an anti-pattern.

From Wikipedia:
My response is appropriate given the definition of an anti-pattern: claiming that `[weak self]` is an anti-pattern is a common response to these discussions, and it's usually ineffective and counterproductive.

what happens to your application state if this object is no longer alive to respond to the event you were notifying it of?
The answer to this depends on the context, and we have several tools to generate metrics when an unexpected thing happens: in terms of user experience, crashing is the worst possible one.

Unowned references will fail early and loudly in such circumstances. That's what I personally prefer, because the alternative is that my application enters an unspecified state.
This is simply not true: it's not the case at all that the application will enter an "unspecified state" in the `else` branch of a `guard let self = self`; it depends on what you do in that branch.

The second recurring problem is that developers consider crashing to be worse than their applications entering an unspecified state. Crashes are more obvious, seem to lead more easily to poor reviews, and are something that non-technical business people can judge the developer's performance on. Everyone understands what a crash is, and why it can leave customers dissatisfied. If you think this isn't an issue, try looking around the r/swift or r/iOSDevelopment subreddits and count how many developers you see panicking because their App has a bug, who completely lack any confidence in their own ability, and think they are being judged by their bosses and will lose their job within days if they don't solve the issue.
Developers are supposed to produce software that users find useful, and I don't see any user thanking a developer because they didn't allow them to use an app in an unspecified state: again, there's a lot of stuff that one can do instead of just giving up, in fact crashing is not a software development matter, but a user experience matter. You always have alternatives, and can always offer the user a better experience: your ability to do that (in terms of actual knowledge) or available time/effort to do that are completely different matters, and the latter, in my opinion, doesn't really have a place in a technical discussion about a language feature.

Given the combination of these two recurring problems, the common response is to reach for weak references by default. They break the superficial dependency, meaning no retain cycles, but also don't crash, meaning no hard metrics to throw at the developer.
The "hard" in "hard metrics" is more on the user than on the developer: you can 100% have extremely clear and informative metrics about the behavior of your code, without the need to crash, while also failing gracefully for your user.

It seems like weak references allow you to skip reasoning about object lifetimes without the clear and obvious crash which occurs when you reach a logical inconsistency.
This is certainly wrong: if an inexperienced developer gets this feeling, they should certainly be taught to reason about their code a little more.

It is usually ineffective because it doesn't solve the root problem - that the developer doesn't understand their program, and it can be highly counterproductive because it often hides issues which the developer really should be addressing.
I agree with this, but it has nothing to do with the need to crash: forcing the code into a path because the compiler is not expressive enough to understand that it's the only possible path should be done only when the developer is almost 100% certain that that is the case, but asynchronous callbacks and complex control flow are usually not things that are clearly understandable (and they certainly don't scale well), so the problem shifts to "how can I give a good experience to the user given the limitations of the tools at my disposal?"

But the idea that you should reach for weak references by default when developing mobile applications is, I'm afraid, an anti-pattern.
No, it's not. And I clearly spelled out the case against the usage of `unowned`: in terms of code understandability and risk management, using `[weak self]` and `self!` is strictly better than `[unowned self]` (which also removes the requirement to write `self` in the closure), and that is the advice that I would give to an inexperienced developer.
The anti-pattern is automatically returning in the `else` branch, without thinking about it. The anti-pattern, more generally, is not thinking about it and using a default that removes from the code any form of flagging that something could go wrong, which is exactly what `[unowned self]` does.

This is simply not true: it's not the case at all that the application will enter an "unspecified state" in the `else` branch of a `guard let self = self`; it depends on what you do in that branch.
No, it depends on what you were going to do on the other branch. When you refer to another object in a callback, there are really only 3 things you can do with it:
- Read a value
- Write a value
- Call a function
Now, there are circumstances where you were going to read a value (and couldn't capture it in advance), but you literally don't care if that data exists or not, or you were going to write a value, but actually it is truly optional and really nothing changes regardless of whether that storage location still exists, or you were going to call a function, but you don't care if your callback ends up screaming into the void - but those tend to be the exception.
The more usual case is that you take actions for a reason - you needed that data, or some other object needed that function call, and without it, your application ends up doing nonsensical things.

Developers are supposed to produce software that users find useful, and I don't see any user thanking a developer because they didn't allow them to use an app in an unspecified state: again, there's a lot of stuff that one can do instead of just giving up, in fact crashing is not a software development matter, but a user experience matter.
This is the key point where we differ: my belief is that crashing is not solely a user experience matter, or that user experience doesn't mean what you think it means.
There are far worse things you can do than crashing, and an application which finds itself in a state its creators never envisioned could quite easily end up destroying data, draining the user's battery, purchasing things which cost the user real money, leaking their private information, etc. Would you really consider hobbling along, praying that those things don't happen, to be a better user experience than crashing? Try asking your users what they think.
Yeah, sometimes it's thankless - because users should be able to take it for granted. You don't thank every driver for not hitting you with their car.

No, it depends on what you were going to do on the other branch. When you refer to another object in a callback, there are really only 3 things you can do with it:
- Read a value
- Write a value
- Call a function
Now, there are circumstances where you were going to read a value (and couldn't capture it in advance), but you literally don't care if that data exists or not, or you were going to write a value, but actually it is truly optional and really nothing changes regardless of whether that storage location still exists, or you were going to call a function, but you don't care if your callback ends up screaming into the void - but those tend to be the exception.
The more usual case is that you take actions for a reason - you needed that data, or some other object needed that function call, and without it, your application ends up doing nonsensical things.
I'm not sure these tend to be the exception, especially with UI handlers, which are one of the most frequent usages of `[weak self]`. I'm not sure about this, but you might be referring to a case where a callback is stored by object A (and references `self`) but is executed by object B, which assumes that object A will do something when the callback is executed: if this is the case, I think this is simply not a good solution to this kind of problem. In general, code that "assumes" that things will happen at a distance – because some function is called – can get out of control (and become impossible to understand) very easily. A value should be returned from that callback, signaling what happened, or returning the needed information; in the case of entering the `else` branch of a `guard`, the callback could return `nil`, a specific value that describes the situation, like a failed `Result`, or even throw, so the caller can understand what happened without making unwarranted assumptions.
But this is just a specific case: my point is, on a case-by-case basis there's (usually) always a choice, and there's always a way to preserve useful information, in order to offer the user a good experience.
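A hedged sketch of that signaling idea (all names invented): the callback returns a `Result`, so the caller learns that the owner was gone instead of assuming the work happened.

```swift
enum HandlerError: Error {
    case ownerDeallocated
}

// Object B calls the handler and learns whether object A was still alive to do
// the work, instead of assuming the side effect happened at a distance.
final class Processor {
    func makeHandler() -> (Int) -> Result<Int, HandlerError> {
        return { [weak self] input in
            guard let self = self else {
                return .failure(.ownerDeallocated)
            }
            return .success(self.transform(input))
        }
    }

    private func transform(_ value: Int) -> Int { value * 2 }
}
```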

This is the key point where we differ: my belief is that crashing is not solely a user experience matter, or that user experience doesn't mean what you think it means.
There are far worse things you can do than crashing, and an application which finds itself in a state its creators never envisioned could quite easily end up destroying data, draining the user's battery, purchasing things which cost the user real money, leaking their private information, etc. Would you really consider hobbling along, praying that those things don't happen, to be a better user experience than crashing? Try asking your users what they think.
I think the point where we differ is that you're contrasting a crash with a bunch of terrible consequences, while I think that's a false dichotomy. I agree that crashing is better than the scenarios you envisioned, but there are alternatives that would offer a better, more informative experience to the user, while also being sufficiently informative to the developer. But what I actually see in the field is a lot of developers who think that `[unowned self]` (or force unwrapping in general) is an OK default, and are completely misled on this: that is exactly the pattern that leads a team to not really think about what they're doing, while using users as beta (or rather, alpha) testers for their "product".

Yeah, sometimes it's thankless - because users should be able to take it for granted. You don't thank every driver for not hitting you with their car.
I think this analogy is completely wrong: a crashing app is not a driver that hits you, but a car that turns off randomly while driving. The consequences, of course, are different, but the underlying concept is the same: you're using a product that fails while you're using it, that kicks you out and leaves you confused.
I think both of these positions have merit:
- The status quo around `[weak self]` (e.g. with a `guard let self = self else { return }` by default most of the time) means that developers often do not consider handling the case where `self` is nil. In some circumstances, this can be harmful.
- `[unowned self]` is not always a suitable alternative to `[weak self]`. Some developers consider the risk of crashing in production to be unacceptable, and prefer using `[weak self]` for this reason.
Can we find a middle-ground option that:
- encourages developers to consider the case where `self` is `nil` more often,
- has similar usability / ergonomics to `[unowned self]` (e.g. supports implicit self, doesn't require manual boilerplate),
- and does not crash in production?
I think @NathanLawrence's suggestion of an `[assert self]` / `[assert weak self]` capture specifier is worth exploring further:

Maybe the solution to this situation is not necessarily a silent failure from a `guard`, but an assertion? In development settings, this would give you very clear visibility into potential issues as they crop up, and it would indeed be incredibly difficult to ignore that there's now a near-invisible bug in the app, but in production, users wouldn't see the issue. Perhaps an `[assert self]` or `[expect self]` might be a way to preserve this functionality while still retaining some kind of visibility into the mechanics of what's going on?

This is an interesting direction. It seems reasonable to have a capture specifier that behaves in this way:
```swift
button.tapHandler = { [assert weak self] in
    // ...
}

// is desugared into:

button.tapHandler = { [weak self] in
    guard let self = self else {
        assertionFailure("Weakly captured variable 'self' no longer exists.")
        return
    }
    // ...
}
```
Thoughts on this direction?

Can we find a middle-ground option that:
- encourages developers to consider the case where `self` is `nil` more often,
- has similar usability / ergonomics to `[unowned self]` (e.g. supports implicit self, doesn't require manual boilerplate),
- and does not crash in production?
I just don't think this can be reconciled with Swift's overall design. Unlike Rust, Swift's arithmetic operators crash on overflow in production; array subscripts crash on out-of-bounds access in production (rather than, say, clamping the index); and precondition failures (used liberally in the standard library rather than asserts) do also.
As I noted above, crashing in production when users don't consider the `nil` case (not willy-nilly) is a deliberate feature, not an oversight to be corrected. Some developers do not agree with this, whether in this context or regarding the behavior of operators, subscripts, preconditions, etc. But Swift is an opinionated language; it could have chosen to be more Rust-like in its behavior by not crashing in production, but it does not choose to do so. I don't understand what the limiting principle is here that would make this anything other than the first step in overturning this foundational design decision of Swift wholesale.
As I point out above, Swift's essential bargain to users is that it doesn't crash in production exactly in proportion to the degree to which users explicitly consider an alternative to crashing. I disagree with the characterization that such explicit handling is "manual boilerplate"--it's the user's end of the bargain.
Again, I think all of these difficulties can be overcome in the narrow case of what you're proposing if it's sugar over an `if let` unwrapping, for the reasons discussed above that it doesn't involve any aspect of eliding the unhappy path. But fundamentally, I can't see how any formulation of the problem involving not crashing in production despite not considering the unhappy path is supportable within the design decisions laid down for Swift.