A roadmap for improving Swift performance predictability: ARC improvements and ownership control

I wonder whether we should spell this kind of thing as a with block, à la:

with greatAunt = mother.father.sister {
  greatAunt.name = "Grace"
  greatAunt.age = 115
}

perhaps with a short form option like

with mother.father.sister {
  .name = "Grace"
  .age = 115
}

The latter might even generalise, so you could do e.g.

with mother.father.sister, cousin.father {
  $0.name = "Grace"
  $0.age = 115
  $1.name = "Bob"
  $1.age = 63
}

i.e. using similar syntax to that for closures.


Your first variant is visually better IMHO:

with a = mother.father.sister, b = cousin.father {
    a.name = "Grace"
    a.age = 115
    b.name = "Bob"
    b.age = 63
}

A question would be whether some of the "EXC_BAD_ACCESS" errors seen in this forum and in those Swift bugs would then be resolved.

From my understanding: as long as one is not doing anything "unsafe", an "EXC_BAD_ACCESS" error is always caused by a Swift bug, right? And when it occurs in an async/await context, it could have something to do with ARC (or, then, with "ownership"), right? So my hope is that improving ARC (especially in async/await contexts) would indeed resolve many of those bugs.

Are you sure that isn’t implemented at a higher-level, in the framework itself? I don’t think that is relevant to Swift as a language.

All code that violates a precondition is incorrect, even if it compiles successfully. The unsafe prefix merely indicates that not all preconditions are checked in release builds (or possibly ever).

This is not a bug in Swift. At worst, it is a limitation of the compiler’s ability to stop you. Some of the items in this roadmap (such as @nonescaping) would expand compile-time checking further, but ignoring documented requirements remains incorrect even now.
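To illustrate with a concrete standard-library example of my own choosing (not one from the roadmap): Optional.unsafelyUnwrapped documents a non-nil precondition that is checked in debug builds but may be skipped in optimized builds, which is exactly the "unsafe" contract described above.

```swift
let present: Int? = 42
// The documented precondition (the optional is non-nil) holds,
// so this access is correct in every build mode.
assert(present.unsafelyUnwrapped == 42)

let absent: Int? = nil
_ = absent
// Accessing `absent.unsafelyUnwrapped` would violate the documented
// precondition. Debug builds trap; a release build may appear to
// "work", but the code is incorrect either way:
// _ = absent.unsafelyUnwrapped
```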


You know, that’s an excellent idea if we want to encourage minimizing the duration of the borrow. This would make it rather similar to an if let block, but without the branching[1].

I especially like how this would read to someone learning the language: “with” has strong implications of possessiveness, like holding a tool in your hand and using it[2]. That’s exactly what is happening with an exclusive borrow, right down to the inability of others to touch it for the duration and other invariants. It also reinforces that you should only perform work relevant to the borrow while holding it.

I can’t say I’m a fan of your implicit-member or anonymous argument shorthand, though. They are much less clear than naming new bindings.

  1. Branching could be accomplished with the pleasingly orthogonal if with, which reads quite well in my opinion. ↩︎

  2. And unlike in Python, it doesn’t come with an intimidating amount of baggage related to cleanup delegates that doesn’t seem relevant to the chosen keyword. ↩︎
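For what it's worth, the scoped shape (though not the proposed exclusivity semantics) can be approximated in today's Swift with a plain generic helper; the function name with and the Person type below are my own illustration, not part of the roadmap:

```swift
// A rough stand-in for a scoped `with` block using an ordinary generic
// function. Swift's existing exclusivity enforcement applies to the
// inout access for the duration of the call.
func with<T>(_ value: inout T, _ body: (inout T) throws -> Void) rethrows {
    try body(&value)
}

struct Person {
    var name = ""
    var age = 0
}

var greatAunt = Person()
with(&greatAunt) {
    $0.name = "Grace"
    $0.age = 115
}
assert(greatAunt.name == "Grace" && greatAunt.age == 115)
```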


When aiming for brevity, I personally try to look for redundant information at the point of use. This is distinct from repetition, as the same binding may be used in different roles without being redundant. If I do that with your first example:

I end up with something like

with greatAunt = mother.father.sister {
  // I'm not actually suggesting any language work like this
  greatAunt.[name = "Grace", age = 115]
}

Given the potential to reference things that are not the borrowed value, implicit member syntax seems inappropriate. Unnamed parameters are also unsuitable: the name greatAunt provides additional clarity when reading the code[1].

You could use modifier-chaining, but you probably wouldn't be able to use mutation to do it, thus defeating the purpose of the borrow:

with greatAunt = mother.father.sister {
  // LHS indicates destination, RHS indicates source, no redundancy
  greatAunt = greatAunt.name("Grace").age(115)
}

It would also clutter the namespace. Other options include making a list of unary mutating closures then calling each one with greatAunt, but that sort of inversion is why modifier-chaining exists in the first place.

You could probably do something with a sequence of key paths and assigned values, but in this case you'd have different types being assigned to different key paths. That would mean using existential types, which is almost certainly not worth doing.

All of these seem worse than just writing it out. The more times you have to do it, the more likely it is that not having that name would make it harder for someone to understand what it is doing at a glance.

  1. That may not always be the case, but it is here. Besides, it'd be unreasonable to add that only for this construct and not things like if let. ↩︎

Yeah, I think there are use cases for both ad-hoc variable binding and block-scoped forms of these bindings, depending on whether you want strictly scoped semantics or point-of-last-use semantics for when the borrow is released. Rust transitioned fairly early on from lexical lifetimes to point-of-last-use lifetimes for borrows, which suggests to me that point-of-last-use is the more ergonomic "default" choice, but an explicit "with" block for scoped borrows seems useful, particularly for working with unsafe resources where the compiler alone can't fully assert that the lifetimes are correct for safe execution.


I think they were talking about using it instead of inout x = &y, for exclusive borrowing. That is, not for lifetime control.

They're different ways of looking at the same thing. If you wrote:

inout x = &y

then y needs to be exclusively borrowed to give you access to x for, at minimum, the lifetime of x's use, up to doStuffWith(x). If you had other code that informally relied on y being exclusively borrowed, such that variable's lifetime of use isn't sufficient, then a with block would be another way of expressing that, like:

with inout _ = &y {
  // ...
}

Sorry, I see what you mean now.

I prefer the with block, since it allows you to maintain invariants quite handily. You may not want to release the exclusive borrow just because you are finished using it. Furthermore, you shouldn’t be doing anything that doesn’t rely on the exclusive borrow in between acquiring it and releasing it.

I don’t really think inout x = &y is particularly intuitive in comparison, and I think it is misleadingly similar to var and let despite having a very different meaning (the & is doing a lot of work to compensate for that).

I also think if with would lend itself quite well to replacing if let syntax for many purposes, as @Chris_Lattner3 has discussed.

On the other hand, maintaining invariants may not be a relevant issue, since everyone would presumably be using the concurrency model to handle it: you’d have until the next await.

Sorry, copy forwarding is the pass that implements the "forward ownership of last use" rule. I gave an example of safe code that this broke in the OP.


In fairness, “forward ownership of last use” doesn't appear to be defined anywhere either, but I think I know what you're talking about: the rule where delegate's lifetime has ended by the time callDelegate is invoked, because it was only weakly-referenced:


IMO it's unsurprising that if you want to keep a thing alive, a single weak reference doesn't cut it, and that that code can reasonably be declared wrong.

Also, the fix seems entirely reasonable…

let delegate = MyDelegate(controller)
withExtendedLifetime(delegate) { MyController(delegate).callDelegate() }

…although it's non-obvious to me where, if the programmer is expecting this delegate to persist due to a single weak reference, the compiler can release the object referenced by delegate without violating their expectations.

I still feel many of my questions are unanswered:

  1. In what way does lifetime shortening “break debug workflows?”
  2. Do you have code that is broken by ending lifetimes immediately after the last use but can't reasonably be declared to be wrong? You never explicitly made the latter claim about the example referenced here, so I'm not assuming.
  3. Do you consider the lifetime extension guarantees provided by the standard library functions I mentioned to be meaningless, or can we agree that we have in fact, had guarantees all along?
  4. What, in the end, is the argument for separating the user-visible concepts “escaping argument” and “consumed argument” as a first evolutionary step, when they are so often coupled, and can conceivably remain so?



Many users expect local variables to be inspectable for their entire lexical scope.

It's easy to call out an example like the one I posted as "wrong" in isolation, but it becomes harder when the same example appears in the wild in hundreds of projects. We live in a society, for better or worse.

withExtendedLifetime's entire purpose has been to set a minimum bound for a value's lifetime, so no, I wouldn't say that it's meaningless. It's probably the only firm rule for lower-bounding a value's lifetime that we've had up to this point.
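A runnable sketch of that lower bound (the Resource class and demo function are my own stand-ins): whatever the optimizer does elsewhere, the argument to withExtendedLifetime is guaranteed to stay alive until the closure returns.

```swift
final class Resource {
    static var isAlive = false
    init() { Resource.isAlive = true }
    deinit { Resource.isAlive = false }
}

func demo() -> Bool {
    let resource = Resource()
    weak var weakRef: Resource? = resource
    var aliveInsideClosure = false
    // `resource` is never used again below, but withExtendedLifetime
    // lower-bounds its lifetime to the end of this closure.
    withExtendedLifetime(resource) {
        aliveInsideClosure = Resource.isAlive && weakRef != nil
    }
    return aliveInsideClosure
}

assert(demo())
```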

As we currently envision it, "consumed" does imply "escaping". "Nonescaping" puts an additional constraint on nonconsumed, copyable arguments, promising that the callee won't copy the value out of scope.



This just doesn't seem like a serious problem to me.

It's easy to call out an example like the one I posted as "wrong" in isolation, but it becomes harder when the same example appears in the wild in hundreds of projects. We live in a society, for better or worse.

You're conflating distinct things. What people have done with the language does not define the language rules; design intent and available documentation do. The code we're talking about violates the language rules as designed and documented (even if you think it's only weakly documented), and is therefore wrong by current measures, no matter how many times it occurs in the wild. The wild is full of data races and type-based aliasing violations, too.

That is not in itself an argument that we shouldn't change the language rules to make that code correct, but we should be clear that's what's being proposed, rather than refuse to admit that the code is currently wrong, which unfairly biases the decision in favor of the change. Sometimes declaring incorrect code correct is worth it; sometimes it isn't. When that's what we're considering, it changes the way we think about the trade-offs.

withExtendedLifetime's entire purpose has been to set a minimum bound for a value's lifetime, so no, I wouldn't say that it's meaningless. It's probably the only firm rule for lower-bounding a value's lifetime that we've had up to this point.

There are a few others, surely. Most obviously, any read or write of a stored property lower-bounds the value's lifetime, and there are some limits on the reordering of reads and writes implicit in the language semantics. I would guess the withUnsafeMutableXXX functions are also guaranteed to keep the target object alive.

And I'm saying we could avoid much complexity by evolving the language to mandate that all escaping arguments be annotated as such. Every consumed argument would be so-annotated and thus non-consumed arguments would automatically be non-escaping.

Sean Parent recently suggested “sink” as a lightweight replacement for @escaping/consumed/whatever. Personally, I love this picture:

func f0(x: T) { ... }           // x is non-consumed and non-escaping.
func f1(x: inout T) { ... }     // x is non-consumed and non-escaping.
func f2(x: sink T) { ... }      // x is consumed and escaping.

func f3(x: escaping T) { ... } // x is non-consumed and escaping

The last notation (or some variant of it) could be added later, if necessary.


Who is the language designed for, the optimizer or the programmers? Nobody expects a variable’s lifetime to end in the middle of evaluating arguments to a function call. They didn’t expect it ten years ago when the Objective-C compiler tried adding the same optimization.

Programmers, obviously, and the principle of ending lifetimes at last use was designed to benefit them, not the optimizer.

Nobody expects a variable’s lifetime to end in the middle of evaluating arguments to a function call.

Um, sorry, but that's false: I do. I had to account for that possibility in writing the standard library, and I still do account for it whenever it's necessary.

They didn’t expect it ten years ago when the Objective-C compiler tried adding the same optimization.

The point of Swift was to establish a new and better set of language rules, not necessarily beholden to the expectations that applied to Objective-C. I consider it a defect that the compiler didn't end lifetimes at last use in Swift v1, which by now could have changed what Swift programmers expect. But it didn't, so it's possible that Hyrum's law prevails here, and the designed rules have been so widely violated that we can't afford to keep them.

I don't know how to decide whether that's true, but the mere fact that some people think they have a right to have their expectations fulfilled doesn't seem like enough of a basis for making that decision. We should consider all the consequences of a rules change, and not just whether some people are initially outraged by what happens when the designed semantics are fully realized. You can mollify outrage with messaging, but there's usually no cure for language design mistakes.


You will need to make a very strong argument that ending lifetimes in the middle of an expression, rather than at a statement boundary, is “better”. Because in practice it leads to crashes.

Just a data point: I have found it super confusing that lifetimes don't end early (after their last use) in debug builds.

I think this behavior makes a ton of sense, but debug and release behaving differently is definitely annoying.

I have been using withExtendedLifetime(foo) { } to explicitly extend lifetimes and I think it actually makes the code easier to follow.


For the record, I share your and @dabrahams’s opinion that lifetimes should not extend to the end of the declaring scope. My (strong) objection is to lifetimes ending within an expression, especially in between argument evaluation and function/setter invocation*. In practice, this leads to crashing bugs in code that appears straightforward, which is why it has been reverted every time it has been attempted in two different compilers.

If an official Swift release had ever shipped this behavior, the natural result I foresee is that some influential developer would have blogged about their nightmare debugging a crash caused by a value being deinit’ed in the middle of an expression, and “use withExtendedLifetime { } around all your weak variables” would be cargo-culted to the same extent as “don’t use force-unwrapping” has been.

* P.S.: Do you expect this code can ever crash?

weak var delegate = Delegate()
let strongDelegate: Delegate? = delegate
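For what it's worth, a minimal reduction (labels mine) of why that first line is already on thin ice under current compilers: no strong reference to the Delegate ever exists, so ARC is free to release the instance immediately, and the weak variable observes nil before the second line even runs. Today's compiler warns that the instance will be immediately deallocated.

```swift
final class Delegate {}

// The temporary Delegate() is never stored strongly; ARC releases it
// as soon as the weak assignment completes, so `delegate` is nil here.
weak var delegate = Delegate()
let strongDelegate: Delegate? = delegate
assert(delegate == nil)
assert(strongDelegate == nil)
```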

If mistakes like this are truly common, I think it would make much more sense to add a compiler warning instead of extending the lifetime to match mistaken expectations.

This would, obviously, require a way to silence the warning. But it would be an improvement on silently being less efficient than it could be.

When a function not marked with @discardableResult returns a value, a compile-time warning prompts the programmer to explicitly discard it rather than silently doing so. This is because it is often not the actual intent.

Similarly, it is reasonable to argue that ending a lifetime in the middle of an expression is confusing. That does not mean we should silently accommodate that mistake.
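To make the analogy concrete (the function names are my own): without the attribute, discarding a result draws a warning that `_ =` silences explicitly; with it, the compiler stays quiet.

```swift
// Ignoring this result produces "result of call ... is unused".
func computeChecksum() -> Int {
    42
}

// The attribute opts out: callers may drop the result silently.
@discardableResult
func logAndReturn(_ message: String) -> Int {
    message.count
}

_ = computeChecksum()   // explicit discard: no warning
logAndReturn("hello")   // no warning, thanks to @discardableResult
assert(computeChecksum() == 42)
assert(logAndReturn("hello") == 5)
```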