I would be much happier with a model that always capped the lifetimes of owned local variables to their last syntactic use in scope — prior to any optimization — than one that guaranteed the full scope. If that's the compromise that we need in order to still allow the practical optimizability of copies in Swift, I think I can live with it.
At that point, I assume we would guarantee the optimization as a language rule, which means we'd need to define what exactly it meant in terms of loops and other conditional control flow. But that doesn't seem particularly hard, just a bit onerous.
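For concreteness, here is a sketch of how I'd expect the last-syntactic-use rule to read in the presence of a loop (all helper functions here are hypothetical, invented for illustration):

func render() {
    let image = loadImage()        // hypothetical helper
    for frame in 0..<10 {
        draw(image, at: frame)     // last syntactic use is inside the loop,
    }                              // so the lifetime must cover every iteration
    doUnrelatedWork()              // ...but `image` may be released before this line
}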
And what is that optimization, please? Google is not helping me out here.
The major problem we ran into is that it broke many people's debug workflows,
In what way?
and also broke a lot of code that assumed longer lifetimes than we actually guaranteed—because, as @lukasa, @Andrew_Trick, and others noted, we didn't really have any guarantees
That's only true if you read “the pointer argument is valid only for the duration of the method’s execution” as meaning the pointer argument is never guaranteed to be valid, and instead the standard library is vending a complete fiction. I don't think that's reasonable, personally.
Also, maybe I just wrongly assumed, but I was under the impression that deferred code was guaranteed to run after the rest of the code in the block. If that's not true, I think it would be a reasonable place to add a happens-before relation.
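For example, I'd expect this idiom to be reliable; the types here are hypothetical, and the point is only the ordering of the deferred block:

func fire() {
    let delegate = MyDelegate()                   // hypothetical type
    defer { withExtendedLifetime(delegate) {} }   // assumed to run after everything below
    // If deferred code happens-after the rest of the block, this call
    // cannot observe `delegate` as already released:
    makeController(delegate).callDelegate()       // hypothetical
}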
and it wasn't just "unsafe" code, or code that could reasonably be declared to be wrong, that was affected.
I'd like to see some examples of code that is broken by ending lifetimes immediately after the last use, but can't reasonably be declared to be wrong, please. I simply can't imagine it.
We do have to define some rule in order to establish guardrails as the restraints on the optimizer get lifted by better SIL infrastructure.
There are rules implicit in the documentation of standard library functions like withExtendedLifetime. If the implementations of those functions haven't been blessed in such a way that they can uphold their guarantees in the face of the semantic ARC changes, then that is the first step to take. If those functions can be used to fix the code that is breaking with the new changes, it's completely reasonable to argue that the code is already wrong.
If those functions aren't sufficient to fix the code that is breaking, another alternative that should be considered is to add additional guarantee-providing functions to the standard library, rather than complicating the core language.
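For reference, the documented pattern I have in mind is just this (the Resource type and the observer call are assumptions for illustration):

let resource = Resource()                 // hypothetical class
withExtendedLifetime(resource) {
    // The standard library documents that `resource` stays alive for
    // the whole closure, regardless of where its last use falls.
    registerObserver(of: resource)        // hypothetical
}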
Another situation where you may need to copy something out of a value without consuming it is if you have a mutating method on the value that caches some part of the value going in;
I don't see a problem here. That mutating method is non-consuming, and it makes a copy of that part of the parameter internally. Only whole-parameter copies need to be prevented from escaping when the parameter is not so annotated.
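A sketch of what I mean, with an assumed type:

struct Document {
    var contents: String
    private var rendered: String?

    // A non-consuming `mutating` method: it copies one stored property
    // (part of the value) into a cache without consuming the whole value.
    mutating func cacheRendering() {
        rendered = contents.uppercased()  // partial copy, not a whole-value escape
    }
}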
Really? I'm thinking about actual memory usage here. To my understanding, the runtime tries to reuse memory blocks of the same size, as this pattern occurs often. The consequence is that if you are, e.g., loading big image files into memory inside a loop, the memory is not freed after each iteration; and since the images are of different sizes, memory cannot be reused, so more and more memory is allocated, faster than memory is released. From the original description I thought that this would be resolved by the proposal, meaning that memory is freed immediately as each scope ends (in the example, with a performance penalty compared to the old behaviour in the case that the image files are indeed the same size).
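The pattern being described, sketched with hypothetical helpers:

for url in imageURLs {                // hypothetical list of image files
    let image = loadImage(url)        // large, differently-sized allocation
    process(image)                    // last use of `image`
    // Under scope-based lifetimes, `image` survives to the end of the
    // iteration, so the next allocation coexists with this one; under
    // last-use lifetimes it could be freed right after process().
}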
I wonder whether we should spell this kind of thing as a with block, à la:
with greatAunt = mother.father.sister {
    greatAunt.name = "Grace"
    greatAunt.age = 115
}
perhaps with a short form option like
with mother.father.sister {
    .name = "Grace"
    .age = 115
}
The latter might even generalise, so you could do e.g.
with mother.father.sister, cousin.father {
    $0.name = "Grace"
    $0.age = 115
    $1.name = "Bob"
    $1.age = 63
}
i.e. using similar syntax to that for closures.
Your first variant is visually better IMHO:
with a = mother.father.sister, b = cousin.father {
    a.name = "Grace"
    a.age = 115
    b.name = "Bob"
    b.age = 63
}
A question would be whether some of the "EXC_BAD_ACCESS" errors seen in this forum, and those Swift bugs, would then be resolved.
From my understanding: as long as one is not doing anything "unsafe", an "EXC_BAD_ACCESS" error is always due to a Swift bug, right? And when it occurs in an async/await context, it could have something to do with ARC (or then "ownership"), right? So my hope is that improving ARC (especially in async/await contexts) would indeed resolve a lot of those bugs.
Are you sure that isn’t implemented at a higher level, in the framework itself? I don’t think that is relevant to Swift as a language.
All code that violates a precondition is incorrect, even if it compiles successfully. The unsafe prefix merely indicates that not all preconditions are checked in release builds (or possibly ever).
This is not a bug in Swift. At worst, it is a limitation of the compiler’s ability to stop you. Some of the items in this roadmap (such as @nonescaping) would expand compile-time checking further, but ignoring documented requirements remains incorrect even now.
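To illustrate what ignoring a documented requirement looks like (my own example, not one from this thread):

var value = 42
// The documentation says the pointer is valid only for the duration
// of the closure; returning it escapes that validity window.
let dangling = withUnsafePointer(to: &value) { $0 }
print(dangling.pointee)   // compiles, but is incorrect: a precondition violation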
You know, that’s an excellent idea if we want to encourage minimizing the duration of the borrow. This would make it rather similar to an if let block, but without the branching[1].
I especially like how this would read to someone learning the language: “with” has strong implications of possessiveness, like holding a tool in your hand and using it[2]. That’s exactly what is happening with an exclusive borrow, right down to the inability of others to touch it for the duration and other invariants. It also reinforces that you should only perform work relevant to the borrow while holding it.
I can’t say I’m a fan of your implicit-member or anonymous argument shorthand, though. They are much less clear than naming new bindings.
When minimizing verbosity, I personally try to look for redundant information at the point of use. This is distinct from repetition, as the same binding may be used in different roles without being redundant. If I do that with your first example:
I end up with something like
with greatAunt = mother.father.sister {
    // I'm not actually suggesting any language work like this
    greatAunt.[name = "Grace", age = 115]
}
Given the potential to reference things that are not the borrowed value, implicit member syntax seems inappropriate. Unnamed parameters are also unsuitable: the name greatAunt provides additional clarity when reading the code[1].
You could use modifier-chaining, but you probably wouldn't be able to use mutation to do it, thus defeating the purpose of the borrow:
with greatAunt = mother.father.sister {
    // LHS indicates destination, RHS indicates source, no redundancy
    greatAunt = greatAunt
        .name("Grace")
        .age(115)
}
It would also clutter the namespace. Other options include making a list of unary mutating closures then calling each one with greatAunt, but that sort of inversion is why modifier-chaining exists in the first place.
You could probably do something with a sequence of key paths and assigned values, but in this case you'd have different types being assigned to different key paths. That would mean using existential types, which is almost certainly not worth doing.
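For what it's worth, the closest workable variant I can imagine wraps each key path and value in a setter closure, which erases the differing value types without existentials (the Person type is assumed):

struct Person {
    var name: String
    var age: Int
}

// Pair each key path with its value inside a uniformly-typed closure.
func assignment<Value>(
    _ keyPath: WritableKeyPath<Person, Value>,
    _ value: Value
) -> (inout Person) -> Void {
    { $0[keyPath: keyPath] = value }
}

var greatAunt = Person(name: "", age: 0)
let updates = [assignment(\.name, "Grace"), assignment(\.age, 115)]
for update in updates {
    update(&greatAunt)   // each closure writes through its captured key path
}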
All of these seem worse than just writing it out. The more times you have to do it, the more likely it is that not having that name would make it harder for someone to understand what it is doing at a glance.
That may not always be the case, but it is here. Besides, it'd be unreasonable to add that only for this construct and not things like if let. ↩︎
Yeah, I think there are use cases for both ad-hoc variable binding and block-scoped forms of these bindings, depending on whether you want strictly scoped semantics or point-of-last-use semantics for when the borrow is released. Rust transitioned fairly early on from lexical lifetimes to point-of-last-use lifetimes for borrows, which suggests to me that point-of-last-use is the more ergonomic "default" choice. Still, an explicit "with" block for scoped borrows seems useful, particularly for working with unsafe resources where the compiler alone can't fully assert that the lifetimes are correct for safe execution.
I think they were talking about using it instead of inout x = &y, for exclusive borrowing. That is, not for lifetime control.
They're different ways of looking at the same thing. If you wrote:
inout x = &y
doStuffWith(x)
then y needs to be exclusively borrowed to give you access to x for, at minimum, the lifetime of x's use, up to doStuffWith(x). If you had other code that informally relied on y being exclusively borrowed, such that the variable's lifetime of use isn't sufficient, then a with block would be another way of expressing that, like:
with inout _ = &y {
    doStuffThatReliesOnYBeingExclusivelyAccessed()
}
Sorry, I see what you mean now.
I prefer the with block, since it allows you to maintain invariants quite handily. You may not want to release the exclusive borrow just because you are finished using it. Furthermore, you shouldn’t be doing anything that doesn’t rely on the exclusive borrow in between acquiring it and releasing it.
I don’t really think inout x = &y is particularly intuitive in comparison, and I think it is misleadingly similar to var and let despite having a very different meaning (the & is doing a lot of work to compensate for that).
I also think with would lend itself quite well to replacing if let syntax for many purposes, as @Chris_Lattner3 has discussed.
On the other hand, maintaining invariants may not be a relevant issue, since everyone would presumably be using the concurrency model to handle it: you’d have until the next await.
Sorry, copy forwarding is the pass that implements the "forward ownership of last use" rule. I gave an example of safe code that this broke in the OP.
In fairness, “forward ownership of last use” doesn't appear to be defined anywhere either, but I think I know what you're talking about: the rule where delegate's lifetime has ended by the time callDelegate is invoked, because it was only weakly-referenced:
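Reconstructing from the fix quoted below (the types and their details are my assumption, not code from the original post):

class MyDelegate {
    init(_ controller: MyController?) {}
    func run() { print("delegate called") }
}

class MyController {
    weak var delegate: MyDelegate?            // only a weak reference
    init(_ delegate: MyDelegate) { self.delegate = delegate }
    func callDelegate() { delegate?.run() }   // may observe nil
}

let controller: MyController? = nil           // placeholder
let delegate = MyDelegate(controller)
// `delegate`'s last use is evaluating the init argument, so under the
// last-use rule nothing keeps the object alive while callDelegate() runs.
MyController(delegate).callDelegate()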
(right?)
IMO it's unsurprising that if you want to keep a thing alive, a single weak reference doesn't cut it, and that such code can reasonably be declared wrong.
Also, the fix seems entirely reasonable…
let delegate = MyDelegate(controller)
withExtendedLifetime(delegate) { MyController(delegate).callDelegate() }
…although if the programmer is expecting this delegate to persist due to a single weak reference, it's non-obvious to me where the compiler could release the object referenced by delegate without violating their expectations.
I still feel many of my questions are unanswered:
- In what way does lifetime shortening “break debug workflows?”
- Do you have code that is broken by ending lifetimes immediately after the last use but can't reasonably be declared to be wrong? You never explicitly made the latter claim about the example referenced here, so I'm not assuming.
- Do you consider the lifetime extension guarantees provided by the standard library functions I mentioned to be meaningless, or can we agree that we have, in fact, had guarantees all along?
- What, in the end, is the argument for separating the user-visible concepts “escaping argument” and “consumed argument” as a first evolutionary step, when they are so often coupled, and can conceivably remain so?
Thanks,
Dave
Many users expect local variables to be inspectable for their entire lexical scope.
It's easy to call out an example like the one I posted as "wrong" in isolation, but it becomes harder when the same example appears in the wild in hundreds of projects. We live in a society, for better or worse.
withExtendedLifetime's entire purpose has been to set a minimum bound for a value's lifetime, so no, I wouldn't say that it's meaningless. It's probably the only firm rule for lower-bounding a value's lifetime that we've had up to this point.
As we currently envision it, "consumed" does imply "escaping". "Nonescaping" puts an additional constraint on nonconsumed, copyable arguments, promising that the callee won't copy the value out of scope.
-Joe
This just doesn't seem like a serious problem to me.
It's easy to call out an example like the one I posted as "wrong" in isolation, but it becomes harder when the same example appears in the wild in hundreds of projects. We live in a society, for better or worse.
You're conflating distinct things. What people have done with the language does not define the language rules; design intent and available documentation do. The code we're talking about violates the language rules as designed and documented (even if you think it's only weakly documented), and is therefore wrong by current measures, no matter how many times it occurs in the wild. The wild is full of data races and type-based aliasing violations, too.
That is not in itself an argument that we shouldn't change the language rules to make that code correct, but we should be clear that's what's being proposed, rather than refuse to admit that the code is currently wrong, which unfairly biases the decision in favor of the change. Sometimes declaring incorrect code correct is worth it; sometimes it isn't. When that's what we're considering, it changes the way we think about the trade-offs.
withExtendedLifetime's entire purpose has been to set a minimum bound for a value's lifetime, so no, I wouldn't say that it's meaningless. It's probably the only firm rule for lower-bounding a value's lifetime that we've had up to this point.
There are a few others, surely. Most obviously, any read or write of a stored property lower-bounds the value's lifetime, and there are some limits on the reordering of reads and writes implicit in the language semantics. I would guess the withUnsafeMutableXXX functions are also guaranteed to keep the target object alive.
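For instance (a sketch; the Box class is assumed):

final class Box { var value: Int = 0 }

let box = Box()
box.value = 2            // this write keeps `box` alive at least until here

var numbers = [1, 2, 3]
numbers.withUnsafeMutableBufferPointer { buffer in
    buffer[0] = 42       // the array's storage is guaranteed to stay alive
}                        // for the duration of the closure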
And I'm saying we could avoid much complexity by evolving the language to mandate that all escaping arguments be annotated as such. Every consumed argument would be so-annotated and thus non-consumed arguments would automatically be non-escaping.
Sean Parent recently suggested “sink” as a lightweight replacement for @escaping/consumed/whatever. Personally, I love this picture:
func f0(x: T) { ... }          // x is non-consumed and non-escaping.
func f1(x: inout T) { ... }    // x is non-consumed and non-escaping.
func f2(x: sink T) { ... }     // x is consumed and escaping.
// YAGNI
func f3(x: escaping T) { ... } // x is non-consumed and escaping.
The last notation (or some variant of it) could be added later, if necessary.
What people have done with the language does not define the language rules; design intent and available documentation do. The code we're talking about violates the language rules as designed and documented (even if you think it's only weakly documented), and is therefore wrong by current measures, no matter how many times it occurs in the wild. The wild is full of data races and type-based aliasing violations, too.
Who is the language designed for, the optimizer or the programmers? Nobody expects a variable’s lifetime to end in the middle of evaluating arguments to a function call. They didn’t expect it ten years ago when the Objective-C compiler tried adding the same optimization.
Who is the language designed for, the optimizer or the programmers?
Programmers, obviously, and the principle of ending lifetimes at last use was designed to benefit them, not the optimizer.
Nobody expects a variable’s lifetime to end in the middle of evaluating arguments to a function call.
Um, sorry, but that's false: I do. I had to account for that possibility in writing the standard library, and I still do account for it whenever it's necessary.
They didn’t expect it ten years ago when the Objective-C compiler tried adding the same optimization.
The point of Swift was to establish a new and better set of language rules, not necessarily beholden to the expectations that applied to Objective-C. I consider it a defect that the compiler didn't end lifetimes at last use in Swift v1, which by now could have changed what Swift programmers expect. But it didn't, so it's possible that Hyrum's law prevails here, and the designed rules have been so widely violated that we can't afford to keep them.
I don't know how to decide whether that's true, but the mere fact that some people think they have a right to have their expectations fulfilled doesn't seem like enough of a basis for making that decision. We should consider all the consequences of a rules change, and not just whether some people are initially outraged by what happens when the designed semantics are fully realized. You can mollify outrage with messaging, but there's usually no cure for language design mistakes.