I personally don’t see the need for it either, but I’m willing to entertain the notion pending a precise description.
If it's just a very misleadingly worded pitch, I suppose that's different; I don't think I have any problem with some selective extension of deinit barriers. But the whole SortedArray initialization example, where there are no weak references or extant unsafe pointers, certainly does seem to strongly imply that this extension isn't very selective. If I can't be guaranteed of a move in that case without explicit annotation, Houston, we have a problem.
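For context, the kind of pattern I have in mind is roughly this (my own sketch, not the pitch's exact code; loadValues() is a hypothetical producer):

struct SortedArray<Element: Comparable> {
    var elements: [Element]
    init(_ elements: [Element]) {
        var elements = elements
        elements.sort()            // in-place only if the buffer is uniquely referenced
        self.elements = elements
    }
}

let raw: [Int] = loadValues()      // hypothetical producer
let sorted = SortedArray(raw)      // last use of `raw`; without a guaranteed
                                   // move, `raw` stays retained and sort()
                                   // has to copy the buffer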
As for implicit vs. explicit, I'm merely contrasting with existing constructs that create explicit notations in code when lifetimes are extended past the last use. In the proposal, IIUC, that would happen in some cases where today it does not, and without any explicit annotation. Thus, it's implicit.
I think the pitch was well-worded and accurate...
withUnsafePointer semantics won't change. Escaping an unsafe pointer from the body closure will still be undefined, and the compiler will rely on that.
withExtendedLifetime will still be useful for controlling deinitialization order, though it will no longer be needed in comically absurd but common situations like this:
class ManagesPointer {
    var pointer: UnsafeMutablePointer<Int>

    init() {
        pointer = .allocate(capacity: 1)
        pointer.pointee = 3
    }

    func read() -> Int {
        return withExtendedLifetime(self) {
            pointer.pointee
        }
    }

    deinit {
        pointer.deallocate()
    }
}
The lexical lifetime model previewed here is primarily a specification that the compiler can be written against. It does not change the basic message to Swift programmers that deinitialization is generally unordered.
The new rules are designed to avoid breaking real code patterns that both non-expert and expert Swift programmers will continue to use intuitively. These code patterns come about because, for example, we force breaking cycles with weak references. We can't reasonably expect programmers to do that while also remembering to explicitly manage the lifetime of the parent reference everywhere, even within statements at the level of sub-expressions. If the compiler starts applying today's ARC rules consistently, then reasonable-looking code will crash only after enabling optimization, with no way to debug or understand the issue.
There is no proposal to "always extend lifetimes to the end of scope". There is no proposal to make deinitialization ordered, as if a call to deinit had the same dependencies as regular function calls. But the reality is that Swift deinitializers have always had synchronous semantics. We never told programmers they needed to synchronize access to shared state, and now we need to live with that. Unlike Java/C# finalizers, it's impossible for the Swift compiler to ignore deinitialization side effects.
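To illustrate the kind of side effect that can't be ignored, a minimal sketch (the shared counter is hypothetical):

var liveCount = 0   // hypothetical shared state

final class Tracked {
    init() { liveCount += 1 }
    deinit { liveCount -= 1 }   // synchronous, observable side effect
}

func check() {
    let t = Tracked()
    print(liveCount)   // prints 1 while `t` is alive
    _ = t
    // Exactly when liveCount drops back to 0 depends on when the
    // release of `t` runs deinit; the compiler can't reorder or drop
    // that release as if deinit had no effects.
}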
What is proposed is that certain specific code constructs are barriers to deinitialization. This actually introduces a new programming model for controlling deinitialization, which is far more powerful than withExtendedLifetime. It follows the same mental model as asynchronous programming, and would work under a superset of the same conditions. Happens-before relationships can now be expressed between deinit and non-deinit code. Not possible before without structuring the code around withExtendedLifetime closures.
Will the new rules make it harder to automatically optimize code patterns like the array sort example? Yes, significantly harder. I'm probably as irritated about that as anyone. But given the inherent limitations of a language designed around interoperability, separate compilation, automatic memory management, and synchronous deinitialization, those optimizations aren't nearly reliable enough to be a practical performance guarantee. I'm afraid that will require new annotations one way or another, regardless of this proposal.
So it doesn't really seem to have anything to do with lexical scopes. It's just additional deinit barriers. But perhaps I'm also misunderstanding it.
I think this illustrates the implicit assumption that everyone makes that variables are lexically scoped. A barrier wouldn't mean anything without a lexical scope.
About the yield statement in this thread’s first post - that’s a similar concept to with and yield in Python. If/when yield is added to the Swift compiler, it would allow more opportunities for Python interop.
If I can't be guaranteed of a move in that case without explicit annotation, Houston, we have a problem.
Is it really a problem, though? It feels very in keeping with the language’s pragmatic history if Swift can usually optimize the store into a move, but is not guaranteed to do so.
func read() -> Int {
    return withExtendedLifetime(self) {
        pointer.pointee
    }
}
Could you explain why withExtendedLifetime is currently needed here? Can self really drop to zero retain count in the middle of the call?
I would assume this line: pointer.pointee is decomposed into two operations during compilation:
let pointer = self.pointer // read pointer from self
return pointer.pointee // read pointee from pointer
and, if inlined within a bigger function, self might no longer be referenced after the first line, so the deinit could be called and deallocate memory before pointee is read.
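To make that concrete, here is a hypothetical picture of what the compiler could end up with after inlining read() into a caller (illustrative only, not actual compiler output):

func caller() -> Int {
    let manager = ManagesPointer()
    let pointer = manager.pointer   // last use of `manager`
    // without withExtendedLifetime, a release of `manager` could be
    // scheduled here, running deinit and deallocating the memory...
    return pointer.pointee          // ...before this load: use-after-free
}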
wow ok, thanks. I would never have thought this was a possibility.
I would assume this line: pointer.pointee is decomposed into two operations during compilation:

let pointer = self.pointer // read pointer from self
return pointer.pointee // read pointee from pointer

and, if inlined within a bigger function, self might no longer be referenced after the first line, so the deinit could be called and deallocate memory before pointee is read.
Would it be better if there was a compiler warning in that scenario, rather than penalizing all such usage?
I feel like anyone using weak references should be expecting this sort of thing.
We wanted to have our cake and eat it too, allowing debug builds to compile straightforwardly, with good debugger behavior, while also retaining the flexibility for optimized builds to reduce memory usage and minimize ARC traffic by shortening lifetimes to the time of use.
It seems like one of the fundamental issues here is actually the disconnect caused by keeping things longer than normal in debug mode. Would it be possible for debug mode to unload things like normal, but first make copies purely for the sake of LLDB?
Could you, for example, unload things at last use like normal, but right before unloading make a copy that is only used for debugging and has no impact on behavior?
About the yield statement in this thread’s first post - that’s a similar concept to with and yield in Python. If/when yield is added to the Swift compiler, it would allow more opportunities for Python interop.
Could someone explain to me precisely what the difference between yield and return is? I’m still a little unclear on the matter, and it sounds like something that is going to be quite common (unlike some of the other elements of this roadmap).
In Python, yield means you halt a function, execute the code nested inside the with statement, then resume the original function. The meaning is slightly different in Swift. In Python, yield doesn't need an argument the way return typically has one; it can just redirect control flow. In Swift, yield does have an argument, which is the resource it acts on.
What’s the benefit of it over return? When would you use one versus the other? How does it affect the caller?
I recommend you read up on the concept of coroutines. Many languages have it; you can probably find an explanation that helps ;)
A hand-wavy explanation is that yield is a return that doesn’t exit the function but just pauses it, giving control back to the caller and continuing execution after the yield when resumed (instead of starting at the top).
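In terms of the read accessor from this roadmap, the control flow looks roughly like this (a sketch assuming the pitched spelling, with a hypothetical stored property):

struct Box {
    private var _value = 0
    var value: Int {
        read {
            // runs when the caller begins the access
            yield _value   // control goes back to the caller here
            // resumes here once the caller's access ends
        }
    }
}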
I get that part (though this is going to need to be explained to people learning Swift as a first language, so you can’t rely on knowledge of Python).
The point I’m confused on is why it would be even remotely relevant to getters and setters which consist of only one statement.
struct Foo {
    private var _x: [Int]
    var x: [Int] {
        read { yield _x }
        modify { yield &_x }
    }
}
Is this something the compiler is actually allowed to do? Is this documented? I know that C# can do this with object finalizers, but I thought one of the advantages of ARC is the fact that object lifespans are deterministic and predictable.
If it is something the compiler is allowed to do, then I think a warning should be put in the "Deinitialization" section of The Swift Programming Language.
Could you, for example, unload things at last use like normal, but right before unloading make a copy that is only used for debugging and has no impact on behavior?
Making a copy has a more drastic impact on behavior than moving an ARC release.
modify helps support in-place mutation, which can help ARC optimizations. Consider the following code:
foo.x.append(0) // `foo` is an instance of `Foo` from Joe Groff's example
If Foo.x were to use get/set, then x would return a new Array instance referencing the contents of _x. append(0) would then mutate that array, triggering a copy-on-write, and then that array would be written back to _x.
Since Foo.x uses modify, x will yield _x's Array instance so that append(0) can directly mutate _x. If _x's backing storage is known to be uniquely referenced and there is capacity for a new element, then this will avoid a copy-on-write.
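For contrast, here's what a get/set version of that property would do (my own sketch, not from the earlier post):

struct GetSetFoo {
    private var _x: [Int] = []
    var x: [Int] {
        get { return _x }       // hands out a second reference to _x's buffer
        set { _x = newValue }   // writes a whole array back
    }
}

var foo = GetSetFoo()
foo.x.append(0)
// 1. `get` returns a temporary array sharing _x's storage.
// 2. `append(0)` sees a non-uniquely-referenced buffer and copies it.
// 3. `set` stores the mutated copy back into _x.
// With `modify`, step 1 instead yields _x itself, so the append can
// mutate the storage in place.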
IIRC this is why modify is already used for Array's subscript (using _modify since it's not yet a stable feature).
You may also want to read this proposal for more information about modify and yield.
I think the pitch was well-worded and accurate...
There seems to be a lot of confusion over exactly what's being pitched here with respect to lifetimes and last-use of variables. I clearly still don't understand what's being proposed. The phrase “lexical lifetimes” seems to lead everyone down a path of deduction that you seem(?) to be telling us is wrong. The apparent misinterpretation extends beyond those who've posted here to quite a few others I know are watching. Maybe in fact the text is perfect and the readers are somehow inadequate… I guess everyone can draw their own conclusions.
That said, I may have been unclear also, because I have never been concerned about the following things you mentioned (excerpted):
- withUnsafePointer semantics won't change.
- withExtendedLifetime will still be useful for controlling deinitialization order.
- …does not change the basic message to Swift programmers that deinitialization is generally unordered.
Continuing on, the only way I know how to make this statement
There is no proposal to "always extend lifetimes to the end of scope".
logically consistent with the rest of what I've read is to conclude that the user model will be, “you have no right to assume the lifetime will end before the end of scope. It might, but if you want to be sure, you'll need to move() out of the variable.” Is that the intention?
But the reality is that Swift deinitializers have always had synchronous semantics. We never told programmers they needed to synchronize access to shared state, and now we need to live with that. Unlike Java/C# finalizers, it's impossible for the Swift compiler to ignore deinitialization side effects.
All true, but I'm afraid I don't understand what you're driving at here. When you talk about “synchronizing access to shared state” I naturally think of preventing data races. But since we never told users (until recently, maybe, with concurrency?) that they could use Swift in code with preemptive multitasking or true parallelism in the first place, and always left implicit the assumption that they would have to find a way to synchronize access to shared state if they ventured into that territory, I imagine this maybe doesn't have anything to do with data races. So I'm at least a bit confused.
Happens-before relationships can now be expressed between deinit and non-deinit code. Not possible before without structuring the code around withExtendedLifetime closures.
I don't think withExtendedLifetime ever gave any such guarantee, since it only defines a lower bound on the end of a lifetime. Being able to control those relationships sounds interesting, although it's unclear to me that there's a compelling case for making that control available at the cost of any complexity. Sometimes it's better to tell users, “don't do that” or “don't depend on that.”
given the inherent limitations of a language designed around interoperability, separate compilation, automatic memory management, and synchronous deinitialization, those optimizations aren't nearly reliable enough to be a practical performance guarantee. I'm afraid that will require new annotations one way or another, regardless of this proposal.
I disagree. The existing @escaping annotation, extended to non-closure types, is enough. I have no expectation that the compiler will automatically move out of the last use of a variable into a function parameter unless that parameter has been labeled @escaping (or is an init parameter, where we already make the assumption that the parameter escapes).
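Something like this hypothetical spelling (not valid Swift today) is what I mean:

final class Buffer {
    private var storage: [UInt8] = []

    // Hypothetical: @escaping on a non-closure parameter would signal
    // that `bytes` is stored away, so a caller's last use of the
    // argument could be moved into the call rather than copied.
    func replace(with bytes: @escaping [UInt8]) {
        storage = bytes
    }
}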
So it doesn't really seem to have anything to do with lexical scopes. It's just additional deinit barriers. But perhaps I'm also misunderstanding it.
I think this illustrates the implicit assumption that everyone makes that variables are lexically scoped. A barrier wouldn't mean anything without a lexical scope.
To the contrary, I think this perfectly illustrates that some people actually think of “scope” in terms of where you are allowed to use a name and what that name refers to, the well-established meaning of that term in CS. Variables are lexically scoped in Swift. Whether you extend the lifetime to the end of such a scope is a mostly orthogonal issue. It is true that some people will assume, as that article says, that “scope is a subset of lifetime” but I believe that statement is wrong, since there are arguably lots of counterexamples.
I clearly still don't understand what's being proposed.
I hope the confusion will be eliminated when the full proposal is ready:
The upcoming proposal will go into more detail about what exactly anchoring means, and what constitutes a barrier
As it stands, this conversation is backing into a definition of “anchoring” instead of letting the proposal authors provide a fully articulated vision.
Are we sure we actually need a yield keyword on top of read and modify? Would there ever be a possibility of having more than one statement in such a block? If not, given the existing implicit return, I think we could just avoid using any keywords at all.
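Hypothetically, that could look something like this (not a pitched spelling, just to illustrate the idea):

struct Foo {
    private var _x: [Int]
    var x: [Int] {
        read { _x }       // hypothetical implicit yield, by analogy
                          // with implicit return
        modify { &_x }    // hypothetical implicit yield of mutable access
    }
}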