Should deinit be called after explicit consume of reference type?

Thanks, that's an excellent document. The rules and rationale that it lays out make a lot of sense. There's a lot more nuance and complexity to it than I would have guessed.

I concur that in general it's unwise to make timing assumptions when implementing deinit, since the author nominally can't control how instances are used. But arguably it's different for the user of the class - they might completely control the use. As such, when consume is used explicitly like that, it's the author telling the compiler "make sure this reference goes away right here", in which case any otherwise-normal automatic lifetime extension should be suppressed. (The compiler could still issue an error diagnostic if it can see that lifetime extension is required for correctness, in which case the author has to fix their code - and perhaps more importantly their understanding of their code.)
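For concreteness, here's a minimal sketch of the kind of usage I mean (Connection and handle are just placeholder names); whether the deinit is actually guaranteed to run at the marked line is, of course, the question at hand:

```swift
final class Connection {
    deinit { print("connection torn down") }
}

func handle() {
    let connection = Connection()
    // ... use connection ...

    // Intent: end this binding's lifetime right here and, if this was the
    // last strong reference, release (and deinit) the instance at this point.
    _ = consume connection

    // Any use of `connection` past this line is a compile-time error.
    print("after consume")
}
```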

Maybe I'm a dinosaur? :slightly_smiling_face: I've never liked the wild uncertainty of garbage collection, and I grew up in the manual retain-release era, so in my mind reference types are deterministic if you just understand their use (which you often do in any reliable retain-release application, because retain-release doesn't magically make lifetime concerns, such as peak memory usage, go away).

I recognise that there's also the use case where _ = consume is just a way to say "don't let me use this again" (as an alternative to using block scoping to the same effect). Possibly these two functions should be separated into orthogonal directives? I'm not particularly a fan of using consume for that purpose anyway - it seems incidental and a bit hacky.
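To illustrate, a rough sketch of the two alternatives (Token and register(_:) are just placeholders):

```swift
final class Token {}
func register(_ token: Token) { /* placeholder */ }

func demo() {
    // 1. Block scoping: the binding simply doesn't exist afterwards.
    do {
        let scopedToken = Token()
        register(scopedToken)
    }

    // 2. Explicit consume: the binding is still lexically in scope,
    //    but the compiler rejects any further use of it.
    let token = Token()
    register(token)
    _ = consume token
    // register(token)   // error: 'token' used after being consumed
}
```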

Is that bad? It seems like a useful way to distinguish different intents. The use of an explicit consume in the caller is a way to say "I want to be done with this variable right here, irrespective of what the callee does", whereas the implicit consumption case is about the callee's preference instead. The caller and callee can have different preferences here (just not diametrically opposed ones, e.g. a consuming callee and a non-consuming caller for a non-copyable type).
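A hedged sketch of the two sides, assuming made-up Job / enqueue names - the consuming parameter modifier expresses the callee's preference, while consume at the call site expresses the caller's:

```swift
final class Job {
    deinit { print("job released") }
}

// Callee's preference: enqueue takes ownership of its argument.
func enqueue(_ job: consuming Job) {
    // ... hand the job off or finish with it ...
}

// Caller's preference: end `job`'s lifetime at the call site, regardless of
// which convention the callee uses.
func run() {
    let job = Job()
    enqueue(consume job)   // "I want to be done with this variable right here"
    // `job` is unusable from this point on.
}
```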

Rabbit holing

It seems that automatic lifetime extension is technically a convenience, so that people don't have to use withExtendedLifetime nearly as often? The compiler could instead have chosen to issue an error - essentially "use after free" - in these cases, but it silently makes the code work. Which I think is great, but it does mean that anyone with the mental model that objects deinit / release immediately after their last use has an inaccurate understanding.
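That is, without the automatic extension you'd presumably have to spell the requirement out yourself, something like this (FileBox is a made-up example):

```swift
final class FileBox {
    let descriptor: Int32 = -1   // placeholder for a real file descriptor
    deinit { /* would close the descriptor here */ }
}

func readSomething() {
    let box = FileBox()
    let fd = box.descriptor

    // Nothing above inherently forces `box` to stay alive while `fd` is in
    // use; without lifetime extension its deinit could close the descriptor
    // too early. withExtendedLifetime states the requirement explicitly:
    withExtendedLifetime(box) {
        // ... read from fd while box is guaranteed to still be alive ...
        _ = fd
    }
}
```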

Now that I reflect on it, prior to reading the aforelinked doc I didn't have a precise notion of how Swift handles lifetimes within a given block - e.g. whether it keeps everything alive until the end of the block or only until the last reference therein. I think my presumption was that it's semantically until the end of the block, but that the optimiser, as in all cases, is free to actually do it differently if there's no observable difference. Which seems to be what the aforelinked doc confirms is the ideal, if not always the reality?

But that just ensures the object remains alive until a well-defined point; it doesn't require that the object go away at the end of that block, right?
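To make sure I'm reading it right, my understanding of the guarantee is something like this (Tracker and doUnrelatedWork are placeholders):

```swift
final class Tracker {
    deinit { print("tracker deinited") }
}

func doUnrelatedWork() { /* placeholder */ }

func demo() {
    let tracker = Tracker()
    print("using \(ObjectIdentifier(tracker))")   // last use of `tracker`

    // As I understand it: `tracker` is guaranteed alive up to its last use
    // above, but (barring lifetime extension) Swift neither promises the
    // release happens immediately there nor that the object survives to the
    // end of this block.
    doUnrelatedWork()
}
```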

Related: Asserting that deinit happened

There's a pattern that's been around for ages, but that I've seen promoted a lot recently, for checking that reference types are deinited when they're supposed to be (per the author's intent), particularly as part of unit tests. In a nutshell, it is to take a weak reference to the object and then assert that it's nil later. It sounds like the weak reference could, ironically, actually extend the lifetime of the value in some cases. That's unintuitive (for that pattern).
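For reference, the pattern in question looks roughly like this in a unit test (Cache is a stand-in for whatever type is under test) - and the irony above is that the weak reference itself may be what keeps the instance around:

```swift
import XCTest

final class Cache {}   // stand-in for whatever type is under test

final class CacheLifetimeTests: XCTestCase {
    func testCacheIsDeallocated() {
        weak var weakCache: Cache?
        do {
            let cache = Cache()
            weakCache = cache
            // ... exercise the cache ...
        }
        // The assertion at the heart of the pattern: once the strong
        // reference is out of scope, the instance should be gone too.
        XCTAssertNil(weakCache, "Cache is still alive; something retained it")
    }
}
```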

Maybe there should be a way to deinit a reference type more explicitly - a way to say "release this now, and assert / precondition that it's actually been deallocated as a result"? It seems to be desirable to a lot of people - call it part of "acceptable society". :grin:

Tangent: Delayed deinit optimisations

I am also very interested in the potential for delaying deinit beyond the nominal scope. Potentially way beyond. That might sound odd given my interest in shortening lifetimes, above, but it's for cases like Memory pools (re. binary tree performance) where it'd be awesome if the compiler could transform a bunch of heap allocations into essentially a stack allocation, or an arena more generally, such that allocations are super cheap and deallocation is fantastically cheap (just the whole memory region, not values individually). If anything in this discussion is at odds with permitting that, I'd like to know.
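To be concrete about what I mean by an arena, here's a hand-written approximation of the transformation I'd love the compiler to be able to do automatically (names and layout are purely illustrative): per-node allocation is an index bump, and teardown is a single deallocation.

```swift
final class NodeArena {
    struct Node {
        var value: Int
        var left: Int32 = -1    // child indices into the arena, not references
        var right: Int32 = -1
    }

    private let storage: UnsafeMutablePointer<Node>
    private let capacity: Int
    private var count = 0

    init(capacity: Int) {
        self.capacity = capacity
        storage = .allocate(capacity: capacity)
    }

    // Per-node "allocation" is just a bump of an index - no heap traffic,
    // no per-node reference counting.
    func allocate(value: Int) -> Int {
        precondition(count < capacity, "arena exhausted")
        storage.advanced(by: count).initialize(to: Node(value: value))
        defer { count += 1 }
        return count
    }

    subscript(index: Int) -> Node {
        get { storage[index] }
        set { storage[index] = newValue }
    }

    deinit {
        // Teardown is one deinitialize + one deallocation for the whole tree,
        // however many nodes it holds.
        storage.deinitialize(count: count)
        storage.deallocate()
    }
}
```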
