[Sub-Pitch] Task-local values in isolated synchronous deinit and async deinit

I have one main concern with isolated deinits: it introduces GC-style 'do it later' memory reclamation (*) into the language, without any backpressure.

This system works okay if the garbage producer (i.e. the piece of code that lets the reference count drop to 0) is slower than the garbage consumer (i.e. the actor's executor that'll run the isolated deinit). There is no backpressure, which means that even if the garbage consumer is overwhelmed, the garbage producer will not be slowed down. That's common in GC'd systems but is usually not the case with ARC. With ARC, the deinit usually runs synchronously, inline at the place that decrements the reference count to 0. So assuming the backpressure works otherwise, ARC won't break it.
After introducing isolated deinit we could however get to a place where the garbage consumer cannot catch up with the garbage producer and we will start to uncontrollably accumulate memory, possibly until an OOM kill (**).

And of course, with ARC it's absolutely possible to either DispatchQueue.async inside of a deinit, or to carefully arrange the code so that the deinit is likely to be triggered on a particular DispatchQueue. But that is done explicitly and by calling library functions (as opposed to using a language feature). So this isn't a new problem, but it would be a new problem if we consider only the language itself.
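For concreteness, the explicit version of that pattern might look like this minimal sketch (the Resource type and its payload are invented for illustration):

```swift
import Dispatch

final class Resource {
    // ~1 MiB payload so the effect of deferring cleanup is visible in memory.
    let buffer = [UInt8](repeating: 0, count: 1 << 20)

    deinit {
        // The last release still runs this deinit inline, but the enqueued
        // closure below is "do it later" work: whoever released us continues
        // immediately, and the buffer stays alive until the main queue runs it.
        let buffer = self.buffer
        DispatchQueue.main.async {
            _ = buffer.count // stand-in for main-thread-only cleanup
        }
    }
}
```

The enqueued closure, and with it the buffer, lives until the main queue gets around to running it -- exactly the kind of implicit, unbounded buffering I'm worried about getting at the language level.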

> The combination of automatic reference counting and deterministic deinitialization makes deinit in Swift a powerful tool for resource management

Yes, ARC is deterministic, although this is very rarely a property that can usefully be exploited by a programmer. Once references are shared across threads, it's usually entirely unpredictable when and where a deinit runs. Yes, the places where swift_retain and swift_release are called are absolutely knowable, but in all but the most trivial cases it's unknowable which swift_release will actually trigger the deinit and what thread that happens on.

What plain ARC gives you however is that once the last swift_release is called, deinit will run immediately, inline and guaranteed. No implicit and hidden buffers that hold onto resources until some unknowable later point in time.

Regarding "powerful tool for resource management": This applies to the narrowest set of resources: The ones that can synchronously be released, on any thread and are abundant enough that no program would need to know the amount currently allocated. This includes memory and really not much else. (Yes, for ~Escapable & ~Copyable resources that can be synchronously destrutured this is different but I consider this a very narrow field of resources).

If we do go ahead and introduce isolated deinits into the language, then the system becomes non-deterministic even in trivial-looking cases like the following:

```swift
import Foundation

@MainActor class Foo {
    var number: Int
    init(number: Int) { self.number = number }
    isolated deinit { print(self.number) }
}

func makeTwoFoos() async {
    // Trivial example that shows ARC's determinism.
    // Runs on the global default executor, not the main actor.
    precondition(!Thread.isMainThread)
    let two = Foo(number: 2)
    let one = Foo(number: 1)
    _ = consume one
    _ = consume two
    print("3")
}
```

Without isolated deinit, this is guaranteed to print 1, then 2, then 3. But with isolated deinit it might, I believe, print the numbers in any order. That's because (as I understand it) executors do not guarantee ordering, neither with respect to enqueue order nor, of course, with respect to the caller's progress. So 1, 2, 3 is possible, as is 3, 2, 1 or any other permutation. This is not true today.


(*) I do not consider reclaiming most non-memory resources in deinit sensible or workable at a reasonable scale. I have seen many systems that attempt to do so and they usually regret this choice, eventually. In many, many cases this also violates structured concurrency, because network connections/threads/... etc. cannot typically be reclaimed synchronously without blocking a kernel thread. Hence scheduling something like a network connection to be closed at a later point is a structured concurrency violation (the structure (the scope) has ended, yet we're still doing work in the background).
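For contrast, the structured alternative I'd reach for instead of deinit-based cleanup looks roughly like this; Connection and withConnection below are hypothetical, just to show the shape:

```swift
// Hypothetical connection type; the details don't matter, only the shape.
struct Connection {
    static func connect(to host: String) async throws -> Connection { Connection() }
    func send(_ bytes: [UInt8]) async throws {}
    func close() async throws {}
}

// Scoped, with-style resource management: the connection is opened and closed
// inside the same structured scope, instead of being torn down "later" from a
// deinit that outlives that scope.
func withConnection<R>(
    to host: String,
    _ body: (Connection) async throws -> R
) async throws -> R {
    let connection = try await Connection.connect(to: host)
    do {
        let result = try await body(connection)
        try await connection.close()
        return result
    } catch {
        try? await connection.close()
        throw error
    }
}
```

The close happens before the scope that opened the connection returns, so no work is left running in the background after the structure has ended.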


(**) Extremely crude program that I don't think adds very much to this discussion but just in case

It's probably pretty clear that uncontrollable memory growth can appear if the actor that something is isolated to can't catch up running the enqueued deinits, but just in case it's not: I (well, mostly claude.ai -- my SwiftUI is ~non-existent) wrote an extremely crude, unrealistic, totally constructed program to demonstrate this. In the absence of isolated deinit, I'm using DispatchQueue.main.async { ... } inside of deinit as a crude way to emulate it.

If compiled & run with swiftc -o /tmp/test test-sync-enqueue.swift && echo GO && /tmp/test then we can see how it first runs smoothly with stable memory, same after one click in "Throw Confetti" but on the second click it starts ballooning memory. When it starts ballooning memory (because it keeps enqueuing more deinit work faster than it runs) will depend a lot on your machine as well as if it's run from the command-line or from a UI app and what not. I think it depends on the priority of the main thread vs. the garbage producing thread.
