My original example just has one task allocating and then deallocating objects. Playing with the priorities of the tasks spawned in the deinit doesn't completely solve the problem, as there is no real backstop for the growth.
I think the two-task status quo scenario you mention is different in a few ways:
First, there would need to be an await (or a lock / atomic) somewhere to synchronize the transfer of objects between tasks, say when adding to or removing from some concurrent queue. That synchronization point can serve as a place to add backpressure for the allocating task (there's a sketch of this below).
Second, the low-priority task destroying the objects has to be pulling some number of objects out of the queue and actually doing something with them first. In that case, the data accumulating in the queue is not garbage. If the task removing the items can't keep up, maybe it has the wrong priority, or you need to add backpressure for the allocating task... but that data is not garbage.
Otherwise, if this low-priority task really is the only one taking items, and it's doing nothing but calling the queue's remove() method and ignoring the return value, then that queue is definitely an accumulation of garbage. The elements should never have been added to the data structure in the first place; had they not been, each would have been destroyed right after its last use in the allocating task. This queue-of-garbage scenario more closely matches the async deinit / isolated deinit issue.
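To make the backpressure point concrete, here's a minimal sketch (not the original example; AsyncChannel comes from the swift-async-algorithms package, and Node / process(_:) are placeholder names I'm making up). The producer's await on channel.send(...) is the synchronization point from the first case, and the low-priority consumer actually does something with each element, as in the second:

```swift
import AsyncAlgorithms  // swift-async-algorithms package (assumed available)

// Placeholder types for illustration only.
final class Node: Sendable {}
func process(_ node: Node) { /* real work with the element */ }

let channel = AsyncChannel<Node>()

// Allocating task: send(_:) suspends until the consumer takes the element,
// so this task cannot run arbitrarily far ahead of the consumer.
let producer = Task {
    for _ in 0..<1_000_000 {
        await channel.send(Node())  // suspension point = backpressure
    }
    channel.finish()
}

// Low-priority task that actually uses each element, so whatever sits
// in the channel while it catches up is not garbage.
let consumer = Task(priority: .low) {
    for await node in channel {
        process(node)
    }
}
```

Any similar bounded hand-off (an actor-backed queue, a semaphore, etc.) would work just as well; the point is only that the producer has somewhere to suspend.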
Speaking of the status quo: to end up with similar unbounded memory growth today, you'd have to do something asynchronous in a deinit executed by the task:
- Kick off a new task with Task.init or Task.detached
- Call something that is async, like DispatchQueue.async
That in itself could maybe become a problem, even if these async jobs don't recursively trigger more async work (corresponding to the case where the Node's children.count < 2). I haven't experimented with this.
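For instance, something like the following (a sketch with made-up names; Resource and its buffer aren't from the original example): even a single, non-recursive asynchronous hop per deinit means the captured storage outlives the point where the object was deallocated, so a burst of deallocations becomes a backlog of pending jobs holding memory:

```swift
import Dispatch

final class Resource {
    // ~1 MB of storage, just to make the cost visible.
    let buffer = [UInt8](repeating: 0, count: 1 << 20)

    deinit {
        // One async hop per deinit: the capture list copies buffer (not self),
        // and that copy keeps the storage alive until the queued job runs,
        // even though the Resource itself is already gone.
        DispatchQueue.global(qos: .utility).async { [buffer] in
            // pretend this is cleanup work that needs the buffer
            _ = buffer.count
        }
    }
}
```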
But one way to really compound the issue would be if this one task spawns more than one other task. That could easily happen in the recursive destruction scenario. For that to happen, you would need to explicitly capture more than one unique reference to some class (unlike the Node's single children field) in these asynchronously-invoked closures. You cannot refer to it as self.children, since that captures self, and doing that in a deinit is illegal:
class BinaryTree {
    var left: Node?
    var right: Node?
    // ...
    deinit {
        // The capture list copies left and right by value; it does not capture self.
        Task.detached { [left, right] in
            // do something
        }
    }
}
Is the risk of making all of this happen implicitly, with just an async or isolated deinit, acceptable? Of course, that depends on what we gain from these deinits and what the alternative solutions are like. But this thread isn't a proposal review.
In terms of optimization goals, the real question is the impact of that problem. We know there is a risk of uncontrolled accumulation of memory and tasks. How bad can it get, and in what ways can the implementation mitigate those issues?