Hi @Nickolas_Pohilets, thanks for tackling this! This part of the design in SE-327 has been a sore point for me and I've also considered a redesign for deinits along this line of thinking.
First I want to say that this pitch looks good overall. I don't see any issues with it as it stands, because as others have mentioned, the point at which a `deinit` will be executed is not guaranteed by the language.
But I do think that we will want to evolve this and/or add the option to have a fully-fledged async `deinit` that one must opt into, because on its own, a synchronous but isolated deinit won't fully solve the problem of not having access to an instance's isolated data in order to deinitialize it. However, the synchronous deinit here does appear to permit an efficient and simple implementation, so I think there's value in having it as an option in the overall solution.
The main corner case I can think of at the moment is the situation where multiple properties have different isolations:
```swift
class A {
    @MainActor var one: NonSendableType = .init()
    @SomeActor var two: NonSendableType = .init()

    @MainActor deinit {
        _ = one.tearDown() // OK with deinit isolated to MainActor
        _ = two.tearDown() // error: cannot access property 'two' with a non-sendable
                           // type 'NonSendableType' from main-actor isolated deinit
    }
}
```
Since a function can only be isolated to one actor, we can't write a synchronous, isolated `deinit` that allows unrestricted access to both of these properties, because their types are not Sendable. The implicit `deinit` that would otherwise be generated by the compiler is an exception: it does not need to worry about isolation, because we know exactly what it does with the properties and can "prove" that it is safe. User-defined deinits are the only trouble. So I think the only way to completely fix this issue is to also allow for an async `deinit`.
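To make that concrete, here's roughly what I have in mind, written with hypothetical `deinit async` syntax (nothing like this exists today, and `NonSendableType`, `SomeActor`, and `tearDown()` are just the placeholders from the example above):

```swift
class A {
    @MainActor var one: NonSendableType = .init()
    @SomeActor var two: NonSendableType = .init()

    // Hypothetical syntax: because this deinit can suspend, it can hop
    // to each property's isolation domain before touching that property.
    deinit async {
        _ = await one.tearDown() // hop to the main actor for 'one'
        _ = await two.tearDown() // hop to SomeActor for 'two'
    }
}
```

The cost, of course, is that deallocation now involves at least one suspension per isolation domain, which is exactly what makes the efficiency question below matter.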
Thus, my immediate concern is how to implement async deinits efficiently, but without too much complexity. For example, if a large data structure full of objects with async deinits were to be deallocated, we want the ability to optimize this at runtime so that we don't flood the task scheduler. I think it would be good to either have this in the initial implementation, or make the implementation extensible enough to allow for those optimizations in the future without causing an ABI break.
I have some unpolished ideas for how we could make this efficient (e.g., batching multiple deinits onto a single task), but I'm curious if you've thought about this?
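For what it's worth, here's the rough shape of one of those ideas, sketched as a user-level actor purely for illustration; `DeinitQueue` and the closure-based representation are invented here, and a real version would presumably live in the runtime and would need to respect each deinit's isolation:

```swift
// Illustration only: funnel async deinit bodies through one draining
// task instead of spawning a separate task per deallocated object.
actor DeinitQueue {
    static let shared = DeinitQueue()

    private var pending: [@Sendable () async -> Void] = []
    private var isDraining = false

    func enqueue(_ body: @escaping @Sendable () async -> Void) {
        pending.append(body)
        guard !isDraining else { return } // the active drain task picks it up
        isDraining = true
        Task { await self.drain() }       // one task services the whole batch
    }

    private func drain() async {
        // Runs until the queue is empty; deinits that arrive while a body
        // is awaited are appended to 'pending' and drained by this same task.
        while let body = pending.popLast() {
            await body()
        }
        isDraining = false
    }
}
```

With something like this, when a burst of objects dies at once, only the first enqueue spawns a task and the rest just append, which keeps the scheduler from being flooded. I'm sure there are better designs; I'm mostly curious whether the proposed implementation leaves room for this kind of evolution.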