I want to discourage making assumptions about the order of class deinitialization because it is subtle and problematic.
deinit {
    print("before setting z to nil") // <-+
                                     //   |
    z = nil                          // <-+ property assignment is ordered with print
                                     //   | deinit occurs "after" assignment
    print("after setting z to nil")
}
Consider possible output A:
before setting z to nil
Z deinit called
after setting z to nil
and possible output B:
before setting z to nil
after setting z to nil
Z deinit called
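To make the example reproducible, here is a minimal, self-contained reconstruction of the snippet. Only Z is named in the discussion; the containing Container class and variable names are assumptions, and an events array stands in for print so the ordering can be checked:

```swift
// Hypothetical reconstruction; only `Z` comes from the discussion above.
var events: [String] = []

class Z {
    deinit { events.append("Z deinit called") }
}

class Container {
    var z: Z? = Z()
    deinit {
        events.append("before setting z to nil")
        z = nil // releases the last strong reference to the Z instance
        events.append("after setting z to nil")
    }
}

var container: Container? = Container()
container = nil // runs Container's deinit, which releases z along the way
```

Because the deinit body holds no local reference to the Z instance, the release in z = nil is the last one, and output A is the ordering ARC permits here.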
The ARC specification says:
By default, local variables of automatic storage duration do not have precise lifetime semantics. Such objects are simply strong references which hold values of retainable object pointer type, and these values are still fully subject to the optimizations on values under local control.
Only output A is legal since no local reference exists, but seemingly insignificant code changes could change the output:
deinit {
    print("before setting z to nil") // <-+
    let tmp = z                      //   | local assignment is unordered
    z = nil                          // <-+ property assignment is ordered with print
                                     //   | deinit occurs "after" assignment
    print("after setting z to nil")  //   | ??
}
Now the ARC rules give you either output A or output B. Swift's implementation will give you output B, but with subtle caveats. Value types, like arrays, have minimal local lifetimes. And "side effects" that can only be observed synchronously don't carry the same weight as those that can be observed asynchronously, like an I/O operation. Programmers who want to rely on ordered deinitialization (precise lifetime semantics) should do so explicitly, using either withExtendedLifetime or ~Copyable types.
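As a sketch of opting into precise lifetimes, the following uses withExtendedLifetime to pin an object across some work (the Resource class and the events log are illustrative, not from the thread):

```swift
// Illustrative only: an events array records ordering so it can be checked.
var events: [String] = []

class Resource {
    deinit { events.append("resource deinit") }
}

func process() {
    let resource = Resource()
    withExtendedLifetime(resource) {
        // `resource` is guaranteed alive until this closure returns, so its
        // deinit cannot be reordered before the work recorded here.
        events.append("work done")
    }
    // After the closure returns, ARC may release `resource` at any point
    // before the end of the function.
}

process()
```

A ~Copyable type that declares a deinit gets a similarly well-defined destruction point: the deinit runs when the value is consumed or goes out of scope, rather than wherever ARC chooses to place the last release.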
Arguably the compiler must always behave like this (observing the deinit of Z when it is set to nil) because deinits can have side effects.
That would arguably be a desirable language behavior, but ARC does not generally give you an ordering between side effects in the main code and deinit side effects. When local references exist, class deinitialization is effectively unordered with respect to side effects in the main code. Deinitialization does run synchronously; its effects are atomic relative to the main code, but the point of execution is not defined relative to other side effects. What the deinitializer does is generally irrelevant, because there is no reliable way for the compiler to know what a class deinitializer does (the implementation does have a set of conditions and exceptions, but we can't make such broad statements about deinitialization side effects).
I think it will only reorder deallocations for types with trivial deinits, for this reason (and it's also why not having a 'manual' deinit is generally preferable, for performance).
I don't know of any semantic or performance difference between an explicit ("manual") deinit and a generated one.