The main area where I've had problems with this model is in test code, where I'll create local variables containing objects that, during normal execution of the program, are owned by other objects and/or have singleton instances. When I have a full object graph, the dependency rules regarding strongly-referenced properties make things easier to reason about. I guess I'll just have to get into the habit of using `withExtendedLifetime` more liberally in my tests!
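For concreteness, here's a minimal sketch of the pattern I mean. The `Registry`/`Handle` types are hypothetical stand-ins for a singleton-backed API; in a test, the handle is just a local, so ARC is free to run its `deinit` after its last source-level use rather than at the end of scope:

```swift
// Hypothetical Registry/Handle types, standing in for a singleton-backed API.
final class Registry {
    static let shared = Registry()
    private(set) var activeHandles = 0

    func open() -> Handle {
        activeHandles += 1
        return Handle(registry: self)
    }

    fileprivate func didClose() {
        activeHandles -= 1
    }
}

final class Handle {
    private let registry: Registry
    fileprivate init(registry: Registry) { self.registry = registry }
    deinit { registry.didClose() }
}

func testHandleStaysOpenWhileInspecting() {
    let handle = Registry.shared.open()
    // `handle` has no later source-level use, so ARC is free to run its
    // deinit (decrementing activeHandles) before the assertion executes.
    // withExtendedLifetime pins it until the closure returns.
    withExtendedLifetime(handle) {
        assert(Registry.shared.activeHandles == 1)
    }
}
```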
As it stands today, the dead-code pass must run either before or after ARC inserts retain/release pairs. If it runs before, nothing would change if we modified the order in which ARC inserts those pairs. If it runs after, then it's clearly able to cope with eliminations in the face of existing calls to `swift_release`. It's not clear to me why changing the order of the calls to `swift_release` would inhibit the compiler's ability to identify dead code.
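For illustration (my own sketch in source terms, not actual SIL): ARC already has latitude over where the release lands, and whether a statement is dead doesn't depend on that placement.

```swift
final class Resource {
    func doWork() {}
}

func consume() {
    let resource = Resource()
    resource.doWork()
    // ARC may emit the swift_release balancing the allocation right
    // after the last use above, or defer it to the closing brace.
    // Neither placement changes the fact that the statement below has
    // no observable effect, so the dead-code pass can eliminate it
    // either way.
    _ = 40 + 2
}
```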
IMO, this begs the question. Of course we shouldn't rely on precise lifetimes in Swift, since they don't exist!
I read this and think, "What a great use-case for more-explicit lifetime control!" The fact that libraries have to build solutions around the non-deterministic¹ nature of lifetimes in Swift should be motivation for surfacing that control more explicitly. E.g., what if there were a type-level attribute that could be applied to enforce precise lifetimes? Then API authors could actually prevent some classes of improper use by auto-`close`-ing whenever the lifetime ends.
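To sketch what I'm imagining (the attribute and the `Connection` type are both hypothetical, purely for illustration):

```swift
// @preciseLifetime   <- the hypothetical type-level attribute; nothing
//                       like it exists in Swift today.
final class Connection {
    private var isOpen = true

    func send(_ bytes: [UInt8]) {
        precondition(isOpen, "send after close")
        // ... write to the underlying transport ...
    }

    deinit {
        // Under precise lifetimes, this would be guaranteed to run at a
        // predictable point (the end of the owning scope), so the type
        // could auto-close here instead of documenting "call close()
        // before letting go of the last reference".
        isOpen = false
    }
}
```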
I'm reminded of @Hoon_H's post from a couple weeks ago where they ran into a similar issue with Combine. I'm similarly unsettled by the unpredictable-by-default nature of Swift object lifetimes every time it crops up.
¹ ETA: is "non-deterministic" the right word here? I would hope that the ordering of `deinit` calls remains consistent from one run of a program to the next (though I'm not sure how the runtime is implemented).