Right, but the dead-code pass can't eliminate those objects because there are calls to swift_release. This is a common problem in compilers, which is why there usually isn't just one dead-code pass, but several: optimisations enable each other.
I'm sympathetic to this position, but in practice I think it's best to try to train yourself out of reliance on object lifetimes. Where possible, avoid code that is brittle in the face of deallocations moving around. This means trying not to put logic in your own deinits.
This is part of why SwiftNIO has so many objects with explicit start and stop/shutdown/close functions: we don't want these lifetime bugs to manifest. If you need to call close to close one of our resources, you'll automatically keep it alive (how else could you call close?).
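A minimal sketch of that pattern (hypothetical names, not actual SwiftNIO API): because `close()` is a method, calling it requires a live reference to the object, so ARC cannot have released it earlier.

```swift
// Hypothetical sketch of the explicit start/close pattern. Calling close()
// necessarily keeps the object alive up to that point.
final class Connection {
    var isOpen = false

    func start() {
        isOpen = true
    }

    func close() {
        // Reaching this line requires a live reference to self,
        // so ARC cannot have released the connection earlier.
        isOpen = false
    }

    deinit {
        // Intentionally no logic here; cleanup is explicit via close().
        precondition(!isOpen, "Connection deallocated without close()")
    }
}

let conn = Connection()
conn.start()
// ... use the connection ...
conn.close() // self must be alive here, regardless of optimisation
```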
Sometimes you'll use code you don't own that has logic in its own deinits. In those cases, your best bet is, if you need something to live within a single function scope, to use withExtendedLifetime to ensure it does. However, if you only need to tie one lifetime to another, you can store the object in a stored property on the object whose lifetime you want to tie it to. In practice, Swift will never release these objects before the ones holding them.
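The two techniques above look like this in practice (hypothetical names; `Resource` and `Owner` are illustrative only):

```swift
// A class whose deinit we don't control.
final class Resource {
    deinit { print("Resource deinitialised") }
}

// 1. Keep something alive for a single function scope:
func useResource() {
    let resource = Resource()
    withExtendedLifetime(resource) {
        // `resource` is guaranteed to stay alive until this closure
        // returns, even if the optimiser sees no further use of it.
        doWork()
    }
}

func doWork() { /* ... */ }

// 2. Tie one lifetime to another via a stored property:
final class Owner {
    // Swift will not release `resource` before the Owner holding it.
    let resource = Resource()
}
```

Note that `withExtendedLifetime` also forwards its closure's return value, so it can wrap existing expressions without restructuring the code around them.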
The main area where I've had problems with this model is in test code, where I'll create local variables containing objects that, during normal execution of the program, are owned by other objects and/or have singleton instances. When I have a full object graph, the dependency rules regarding strongly-referenced properties make things easier to reason about. I guess I'll just have to get into the habit of using withExtendedLifetime more liberally in my tests!
As it stands today, the dead-code pass must either run before or after ARC inserts retain/release pairs. If it runs before, nothing would change if we modified the order in which ARC inserts said pairs. If it runs after, then it's clearly able to cope with eliminations in the face of existing calls to swift_release. It's not clear to me why changing the order of the calls to swift_release would inhibit the compiler's ability to identify dead code.
IMO, this begs the question. Of course we shouldn't rely on precise lifetimes in Swift, since they don't exist!
I read this and think, "What a great use-case for more-explicit lifetime control!" The fact that libraries have to build solutions around the non-deterministic[1] nature of lifetimes in Swift should be motivation for surfacing that control more explicitly. E.g., what if there were a type-level attribute that could be applied to enforce precise lifetimes? Then API authors could actually prevent some classes of improper use by auto-close-ing whenever the lifetime ends.
I'm reminded of @Hoon_H's post from a couple weeks ago where they ran into a similar issue with Combine. I'm similarly unsettled by the unpredictable-by-default nature of Swift object lifetimes every time it crops up.
[1] ETA: is "non-deterministic" the right word here? I would hope that the ordering of calls to deinits remains consistent from one run of a program to the next (though I'm not sure how the runtime is implemented).
Is it really unpredictable? As explained, the compiler is free to optimize away or release a variable after its last concrete use. For something like Combine this is easy to manage: every subscription is represented by an AnyCancellable value, and you just need to keep those alive for as long as you need the observation. In my case I usually subclass XCTestCase with API to store the tokens for me during test methods. For other tests, referencing the value in your assertions works fine (or referencing something that keeps the other value alive), or, as in the Combine case, providing special storage for values you need to keep alive.
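A sketch of that subclassing approach (hypothetical names throughout): the base class owns a set of AnyCancellable tokens, so subscriptions stay alive for the duration of each test method and are torn down afterwards.

```swift
import Combine
import XCTest

// Hypothetical base class that stores Combine tokens for the test's duration.
class TokenStoringTestCase: XCTestCase {
    private var cancellables: Set<AnyCancellable> = []

    /// Keeps a subscription alive until tearDown.
    func keepAlive(_ token: AnyCancellable) {
        cancellables.insert(token)
    }

    override func tearDown() {
        cancellables.removeAll()
        super.tearDown()
    }
}

final class CounterTests: TokenStoringTestCase {
    func testPublisherDelivers() {
        var received: [Int] = []
        let subject = PassthroughSubject<Int, Never>()
        // Without keepAlive, the AnyCancellable could be released (and the
        // subscription cancelled) before send(_:) runs.
        keepAlive(subject.sink { received.append($0) })
        subject.send(1)
        subject.send(2)
        XCTAssertEqual(received, [1, 2])
    }
}
```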
As an alternative to guaranteed lifetimes, it might be interesting to consider an annotation which would cause the compiler to warn about immediate release, similar to how the warning for weak variables works. That way the developer can't just shoot themselves in the foot by using _ = to stop a warning.
By unpredictable-by-default I mean that (AFAICT) the only ways for me to effectively reason about lifetimes are to a) store a reference in an object whose lifetime is otherwise known, which just moves the reasoning one level up, b) use withExtendedLifetime, or c) introduce a series of known dependencies via print or other obviously "observable-effect" calls to force lifetime extension. It's not even obvious to me that referencing the values in assertions is sufficient: if the compiler can prove that the reference has no side effects and that the condition is false, could the call be eliminated altogether?
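To make the concern concrete, here is a minimal sketch (hypothetical names): `tracker`'s last concrete use is the first print, so the compiler is free to release it, running its deinit, before "work finished" prints. Whether it actually does depends on the optimisation level, which is exactly the unpredictability being discussed, so no particular output ordering is guaranteed.

```swift
final class Tracker {
    deinit { print("Tracker deinitialised") }
}

func run() {
    let tracker = Tracker()
    print("work started: \(ObjectIdentifier(tracker))")
    // ... long-running work that never mentions `tracker` again ...
    print("work finished") // deinit may already have run by this point
}

run()
```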
IMO, it's not a great solution to have to guess what optimizations the compiler will perform in order to reason about lifetimes. It's unpredictable in the sense that it's impossible to give a definitive answer to the question "what does the program in the original post print when run?". The word "concrete" in your post is hiding a lot of complexity.
This is why my recommendation remains to stop trying to reason about lifetimes. The answer to "what does this program print" should not depend on the lifetime of any one object.
The less you reason about lifetimes, the easier time you'll have of things.
Sure, there is scope for interesting work in this area. However, I think move-only types are the most immediately-promising solution to this problem. Move-only types would have much clearer ownership semantics because they are always singly-owned. This property makes reasoning about their lifetime much clearer, and therefore makes it much more valuable to clarify the remaining uncertainty in their lifetimes.
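For illustration, a sketch of what this looks like with Swift's noncopyable types (SE-0390, Swift 5.9+; `Descriptor` and `closeLog` are hypothetical names): because a `~Copyable` value has exactly one owner, its deinit runs at a well-defined point, namely where its single ownership ends.

```swift
// Log of cleanup events, used here instead of real resource cleanup.
var closeLog: [String] = []

struct Descriptor: ~Copyable {
    let fd: Int32

    consuming func close() {
        closeLog.append("explicit close of fd \(fd)")
        discard self // end the value's lifetime now; deinit will not run
    }

    deinit {
        closeLog.append("implicit close of fd \(fd)")
    }
}

do {
    let d = Descriptor(fd: 3)
    d.close() // ownership ends here; using `d` again is a compile error
}

do {
    _ = Descriptor(fd: 4) // ownership ends immediately; deinit runs here
}
```

Unlike a class, there is no question of when the last reference goes away: there is only ever one owner, so both the explicit `close()` path and the implicit deinit path fire at statically knowable points.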
This is also the model adopted by Rust: move-only types in combination with RAII.