Retain count set to 2 and no deinit called

This rabbit hole goes deeper, but in the meantime I have two questions:

  1. Why is the deinit code never called here?
  2. Why is the returned retain count 2?

The code is below, compiled on macOS Big Sur with the latest everything as of now.

import Foundation

class Person {
    let name: String
    init(name: String) {
        self.name = name
        print("\(name) is being initialized")
    }
    deinit {
        print("\(name) is being deinitialized")
    }
}

let a = Person(name: "Jessica Doe")
print("CFGetRetainCount(a): \(CFGetRetainCount(a))")

Compiled with: swiftc /tmp/main.swift -o /tmp/main.

Output is:

Jessica Doe is being initialized
CFGetRetainCount(a): 2

Why in God's name is the code for the deinit not called here?

Second, it might be that CFGetRetainCount is dumb and prints 2 just because it doesn't like 1, or whatever, but the documentation says nothing about why it would print 2. It could even be that CFGetRetainCount itself bumps the count from 1 to 2 by grabbing a strong reference to the object while inspecting it; at least that would explain the behavior.


Make a an Optional and set it to nil when you want to release it:

var a: Person? = Person(name: "Jessica Doe")
print("CFGetRetainCount(a): \(CFGetRetainCount(a))")
a = nil

CFGetRetainCount cannot really be relied on to produce any particular value with ARC because the optimizer may move retains and releases around and remove them. However, cleanups are not generally guaranteed to run at process exit, because ending the process is sufficient to release most resources (and you could be summarily killed by signals, or by app lifecycle management on Apple OSes, so it isn't generally safe to rely on deinits to do process cleanup). If you put your code inside a function, and call the function from the top level, it will be more likely to invoke deinit. (However, in theory, we could still inline the call, see that the object lives till end of process, and drop the release nonetheless.)
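To illustrate the function-wrapping suggestion, here's a minimal sketch (the Person class mirrors the one from the original post; CFGetRetainCount is left out so the example runs on any platform):

```swift
class Person {
    let name: String
    init(name: String) {
        self.name = name
        print("\(name) is being initialized")
    }
    deinit {
        print("\(name) is being deinitialized")
    }
}

// With the strong reference scoped to a function, the release happens
// no later than the function's return, so deinit runs before the
// program continues, instead of being skipped at process exit.
func run() {
    let a = Person(name: "Jessica Doe")
    print("\(a.name) is alive")
}

run()
print("back at top level")
```

As the reply notes, even this is "more likely" rather than guaranteed: in theory the optimizer could inline the call and elide the release.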


Is there a best practice for what to do instead?

This surprises me, because it could leak resources not tracked by the operating system (e.g. a client's place-in-line reserved in a web system). Of course you should have expiration and other cleanups to prevent leaks (since crashes/freezes could also leak data), but it surprises me to see that this can't be relied on even in the happy path.


If you really want something to happen at process exit, you can try using atexit like in C, though that still only works if you actually exit normally by reaching the end of toplevel or calling exit(3), and don't end the process abnormally (or by making the _exit(2) syscall directly). For a robust system you probably need a monitor process that can clean up after dead processes or something like that, or have the process check for expired resources left behind by a previous run of the process on startup. That's not really specific to Swift.
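A minimal sketch of the atexit approach in Swift (a non-capturing closure converts to the C function pointer that atexit expects; as noted above, the handler only fires on a normal exit, not on signals or _exit(2)):

```swift
import Foundation  // atexit and exit come in via the C library

// Register a cleanup handler. atexit returns 0 on success.
// Note: the closure must not capture anything, since it has to be
// convertible to a C function pointer.
let registered = atexit {
    print("cleanup handler ran")
}
precondition(registered == 0)

print("doing work")
// Falling off the end of the top level counts as a normal exit,
// so the handler above runs after this line.
```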


So, basically, you're saying: "Here, we offer you this feature called deinitializers, which might work, or not, depending on whether the compiler feels like it."

Be aware, this isn't about the process exiting abnormally; I wouldn't expect Swift to do magic there. I just expect a language feature to work reliably. There are three possible states: it works as expected all the time (and when it doesn't, it's a bug); it works whenever the compiler decides; or it doesn't work. In any of the three cases, shouldn't that be properly documented?

From the Swift documentation:

Deinitializers are called automatically, just before instance deallocation takes place.

As you can see, there is no room for doubt there. It says "it will happen".

The issue here is that process exit can happen before instance deallocation. Relying on anything to happen before process exit is not robust, no matter what language you use.


The issue here is that process exit can happen before instance deallocation. Relying on anything to happen before process exit is not robust, no matter what language you use.

I think that's been the case forever (at least 25 years) on macOS, but what happens if a variable is explicitly set to nil before exit? I would expect deinit to be called, but I could be wrong.

Setting a variable to nil before the process ends ought to trigger a release of the previous value, yeah.

It does happen; this

var a: Person? = Person(name: "Jessica Doe")
print("CFGetRetainCount(a): \(CFGetRetainCount(a))")
a = nil

prints this:

Jessica Doe is being initialized
CFGetRetainCount(a): 2
Jessica Doe is being deinitialized

The point here is that I shouldn't need to think for the compiler. This isn't an unknown situation or something out of the ordinary. The instance goes out of scope when the application exits and should be deallocated. Swift decides not to run the deinitializer because it's freeing the resources anyway, which is lazy and wrong.

Basically, this kind of thing prevents Swift from being used for anything that requires resources to be freed at the end of execution, because Swift might decide not to free them, and, even better, might decide that without saying as much in the documentation.

I'm not aware of any languages that don't have this property, at least when running on Darwin-based OSes. In fact, there's API in the SDK to opt into an even more aggressive version of this behavior, where the kernel doesn't bother sending your process SIGTERM at all if you've marked it as being in a consistent state.


Very few languages promise to run finalizers for all objects before process exit. The vast majority of finalizer work is redundant with the exit of the process, and running them would just burn time and energy for nothing. It is therefore appropriate to expect the exceptions to explicitly opt in to something stronger. For example, in Java this is the distinction between finalize() and Runtime.addShutdownHook. Swift doesn't have a designed equivalent to the latter yet, both because demand for that has been minimal and because it's unclear what the semantics should be on all platforms.


Even this isn't really a guarantee that things will run to completion; the VM could still get killed without finishing.

The bottom line is that any program that really needs to ensure cleanup of something in all failure cases needs a separate process to do that cleanup.


Sure, and similarly the entire computer could melt down (or, more likely, get disconnected from the network), and that shouldn't leave dangling resources on other servers forever. There are levels of engineering outside of the scope of the process.


Here's Python (both 2 and 3):

# Python program to illustrate a destructor
class Employee:

    # Initializing
    def __init__(self):
        print('Employee created.')

    # Deleting (calling the destructor)
    def __del__(self):
        print('Destructor called, Employee deleted.')

obj = Employee()

Output:

Employee created.
Destructor called, Employee deleted.

It makes all the sense in the world to do it, if only because the language advertises it. Swift goes out of its way to give the programmer the coziness of not having to write a main(); Swift will do that for you. But when it comes to calling the deinit you actually wrote, well, Swift thinks that's not important.

And the thing with a general-purpose language is that it has to operate correctly, or at the very least as advertised. If the language documentation says "Swift will run deinitializers when the object is deallocated", it doesn't matter what the other reasons are, however true they may be: users will expect the deinitializer code they wrote to run. What happens when it doesn't is simple: users lose whatever trust they had in the language implementation.

When a process exits, the objects are not deallocated. Here's a trivial C++ program that behaves the same way as Swift:

#include <stdio.h>

class Test {
public:
        Test() {
                printf("init\n");
        }
        ~Test() {
                printf("deinit\n");
        }
};

int main(int argc, char **argv) {
        Test *t1 = new Test(); // heap-allocated and never deleted: its destructor never runs
        Test t2;               // stack-allocated: its destructor runs when main returns
}

The output is

init     <- t1
init     <- t2
deinit   <- t2

And that's why we want better things than C/C++ (that's not really it, or all of it, but the point is the same).
The fact that others get it wrong doesn't mean Swift has to, nor is it a valid reason to do it wrong.

Also, that C++ code is a completely different demon. The first object is allocated on the heap, and it does have its destructor called. The second is allocated on the stack, and it doesn't. Who knows what the rules for destructors are in C++; this might very well be in accordance with them.


The opposite, actually.

And again, Swift doesn't deallocate anything on program exit, the operating system does. Since there's no deallocation, deinit isn't invoked.


IMO Swift should formalize synchronization points (implied by 'await' or I/O operations) and guarantee that all outstanding cleanups have run before continuing. If you want to ensure your deinits are called before returning from a function (including main), then you would just stick an explicit synchronization point there.
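No such synchronization point exists in Swift today; the closest manual approximation is an explicit scope, which pins the release to a point of your choosing (a sketch; the Resource class here is hypothetical):

```swift
final class Resource {
    deinit { print("resource released") }
}

// A do block introduces a scope: the strong reference to r ends no
// later than the closing brace, so deinit runs before the next line
// executes, regardless of when the process eventually exits.
do {
    let r = Resource()
    print("using \(r)")
}
print("past the scope")
```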


This strikes me as a semantics debate about whether unmapping pages in a process after it exits counts as "deallocating", which I could see going either way on.

As an aside: would it be interesting to do cleanup like this in script mode but not in other modes? I feel like that might make sense.
