Deinit called on global object?

I have the following code:

class HasDeinit
{
   var i:Int = 0
   
   deinit
   {
      print( "HasDeinit was destroyed" )
   }
}
var ho = HasDeinit()
print(ho.i)

It prints "0" as expected, but I would have thought that when the main program quits it would call deinit as well (printing "HasDeinit was destroyed"). It does not.

If I change this to:

class HasDeinit
{
   var i:Int = 0
   
   deinit
   {
      print( "HasDeinit was destroyed" )
   }
}

func x()
{
   let ho = HasDeinit()
   print(ho.i)
}
x()

I get the expected output:

0
HasDeinit was destroyed

when ho goes out of scope in function x.

Why the difference?

2 Likes

This might have something to do with the scope of top-level code not being fully observable when it ends.

The following works.


@main
enum Test {
    static func main() {
        let ho = HasDeinit()
        print(ho.i)
    }
}

Personally, I never liked the idea of having top-level executable code in a program. I think it should be abolished. All programs should declare a unique entry point, where the execution starts.

When program execution terminates after print(ho.i), there is no need to clean up memory (which is what release is for). The OS is going to reclaim all the memory used by the app anyway.

In the second example, ARC has inserted a call to release at the end of the function x(), which in turn executes deinit. The program terminates after that code runs.

4 Likes

See also previous discussions like Retain count set to 2 and no deinit called

tl;dr: deinits are not called at the end of a process' lifetime, because the cleanup is, overwhelmingly often, unnecessary and wasteful.

4 Likes

Basically what I'm hearing here is that Swift's handling of explicit deinit methods is defective.

I understand that not cleaning up memory when the program is quitting is unnecessary, as the system will reclaim all memory a process uses when that process terminates. However, object destruction (to use init() terminology) is a two-phase process: during phase one, destruction should call an explicit deinit method, if one is present. When the explicit deinit method finishes, the code can then proceed to phase two, where it deallocates memory, deals with reference counters, and so on.

I can understand bypassing the second phase, as the Swift run-time system has complete knowledge of what it has allocated and so on; Swift can short-circuit this if it knows that terminating the program doesn't require that work to be done.

However, Swift must not short-circuit the first phase. The explicit deinit method cleans up resources (and whatever) that the Swift run-time system knows nothing about. Allow me to provide two examples:

  1. The class has open files that need to be cleaned up and closed (a common example of deinit on websites). Sure, the system will close any open files when the program quits, but the object instance might be caching some data that needs to be written to the file before closing it. Swift's run-time system will have no way of knowing this. If it doesn't call deinit, the file could be left in an inconsistent state.

  2. Consider an embedded system (for example, a Swift application running on a Raspberry Pi). An object may have registered the use of certain system hardware resources (such as I/O pins on the GPIO header, or the exclusive use of some port expander such as an MCP23017 device). When the program quits, the object should run the deinit method in order to:
    a. return hardware system resources
    b. Put output pins in a high-impedance state (if applicable)
    c. Set output pins to 0 (if a high-impedance state is unavailable)
    d. Set generic I/O pins to input
    e. Disable any enable outputs
    f. etc. etc.

Once again, Swift's run-time system cannot possibly know about these resources and how to deal with them. The only practical way to deal with this is to complete phase one of the object destruction process.
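The first example above can be sketched like this (all names here, such as BufferedWriter, are illustrative, not a real API): data cached in the object only reaches the file if flush() runs, whether from deinit or an explicit call.

```swift
import Foundation

// Sketch of the open-file pitfall from point 1: buffered data only
// reaches the file when flush() runs.
final class BufferedWriter {
    private var buffer = ""
    private let path: String

    init(path: String) {
        self.path = path
    }

    func write(_ text: String) {
        buffer += text  // cached in memory, not yet on disk
    }

    func flush() {
        try? buffer.write(toFile: path, atomically: true, encoding: .utf8)
    }

    deinit {
        flush()  // never runs if the process exits from top-level code
    }
}
```

If deinit is skipped at process exit, the buffered data is silently lost, which is exactly the inconsistent-state concern described above.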

Someone suggested never executing code at the main level. That's a programming style issue (and not a bad one), but it only provides a work-around for this defect in Swift. Someone might also suggest that I write a "destroy()" method and call it manually if object destruction is so important (another work-around is to assign the object instance a new value before the program quits, to force a call to deinit); however, that's a kludge to work around this defect.
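For what it's worth, the forced-release work-around mentioned above can be sketched like this in top-level code, by declaring the variable as an optional so the release is explicit:

```swift
class HasDeinit {
    var i: Int = 0
    deinit {
        print("HasDeinit was destroyed")
    }
}

var ho: HasDeinit? = HasDeinit()
print(ho!.i)
ho = nil  // drops the last reference, so deinit runs before the program exits
```

This prints 0 followed by "HasDeinit was destroyed", but it is a manual kludge, per the above, rather than a fix.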

I suspect I should probably figure out how to file a defect notification on this. Swift's main-level objects really should call deinit (and complete the first phase of object destruction), even if the system doesn't execute the second phase of the destruction process.

2 Likes

I don't think this logic holds as long as people reclaim things other than memory in deinit. I know there are downsides to that approach, and generally (especially in the asynchronous Swift Concurrency world) people should prefer with-style functions and the swift-service-lifecycle package for resource management, but it's still a widespread pattern to close file descriptors and do more significant cleanups in deinit. I could totally imagine people managing a child process's lifecycle with an object that cleans up the child process in deinit.

Either way, it feels like one more bug with top-level code to me. That's why I'd recommend using @main entrypoints instead.
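As a sketch of the with-style pattern mentioned above (Resource and its members are placeholder names, not a real API), the resource is released on every exit path, including thrown errors, instead of relying on deinit and ARC:

```swift
// Placeholder resource type; a real one might wrap a file descriptor
// or a GPIO pin claim.
struct Resource {
    static func acquire() -> Resource { Resource() }
    func release() { /* close descriptor, free hardware pin, etc. */ }
}

// The with-style helper: acquisition and release bracket the caller's
// closure, so cleanup cannot be forgotten or skipped by a thrown error.
func withResource<T>(_ body: (Resource) throws -> T) rethrows -> T {
    let resource = Resource.acquire()
    defer { resource.release() }
    return try body(resource)
}
```

Callers write `withResource { r in ... }`, so the lifetime of the resource is exactly the scope of the closure rather than whatever ARC decides.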

3 Likes

As was shown, deinit is called when you use a @main entry point. What you've demonstrated is one of the known bugs with top-level code when @main is not used.

There's been plenty of back-and-forth discussion on these points here on the forums in threads like the one I linked to above, so I won't do a great job of summarizing the points fully, but: tying crucial cleanup to object lifetimes specifically is both typically insufficient and fraught with peril.

  1. Object lifetimes are subject to optimization, and retains and releases may not happen where you expect — and may change over time. This may have a significant effect on when, how, and in what order cleanup might actually happen
  2. What happens if your object is never deinited anyway? Whether you have an accidental retain cycle, or your process crashes — you still need to be able to handle cleanup in a deterministic and holistic way. This typically requires significantly more thought than a deinit can offer

Typically, the recommendation is to figure out what sort of cleanup you should be doing, how, and how to make it resilient, rather than putting that cleanup in a deinit and relying on memory management to take care of triggering cleanup.

That being said, I don't have a stake in this whatsoever, just repeating what others have said. I personally think that calling this "defective" is a bit extreme, but filing a bug report is completely appropriate! Others may feel more strongly one way or another.

3 Likes

That's exactly why my previous post had so many conditions: I recommended using with-style methods and the swift-service-lifecycle package for unified resource management, and I used "either way" phrasing to keep the discussion of what people put in deinit bodies out of scope.

Either way, if behavior between top-level code and @main is different, I'd consider this a bug. In this specific case, IMO behavior of @main (calling deinit on process shutdown) is the correct one, regardless of what people think arbitrary deinit should or shouldn't do in its body.

4 Likes

We're in agreement — I was just trying to summarize why those recommendations are typically made.

I agree that consistency here would be an improvement. If deinits are called at the end of a @main entry point only because the code happens to be inside of a function with a clear end-of-scope, then it's something to consider. As it stands, so much about top-level code is "magical" (and not necessarily for the better) in many ways that I know the project has wanted to address.

If you really need RAII you can use a non-copyable struct with a deinit.
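A minimal sketch of that approach, assuming Swift 5.9+ and illustrative names (PinGuard is not a real API):

```swift
// A non-copyable wrapper whose deinit runs deterministically when the
// value goes out of scope: the RAII pattern mentioned above.
struct PinGuard: ~Copyable {
    let pin: Int

    init(pin: Int) {
        self.pin = pin
        print("claimed pin \(pin)")
    }

    deinit {
        print("released pin \(pin)")
    }
}

func blink() {
    let guarded = PinGuard(pin: 17)
    print("driving pin \(guarded.pin)")
}  // guarded's deinit runs here, at the end of its scope
```

As the later replies note, even this deinit is skipped when the process itself exits, so it helps with scoped lifetimes, not with process teardown.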

I guess this doesn’t bother me so much because there are many ways a program stops that aren’t “reach the end of main”: exit(0), SIGKILL and other force-quits, Sudden Termination mode on Macs, suspension (sleep) or loss of power. That said, just because a server can deal with e.g. a socket suddenly dropping or going silent, or a local database’s write-log not getting merged into the main file, doesn’t mean it’s ideal, so the point stands.

1 Like

I filed a bug report. The fact that there are workarounds, or the fact that there are situations where this won't help, is irrelevant. The behavior should be consistent between a function going out of scope and the main program going out of scope.

Further, I would hope that something like "exit(0)" also cleans things up before terminating the code (as this isn't exactly an abnormal termination). Obviously, some things can't be helped; but under normal circumstances the deinit method really should be called.

It is intentional that cleanups do not run at the end of top-level code. I would also not count on them always running on the way out of known top-level entry points like main, since we could choose to avoid them in those cases as well. It's best to do these sorts of operations explicitly.
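One explicit shape for this, sketched with placeholder names (Device, openDevice, and so on are not from the thread), is to scope setup and teardown with defer in an ordinary function called from the entry point, rather than leaving cleanup to ARC or process teardown:

```swift
// Placeholder setup/teardown; a real program would reset GPIO pins,
// flush files, etc. here.
struct Device {}
func openDevice() -> Device { Device() }
func resetDevice(_ device: Device) { print("device reset") }
func run(_ device: Device) { print("running") }

// Called from main (or an @main type); the cleanup is tied to this
// scope explicitly rather than to an object's deinit.
func runWithCleanup() {
    let device = openDevice()
    defer { resetDevice(device) }  // runs on every normal exit path
    run(device)
}
```

Note that defer still won't run on exit(0) or a crash; it only makes the normal-path cleanup explicit and visible.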

8 Likes

In practice, every time I've seen a program that went out of its way to do this sort of cleanup, it a) was a performance problem (e.g. during system reboots, which yes, we do care about the performance of), and/or b) eventually caused correctness problems. A particularly thorny and often-problematic issue with them is what order to run them in.

My sample is admittedly biased because I tend to see such things as a result of bug reports, but overall I'm firmly on team "atexit handlers are usually a mistake".

For people writing macOS programs, I'd strongly encourage going even further and opting in to Sudden Termination via enableSuddenTermination | Apple Developer Documentation. This allows the kernel to terminate the process doing even less cleanup work than usual.
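For reference, opting in is a single Foundation call (macOS-only, so it is guarded here to keep the sketch compiling elsewhere):

```swift
import Foundation

#if os(macOS)
// Tell the kernel it may terminate this process without the usual
// cleanup work; balance with disableSuddenTermination() while the
// process has unsaved state.
ProcessInfo.processInfo.enableSuddenTermination()
#endif
```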

7 Likes

I hope that the ~Copyable rules were not written to imply one can use them for RAII, because that would force the language to run deinitializers unnecessarily—correct behavior that people in this thread have incorrectly described as a bug.

Yeah, ~Copyable types have more reliable ownership and lifetime rules, but these still don't guarantee anything about process exit; when we know the process is ending, such as at the end of top-level code or before a Never-returning function like exit, we don't bother cleaning anything up.

4 Likes

Running deinit before a never-returning function isn't necessarily correct anyways, e.g. if that function is something like UIApplicationMain. (That function is actually declared as returning Int32 but you get my meaning.)

2 Likes

I fully agree with David here. One of the many lessons of server-side development is that, while you can invest enormous effort into trying to prevent some kinds of faults (network partitions being the most notable), they will eventually occur. While reducing the rate is valuable, the urgency of doing that goes way down if you build a system that is resilient to the failure.

In this case, if you have cleanup that must happen when a process terminates, it is only possible for that cleanup to be done outside the process in question. Taking the Raspberry Pi example, the right thing to do is to have a monitoring process that can put the system back into a known-good state. If the system is embedded enough, that will likely mean rebooting and relaunching the process, which will, during its setup, ensure the system is in a known-good state.

8 Likes

Not to mention, there are also global variables outside of top-level code; since those are initialized lazily on first access, deterministic destruction of their values would require a bunch of extra book-keeping so that their destructors could then run in some order on exit. Those destructors could trigger further lazy initialization of global variables, or observe already-destroyed globals, etc. It would quickly become unworkable.

2 Likes