Deinit called on global object?

It’s worth pointing out that C++ has this exact problem.

2 Likes

It’s worth pointing out that C++ has this exact problem.

No, it doesn't.
Consider the C++ code that I've compiled and run on macOS using clang:

#include <stdio.h>

class withDestructor
{
	public:
	int i;

	withDestructor( int initialI )
	{
		i = initialI;
	}

	~withDestructor()
	{
		printf( "destroying \%d\n", i );
	}
};

withDestructor wd = withDestructor(1);

int main( int argc, char **argv )
{
	withDestructor wd2 = withDestructor(2);
}

Compilation and output:

iMac-Pro-5:Data$ g++ wd.cpp
iMac-Pro-5:Data$ a.out
destroying 2
destroying 1

C++ (clang in this case) calls the destructor for both main's local instance and the global instance.

Yes, I understand that C++ (and most other languages) won't guarantee that the destructor gets called on abnormal program termination. Any reasonable programmer is willing to accept that the destructor won't be called in that situation (and anyone who has thought it through carefully would probably insist that the destructor not be called in that case, because the system is in an inconsistent state and executing the destructor code could make things worse).

However, this C++ code seems to work fine for a normal program termination. That's all I'm asking for here in Swift. Note that defer {...} works properly. All I'm asking is that deinit should do the same thing as defer {...} (the same thing as atexit({...}) would be even better, but that's probably pushing my luck).
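To make the contrast concrete, here is a minimal main.swift sketch (the Cleanup class is hypothetical): in current Swift the top-level defer runs at exit, but the global's deinit does not.

// main.swift — minimal sketch; Cleanup is a hypothetical class
final class Cleanup {
    deinit { print("deinit ran") }   // not run for the global at exit
}

let global = Cleanup()               // created eagerly by top-level code

defer { print("defer ran") }         // runs when top-level code finishes

print("doing work")

// Typical output:
//   doing work
//   defer ran
// "deinit ran" never appears for the global.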

There’s a reason the problem has been named the Static Initialization Order Fiasco. And since destruction happens in the opposite order of initialization, the Fiasco plays out again in reverse. Add that to paging in otherwise unused code that slows down the whole system while the user is trying to quit your app, and you arrive at Swift’s design.

A Swift program that relies on deinit to run for correctness is not a correct Swift program.

3 Likes

Don't use top level code. I've simply been doing the following in all of my programs:

func main() {
  let x = Foo(...)
  ...
  // x deinit called here
}

main()

It avoids using top level code (which has weird optimization rules) and allows for deinits to be called like usual at the "end of program termination".
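A minimal runnable version of that pattern, with a hypothetical Foo class standing in for the real program (and with the caveat below still applying):

final class Foo {
    deinit { print("Foo deinit") }   // runs when the last strong reference goes away
}

func main() {
    let x = Foo()
    print("working with \(x)")       // stand-in for the real program
}                                     // x is released here; in practice "Foo deinit" prints

main()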

Again, you cannot and should not rely on your deinits executing at the end of main.

1 Like

This isn't main, this is $s6output4mainyyF 🙂

And if the compiler can prove that function tail calls into _exit, it is correct to skip all object teardown.

2 Likes

Not to mention, there are also global variables outside of top-level code; since those are initialized lazily on first access, deterministic destruction of their values would require a bunch of extra book-keeping so that their destructors could then run in some order on exit. Those destructors could trigger further lazy initialization of global variables, or observe already-destroyed globals, etc. It would quickly become unworkable.
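As an illustrative sketch (assuming Expensive and shared live in a file other than main.swift, so they are not top-level code):

// Globals.swift — globals outside top-level code are initialized lazily
final class Expensive {
    init() { print("Expensive initialized") }
    deinit { print("Expensive deinitialized") }   // not run at process exit in current Swift
}

let shared = Expensive()     // nothing happens at program startup

func firstUse() {
    _ = shared               // "Expensive initialized" prints here, on first access
}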

I don't buy the "efficiency" argument at all. Anyone who has looked at the machine code that Swift generates is often appalled at all the extra code the compiler is emitting to do things like ensure memory safety. I'm not arguing that Swift shouldn't do these things (I can write C++ code if I don't want such checking), however, arguing about extra "book-keeping" makes no sense to me in the Swift context.

Further, I'm not asking for Swift to automatically destroy every global object (and super-global object) created by the run-time system. I'm only asking that, on normal program termination, it execute the deinit method of a class. I don't care if Swift doesn't actually deallocate the memory associated with that object, and I don't care if it doesn't clean up any other resources it created or allocated behind my back when setting up that object. But if I've made the effort to write a deinitializer, the system should execute that deinitializer after the program is done using the object and before the program terminates.

I can't see how this is any more book-keeping than Swift already does for objects that a program creates in some local scope. If Swift can manage defer {} in global-level code, it should be able to handle the book-keeping to call any outstanding deinitializers.

After having thought through all the responses in this thread, I'm also convinced there should be something like C++'s delete that allows you to force the invocation of the deinit method (in a synchronous fashion, so that you're guaranteed that deinit has run before the system executes the next statement). Again, this does not mean that the run-time system has to deallocate storage and other resources, only that the code within the deinit method runs.

I've been using the following workaround:

class someClass
{
    ...
}

var sc: someClass! = someClass(...)
 .
 .
 .
sc = nil

However, this scheme has a couple of problems:

  1. I have to make sc a mutable variable; I can't use let here.
  2. I'm not sure Swift guarantees sc.deinit() will be called at this point (so far, it always has been).

It would be nice to state the equivalent of

delete sc

and know that the deinit() method has been called.
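For concreteness, a minimal runnable version of this workaround (names are illustrative):

final class SomeClass {
    deinit { print("SomeClass deinit") }
}

var sc: SomeClass! = SomeClass()
// ... use sc ...
sc = nil        // releases the last strong reference; in practice
                // "SomeClass deinit" prints at this point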

That's a workaround, not the proper solution to the problem.

Deinits are simply never going to be called on globals, which is what you're doing with top-level code. Don't do it.

You mean that same paging that has to happen when someone runs your app? Again, I do not buy the "efficiency" argument here. If you have an application that is performing poorly when it executes because it ran a bunch of deinit() methods, and that's an actual problem for the end user, it's your job as the application developer to clean that up (including dumping the deinit() methods and using some other approach, if necessary). It is not the language designer's, library implementer's, nor compiler implementer's job to force this on you.

That can be said about so many Swift features that this statement is basically meaningless.

Yes, don't do it because that's the way Swift is today. Why not fix this problem?
Please explain to me how this is any different from:

func main()
{

    // put entire program here

}
main()

Which is just a lame workaround.

Code that runs just to free resources that will be freed by the kernel is literally a waste of time. This “argument” originates from practical experience improving the performance of general purpose operating systems deployed at worldwide scale. The design of Swift is informed by that data.

As I said above, you should not assume that this pattern will result in any deinitializers being run either. If the compiler can prove your top-level code tail-calls into another function, it can elide the cleanup at the end of that function.

1 Like

Once again, I am not arguing that Swift free all these resources associated with the object's value, only that it execute the Swift code written inside the deinit() method. If it chooses not to deallocate any other resources (such as memory), I'm fine with that.

But the whole purpose of the deinit() method is to allow the class' author to provide cleanup code when the object is destroyed. If Swift doesn't execute that code under certain conditions, why have deinit() at all?

1 Like

As a concrete example, suppose your program constructs a large graph of reference-counted objects on startup, then proceeds to do some computations, and then exits. If we required scoped cleanup of global data, we would spend time deallocating each object, from the leaves on up, all while carefully decrementing all reference counts down to zero. The paging argument comes in because this touches the reference count of every object — it’s the worst case in terms of locality basically, all for nothing. The time spent doing this can be significant compared to what the kernel has to do to just tear down the entire address space mapping in one shot (which has to happen anyway).
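A sketch of the scenario (sizes and names are illustrative):

final class Node { }   // reference counted; deinit is implicit

// A large structure built at startup and kept in a global.
let graph: [Node] = (0..<1_000_000).map { _ in Node() }

// ... computations using graph ...

// Today, exit just tears down the address space. Requiring scoped cleanup
// of globals would instead decrement a million reference counts and free
// a million objects here, one by one, right before exit.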

2 Likes

(For those who aren’t aware, compilers written in C++ follow this exact pattern and spend a nontrivial amount of time destructing their DAGs during exit unless the compiler author has gone significantly out of their way to prevent it. You can prove this to yourself with a profiler.)

Perhaps the mistake was in renaming this language feature from dealloc. Though I have seen plenty of apps make the same mistake in Objective-C, conflating object lifetime with resource lifetime.

C++ has a version of the same issue with destructors too. Returning from main (or calling std::exit()) runs destructors for objects with static storage duration, but it doesn't unwind the stack for local objects, and abort(), _Exit(), __builtin_trap(), etc. skip destructor calls entirely.

Isn't that largely addressed by @hryde's suggestion to make the runtime behavior optional and just run the deinits? For example, rather than just a graph of objects, consider a graph of objects tracking files on disk: perhaps a large, sharded file layout where writing may need to happen to any of many files, so they are all opened at launch. What's the argument against allowing the deinits to close those files rather than leaking them?

If "deinit isn't guaranteed" is hard rule in the language, perhaps we need something like @nodeinit like @noasync to ensure particular APIs, like file closure or other cleanup methods, can't even be called there, so developers know they must find another solution. Then we can talk about features which may help developers write scoped resource accessors to replace them.

2 Likes

File descriptors are closed when the process exits anyway; nothing is leaked.