Feel free to assume it's a lock specific to whatever the program is doing, like the old SVN file locks. It doesn't really matter.
If the resource outlives the execution of the process, it should really be cleaned up by some mechanism external to the process itself; otherwise you have a denial-of-service bug if your process crashes or hangs for some reason. Your program could still have an "orderly shutdown" path for the happy case, but destructing all heap objects in reverse order is neither necessary nor sufficient to implement such a thing.
Again, I wouldn't rely on this always being the case (and it doesn't help if someone ends the process with exit or any other Never-returning function).
In my view, if you want to do happy-path, best-effort cleanup at process end, atexit(3) is probably the least bad way to do it, even from Swift, with the caveats that everyone else has explained well: you may not get any particular ordering relative to other best-effort process-end cleanup mechanisms; if you have multiple threads, it runs at the end of the main thread's execution, so it shouldn't clean up anything background threads rely on; and ultimately, you cannot rely on having control over when your process ends in a UNIX-y or Windows-y environment.
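To make that concrete, here's a minimal sketch of registering such a handler from Swift (the lock-file path is purely hypothetical):

import Foundation

// Best-effort cleanup on orderly exit. The handler runs at the end of the
// main thread's execution; it never runs on crashes or calls to _exit.
// The closure must not capture anything, since it is bridged to a C
// function pointer.
atexit {
    try? FileManager.default.removeItem(atPath: "/tmp/myapp.lock")
}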
Another practical workaround could be:
var ho: HasDeinit! = HasDeinit()
print(ho.i)
ho = nil
Again, no. The optimizer can remove that dead store.
Sure, it could. It doesn't do so today, though (tested on Godbolt with -O and in Xcode with -O), so in practical terms it's a short-term, better-than-nothing workaround.
Interesting problem. I was also thinking about peripherals, including some exotic and maybe not-so-well-designed ones, that would require you to reset them on exit.
But the problem with peripherals is almost always trickier than it seems: if a clean exit is critical (say, you have an iOS app that controls some nuclear launch system), then you'd need to install signal handlers anyway, regardless of the language you use.
I think a good assumption in such cases is that you are lucky if your app exits normally; you should always assume abnormal termination and do everything to prevent your nuclear launch system from launching accidentally.
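For what it's worth, a minimal sketch of that signal-handler route (Darwin/Linux; the cleanup step is a placeholder, and SIGKILL can never be caught, so even this remains best-effort):

import Foundation

// Put the system into a safe state on SIGTERM, then exit. Only
// async-signal-safe work is allowed here, and the closure must not
// capture anything (it is bridged to a C function pointer).
signal(SIGTERM) { _ in
    // e.g. write a "disarm" command to a pre-opened file descriptor
    _exit(1)
}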
For everything else, as others already pointed out (system file handles, sockets, and all their friends), the cleanup is guaranteed and automatic; no need to worry about it.
See, this is exactly why the iTunes EULA always forbade you from using it in the development of nuclear weaponry: there wasn't a monitoring process to keep an eye on crashes.
In this scenario you might have a separate task that is responsible for launching the sub-process, and also then resetting the hardware resource when the sub-process exits, successfully or not. For bonus points, if the sub-process exposes some kind of socket interface, your manager task could also periodically try to connect to the socket and send some kind of health-check request, to make sure the sub-process hasn't wedged itself. If this times out or fails, you kill the sub-process, reset everything, and start again. Of course, now the problem is that the manager itself might get wedged, so you add a hardware watchdog which requires a value to be written to a register periodically; if the timeout expires, it resets the entire board, etc.
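A minimal sketch of that manager loop, with hypothetical resetHardware() and healthCheck() helpers standing in for the real reset and socket ping:

import Foundation

func resetHardware() { /* reset the peripheral here */ }
func healthCheck() -> Bool { true /* e.g. ping the worker's socket */ }

func supervise(workerPath: String) {
    while true {
        resetHardware()                        // reset before every (re)launch
        let worker = Process()
        worker.executableURL = URL(fileURLWithPath: workerPath)
        guard (try? worker.run()) != nil else {
            Thread.sleep(forTimeInterval: 1)   // launch failed; retry shortly
            continue
        }
        while worker.isRunning {
            Thread.sleep(forTimeInterval: 5)
            if !healthCheck() {                // wedged? kill it and start over
                worker.terminate()
                break
            }
        }
        worker.waitUntilExit()                 // reap, then loop: reset + relaunch
    }
}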
Why have the manager, then? The app itself could do that (and the design would be simpler), no?
Sure, but the manager might manage multiple services, each of which has its own independent probability of crashing.
Even with one worker process, a process that does nothing else besides managing the worker should generally have much less surface area to crash or go astray than the worker does.
That's understandable. OTOH, here we have a choice between:

1. a more complicated two-stage mechanism that
   - either kills/relaunches the (complex) app when it hangs,
   - or resets the board when the manager hangs; vs.
2. a simpler one-stage mechanism that
   - resets the board when the (complex) app hangs.
(2) seems less error prone.
Though this topic has wandered way off course, I could not help but comment on this subthread, having worked on nuclear reactor consoles for the past 20 years.
Having a separate monitor task is the only reasonable way to approach this issue. The problem with the "simpler one-stage mechanism that resets the board when the (complex) app hangs" approach is that it's rare in modern systems for that "complex app" to have a single thread. Therefore, some thread has to take responsibility for watching over the other threads. Given that this watchdog thread is one of the most important threads in the application from a mission-critical/safety-critical point of view, you want that thread to be as simple as possible (so that it can be easily validated).
Of course, the next question is "how do you determine that a thread has hung?" Some threads, for example, suspend waiting for input from some external device and there is no deterministic way to predict how often that input will arrive. Is the thread hung? Or is it just waiting for input that rarely arrives (such as someone pressing a reactor SCRAM button)?
Beyond reducing coupling with other code, there is another good reason for making the watchdog thread separate from the "complex app": it's far easier to extend such a component when you add additional threads to the system. And if you deploy a hardware watchdog timer, only having to tickle the watchdog in one spot (rather than all over the code) makes it much easier to show that it is correct (and to debug when it is not).
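As a sketch of the "tickle in one spot" idea, worker threads could check in with a single watchdog object, and only the watchdog thread would touch the (hypothetical) hardware timer register:

import Foundation

// Worker threads call checkIn(_:); only the watchdog thread decides
// whether to tickle the hardware timer, based on allAlive(within:).
final class Watchdog {
    private let lock = NSLock()
    private var lastCheckIn: [String: Date] = [:]

    func checkIn(_ thread: String) {
        lock.lock()
        lastCheckIn[thread] = Date()
        lock.unlock()
    }

    // True if every registered thread has checked in recently.
    func allAlive(within interval: TimeInterval) -> Bool {
        lock.lock()
        defer { lock.unlock() }
        return lastCheckIn.values.allSatisfy {
            Date().timeIntervalSince($0) < interval
        }
    }
}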
I have finally come up with an acceptable solution to this problem.
When writing a console application, wrap the entire outer-level code with a single do statement:
do {
    // The class is declared and instantiated inside the do block, so its
    // deinit runs when execution leaves the block's scope.
    class WithDeinit {
        init() {
            print("Creating class")
        }
        deinit {
            print("deinitializing class")
        }
    }
    let classObject = WithDeinit()
    _ = classObject   // silences the "never used" warning; keeps it alive to here
} // end do
This forces deinit to be called when execution leaves the do { ... } block's scope.
You can even use a break statement as an "early exit" from the program (though I'd recommend using a labeled do { ... } and break when doing that; see the sketch after the output below).
BTW, the output is
Creating class
deinitializing class
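For the labeled early-exit variant mentioned above, a minimal sketch (reusing the WithDeinit class from the previous example; the shouldStop condition is hypothetical):

main: do {
    let classObject = WithDeinit()
    _ = classObject
    let shouldStop = true       // hypothetical early-exit condition
    if shouldStop {
        break main              // leaves the scope early; deinit still runs
    }
    print("normal path")
} // end main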
Again, if the compiler can prove this do statement does nothing, it can skip calling your deinit.
The solution here is to write your own closure-taking method that explicitly calls your teardown code after invoking the closure.
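In other words, something like this hypothetical withTeardown helper, which runs the teardown even if the body throws:

// Runs `body`, then always runs `teardown`, even if `body` throws.
func withTeardown<T>(_ teardown: () -> Void,
                     _ body: () throws -> T) rethrows -> T {
    defer { teardown() }
    return try body()
}

// Usage:
withTeardown({ print("cleaning up") }) {
    print("doing work")
}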
???
If the main (top-level/global) code in a console application is surrounded by a do statement, and the Swift compiler does away with it, one of two things is true:
- The program does nothing, in which case who cares if Swift optimizes the code away?
- Swift is severely broken.
You will note that I said:
wrap the entire outer-level code with a single do statement
That would include all the object declarations whose deinit methods I'm interested in Swift automatically calling.
There are, of course, some situations where even this isn't a perfect solution. For example, if you import some module that creates global objects on its own, I suppose you'd have to live with those deinitializers not being called. However, my concern is that the deinitializers I write (containing code that I would like to see automatically executed on a normal shutdown) run as expected. As long as all those object declarations are inside the do block, along with the code that uses those objects, I fail to see how Swift could optimize them away.
By "does nothing," I meant if the do statement has no effect other than introducing a lexical scope that is coterminous with the containing scope. The optimizer can see that control transfers directly and unconditionally from the end of the do statement to the end of the enclosing scope, and from there to the end of the program. It can then effectively merge the body of the do statement into the surrounding scope, observe the tail call to _exit, and skip all the deinit calls from the merged outer scope.
If you're going to make that argument, then putting all the code in a function and calling the function would produce the same result. The compiler could expand the function inline, and now the scope of that function is the same as the scope of the global code, and (according to you) Swift could dispense with all the deinitialization that takes place when the function returns.
You can make an argument all day long about what happens in global scope. But if Swift's optimizer can go out of its way to remove the scope that invokes deinitialization, then Swift is broken; that's just a bug in the optimizer.
If the Swift development team is going to continue claiming that the use of deinit cannot be guaranteed, then perhaps they should remove it from the language design.
Yes, this is the argument I am making. It is a missed optimization that Swift doesn't do this today.
If there is a bug, it is that Swift isn't doing this today. But the thing about optimizations is that it is always valid not to do them.
I would definitely like to see the rules about when and whether deinit is invoked more explicitly defined. It's always been an under-specified area of the language. The rule today is that the lifetime of an object ends when the last reference to it ends, but it's unclear whether that happens at the end of the scope enclosing the binding holding that reference, or after the last use of that binding. Either way, if you want to ensure a reference lives until the end of a scope, the current rule is that you must use withExtendedLifetime(binding) { }.
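A minimal sketch of that rule in action (Resource is a hypothetical class):

class Resource {
    deinit { print("released") }
}

func useResource() {
    let r = Resource()
    withExtendedLifetime(r) {
        // r is guaranteed to stay alive for this whole block, even if the
        // optimizer would otherwise end its lifetime after its last use.
        print("working with r")
    }
    // r may be released at any point after withExtendedLifetime returns.
}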
At one point the compiler team tried to more aggressively shrink reference lifetimes based on flow analysis, which caused deinit to execute earlier, which caused real regressions in applications. The design of ~Escapable types is closely tied to reference lifetimes, and I fear a similar situation arising in the future, especially if people start trying to use ~Escapable to implement RAII in Swift.