Not Jon, but to be fair, RAII-like patterns were discouraged here prior to the introduction of non-copyable types.
Trying to do some cleanup work at process exit can be a good idea. That being automatically integrated with the language's concept of value destruction is a bad idea because:
- the vast majority of the work that's normally performed during value destruction is a waste of energy at process exit, and
- you're generally destroying globally-reachable things, and they often have dependencies on each other, and unlike with lazy initialization there's no trick for automatically coordinating teardown in dependency order.
One of the classic examples of the latter problem is logging: of course you want to flush your logging streams during teardown, but you also want to log during teardown of almost everything else, so it's really important to make sure flushing is one of the last things you do. I have seen a lot of systems that just drop some of their final logging messages or, worse, nondeterministically crash at exit because the logging system was cleaned up and then something else tried to use it.
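A toy sketch of that ordering hazard (all the names here are made up):

```swift
// Toy sketch of the teardown-ordering hazard: if the logger is flushed
// before other subsystems shut down, their final messages are silently lost.
final class Logger {
    private var buffer: [String] = []
    func log(_ message: String) { buffer.append(message) }
    func flush() {
        buffer.forEach { print($0) }
        buffer.removeAll()
    }
}

let logger = Logger()

func shutDownDatabase() {
    // ... close connections ...
    logger.log("database closed cleanly")
}

// Wrong order: the flush runs first, so "database closed cleanly" is dropped.
logger.flush()
shutDownDatabase()
```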
(This seems to be appropriate for this thread, rather than creating a new one. Apologies if I'm mistaken.)
I'm thinking about binding SDL (and managing unmanaged resources in general), and I noticed something that @eskimo said a few years ago:
I don't understand why having a `Window` class that holds an `SDL_Window` pointer that's constructed in `init` and destroyed in `deinit` would be problematic. If I look at the documentation on ARC, it seems like the semantics are exactly what I'm expecting: once all the references are gone (setting aside the process-exit case), `deinit` would be called.
And in this thread, people suggest using non-copyable types, which is fine, but I'd like to understand why using a class here would be a problem, because it's not obvious to me. I especially don't understand why the unmanaged resource would need a retain/release mechanism to be modelled as a class.
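For concreteness, here's a minimal sketch of the wrapper I have in mind, assuming SDL2's `SDL_CreateWindow`/`SDL_DestroyWindow` imported through a hypothetical `CSDL2` module:

```swift
import CSDL2  // assumed module name for the imported SDL2 C headers

// Minimal sketch of the pattern in question: the class owns the SDL_Window
// pointer, and ARC runs deinit once the last reference goes away
// (process exit aside).
final class Window {
    let handle: OpaquePointer  // SDL_Window *

    init?(title: String, width: Int32, height: Int32) {
        // Window position hardcoded to (0, 0) to keep the sketch short.
        guard let handle = SDL_CreateWindow(title, 0, 0, width, height, 0) else {
            return nil
        }
        self.handle = handle
    }

    deinit {
        SDL_DestroyWindow(handle)
    }
}
```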
If your type just has create/destroy (and not retain/release) operations then, as long as your Swift type owns the C type for its entire lifecycle, you can safely use either a class or a non-copyable struct. The class is a bit easier to use but there's a (small) amount of overhead needed to ensure reference counting works across threads. On the other hand, a non-copyable struct has a reference count of either 0 or 1, so the reference counting is done at compile time.
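A sketch of the two options side by side, using a made-up `thing_create`/`thing_destroy` pair as a stand-in for the C API:

```swift
// Stand-ins for a hypothetical C create/destroy pair (names are made up).
func thing_create() -> UnsafeMutableRawPointer {
    UnsafeMutableRawPointer.allocate(byteCount: 16, alignment: 16)
}
func thing_destroy(_ p: UnsafeMutableRawPointer) {
    p.deallocate()
}

// Class wrapper: easy to pass around, but keeping the reference count
// correct across threads carries a small runtime cost.
final class ThingRef {
    let raw = thing_create()
    deinit { thing_destroy(raw) }
}

// Non-copyable struct wrapper: there is exactly one owner, so ownership
// is enforced by the compiler instead of counted at run time.
struct Thing: ~Copyable {
    let raw = thing_create()
    deinit { thing_destroy(raw) }
}
```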
If you don't have retain/release operations, it won't be safe to create an instance of your wrapper type while still holding onto the C type, or to get the C type out of your wrapper without a `withUnsafePointer`-style method that guarantees the wrapper type lives to the end of the closure.
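Continuing the sketch above, such a scoped accessor might look like this (the `withRaw` name is made up):

```swift
extension ThingRef {
    // Scoped access: withExtendedLifetime keeps the wrapper (and hence the
    // C value) alive until the closure returns.
    func withRaw<R>(_ body: (UnsafeMutableRawPointer) throws -> R) rethrows -> R {
        try withExtendedLifetime(self) { try body(raw) }
    }
}

extension Thing {
    // For the non-copyable struct, borrowing self already guarantees the
    // value outlives the closure.
    borrowing func withRaw<R>(_ body: (UnsafeMutableRawPointer) throws -> R) rethrows -> R {
        try body(raw)
    }
}
```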
Do I understand correctly that doing clean-up in, say, a SIGTERM handler is explicit, and that the implicit mechanisms mainly refer to the code in `deinit`?
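For what it's worth, explicit SIGTERM handling in a Swift command-line tool typically looks something like this sketch:

```swift
import Dispatch
import Foundation

// Explicit clean-up on SIGTERM via DispatchSource, so the handler runs as
// ordinary code on a queue rather than in an async-signal context.
signal(SIGTERM, SIG_IGN)  // disable the default handler first
let sigtermSource = DispatchSource.makeSignalSource(signal: SIGTERM, queue: .main)
sigtermSource.setEventHandler {
    // Explicit, ordered teardown goes here (flush logs last, etc.).
    exit(0)
}
sigtermSource.resume()

dispatchMain()  // keep the main queue running
```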
Was it because of the introduction of non-copyable types? But I think the exit-time clean-up issue discussed in this thread applies to them too.
Take @spacecrafter3d's app as an example: a `Connection` instance may be destroyed A) when it's replaced by a new instance, and B) when the process exits. According to this thread, case B is unreliable (in this specific example, the websocket-related network resource will be released by the OS when the process terminates, so it's OK; but if it were app-specific cleanup, such as saving data changes, it would be an issue). Since the root cause is the unstable environment at process exit, I don't think using a non-copyable type changes that.
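A small sketch of the two cases, using a stand-in `Connection` type:

```swift
// Stand-in for the app's Connection type.
final class Connection {
    deinit { print("connection cleaned up") }  // e.g. save pending changes
}

var connection = Connection()
connection = Connection()  // case A: the old instance is destroyed, deinit runs

// Case B: whatever is still alive at process exit gets no deinit call;
// Swift does not guarantee deinit for objects that survive to exit, so
// app-specific clean-up (like saving data) would be skipped here.
```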