Since I started development in C++, I have always been inclined to model systems that are initialized upon creation and cleaned up automatically on destruction.
In my Swift app, for example, I have a connection to a remote WebSocket. I have a class called Connection that connects to the WebSocket in init and disconnects it in deinit. I like this because by replacing my active connection with a new one, I can be sure unused resources are released.
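A minimal sketch of that pattern, assuming the connection is backed by Foundation's URLSessionWebSocketTask (the class and URL here are illustrative, not my actual code):

```swift
import Foundation

final class Connection {
    private let task: URLSessionWebSocketTask

    init(url: URL) {
        task = URLSession.shared.webSocketTask(with: url)
        task.resume()  // connect on creation
    }

    deinit {
        // disconnect on destruction
        task.cancel(with: .goingAway, reason: nil)
    }
}

// Replacing the active connection should release the old one...
// unless something else still holds a reference to it.
var connection = Connection(url: URL(string: "wss://example.com/socket")!)
connection = Connection(url: URL(string: "wss://example.com/socket")!)
```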
However, I have repeatedly run into bugs where the connection was not released because somewhere, something held on to a reference to the Connection. Usually this is a bug in my use of capturing in closures that I can eventually figure out and fix, but in one case that I haven't yet figured out, the leak was only resolved by removing a seemingly unrelated Binding in my UI code.
So to avoid leaving connections open (which is very bad in my context), I have resorted to manually tearing down the connection. This works, but it makes me sad.
I also find that I use class where otherwise I would be inclined to use a value type just so I can use deinit.
Is there a better way I could use Swift's capabilities to get automatic resource management?
No. Most of the Swift designers who hang out here will say not to use RAII, despite there being nothing else as easy. We do it anyway. The alternative is what you suggested, a teardown and cleanup method, which has its own set of issues.
Thanks for the clarification! I guess another approach that might work in my case would be a value type representing the connection (holding a reference to the WebSocket), with a didSet on the property that stores the connection to clean it up automatically when it changes. But then I can't really encapsulate the logic within the Connection type.
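To make that concrete, here's a hedged sketch of the didSet idea (names are hypothetical). Note how the cleanup logic ends up living in the owner of the property rather than in the connection type itself, which is exactly the encapsulation loss mentioned above:

```swift
import Foundation

// Value type representing the connection; it only carries a
// reference to the underlying socket task.
struct ConnectionState {
    let task: URLSessionWebSocketTask
}

final class ConnectionHolder {
    var state: ConnectionState? {
        didSet {
            // Runs on every assignment (including setting a replacement
            // or nil), but note: not when ConnectionHolder itself dies.
            oldValue?.task.cancel(with: .goingAway, reason: nil)
        }
    }
}
```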
Coming from C++ too, and I've had similar thoughts/issues. Another thing I've run into is that if you're trying to clean up in deinit, the cleanup won't be called on program exit (AFAICT), which can be a problem for types I use in small programs.
Elaborating on that a bit: cleanups running on process exit are actually strongly negative in practice, not merely unnecessary. Things we've seen caused by doing this unnecessarily:
- Failed software update installation because a process stalled during quit
- Slow reboots due to many processes being slow to quit
- Crashes during process termination due to unwisely dlclose()ing plugins
macOS applications can and should opt into Sudden Termination to speed up their exit by skipping all cleanups (temporarily disabling it while there's inconsistent on-disk state). NOTE: this is not Automatic Termination; that is a different technology. Sudden Termination does not change the user-visible behavior of your application in any way whatsoever, it just makes quitting faster.
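The enable/disable calls are the real ProcessInfo API on macOS; they're reference-counted, so a sketch of the "temporarily disable while state is inconsistent" pattern looks like this (the save function is hypothetical):

```swift
import Foundation

let info = ProcessInfo.processInfo

func saveDocument(_ write: () throws -> Void) rethrows {
    // On-disk state may be inconsistent while writing, so forbid
    // sudden termination for the duration of the write.
    info.disableSuddenTermination()
    defer { info.enableSuddenTermination() }  // balanced even on throw
    try write()
}
```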
Indeed. I'll add to your list with a non-Swift non-Apple example: I've seen C# apps on Windows spending up to a minute between "quit" and "window disappearing", and upon closer inspection the apps are dutifully unregistering hundreds of event handlers.
You can wrap your connection in a weak / unowned reference box, like these:
```swift
public struct WeakRef<T: AnyObject> {
    public private(set) weak var instance: T?

    public init(_ instance: T) {
        self.instance = instance
    }
}

public struct UnownedRef<T: AnyObject> {
    public unowned let instance: T

    public init(_ instance: T) {
        self.instance = instance
    }
}
```
I don't understand your use case well enough, but I can tell you what kinds of problems these wrappers solved in my past projects.
We had a mobile app with a tree hierarchy like this:
UIWindow -> RootScreen -> TabBarScreen -> ... many others
RootScreen Interactor conformed to several protocols and was passed as a dependency to child screens.
This interactor was passed around as protocol-typed instances, like any URLHandler, any DeepLinkHandler, any AppReloadListener, any AddressChangeListener...
At some point, some of these protocols began to be implemented by other classes, but the RootScreen Interactor was often an easy default for shipping a feature fast.
This made it easy to create a retain cycle – a child screen would get an any DeepLinkHandler as a dependency and keep a strong reference to it, which led to a retain cycle between the RootScreen components and the child screen.
So we wrapped the dependency in a box, an UnownedRef<any DeepLinkHandler>, and all children stored this box or captured it in closures.
Since we had compile-time DI, we had strong guarantees that such an unowned-ref dependency couldn't be destroyed earlier than any child using it, so there were no crashes or problems with the unowned ref.
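Roughly, the child side of that pattern looks like this (the protocol and screen names are illustrative; UnownedRef is the wrapper defined earlier in the thread):

```swift
import Foundation

// Class-constrained so the existential can go in an AnyObject-bound box.
protocol DeepLinkHandler: AnyObject {
    func handle(_ url: URL)
}

final class ChildScreen {
    // Storing the box instead of the handler itself avoids the
    // strong reference back to the RootScreen Interactor.
    private let deepLinkHandler: UnownedRef<any DeepLinkHandler>

    init(deepLinkHandler: UnownedRef<any DeepLinkHandler>) {
        self.deepLinkHandler = deepLinkHandler
    }

    func open(_ url: URL) {
        deepLinkHandler.instance.handle(url)
    }
}
```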
At the risk of dragging this too far off-topic, this sometimes feels unavoidable. The one thing that springs to mind is applications that use the terminal (TUIs, usually via curses). Users generally would prefer that echo be re-enabled and everything be back in cooked mode on exit. (That doesn't have to rely on deinit being called; it's just a reason why exiting and getting out of there doesn't always work.)
I don't know why you would assume that. Non-copyable types are literally allowing the most convenient RAII possible and the very first example in its proposal is the classic RAII example.
"Don't try to use deinit for RAII" was the general advice of the Swift team before non-copyable types were added, as well as during the initial discussions of ownership in 2017. The core team has clearly changed its mind since then, but if one hadn't been keeping up closely, it'd be easy to miss that they had.
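For readers who missed that shift: a sketch of deterministic RAII with a non-copyable type (Swift 5.9+), in the spirit of the file-descriptor example from the proposal; the type name here is my own:

```swift
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

struct OwnedFileDescriptor: ~Copyable {
    private let fd: Int32

    init?(path: String) {
        let fd = open(path, O_RDONLY)
        // Failing before self.fd is set means deinit never runs for
        // a half-constructed value.
        guard fd >= 0 else { return nil }
        self.fd = fd
    }

    deinit {
        // Runs exactly once, when the single owning binding goes away;
        // copies that would make "exactly once" ambiguous can't exist.
        close(fd)
    }
}
```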
Terminals are also a great example of how systems that rely on process-exit-time cleanup (beyond what the kernel does) can fail: when a curses app inevitably terminates abnormally, your terminal is often left in an unusable state, unless you're lucky enough to have landed back at an invisible shell prompt and can remember the right reset command without visual feedback. For Swift, I would recommend doing mandatory exit-time (and start-time, for that matter) operations explicitly in your main() or top-level code rather than relying on any implicit mechanism. But also design your system so that it can cope with those exit-time operations not actually happening, since no language can absolutely guarantee that they will.
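A sketch of what "explicit in your top-level code" might look like; the two functions are hypothetical stand-ins for real terminal setup/teardown:

```swift
func restoreTerminal() {
    // e.g. leave curses mode, re-enable echo, return to cooked mode
}

func runApp() throws {
    // enter raw mode, run the event loop, etc.
}

// Top-level code: defer runs on normal return or a thrown error,
// but NOT on abort, a fatal signal, or exit() -- so the system must
// also tolerate restoreTerminal() never running at all.
do {
    defer { restoreTerminal() }
    try runApp()
} catch {
    print("error: \(error)")
}
```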