Unavailable `deinit` in `~Copyable` types

Hello, Swift community!

I would like to solicit discussion about the possibility of allowing deinit inside ~Copyable types to be marked as @available(*, unavailable) for the purpose of enforcing the use of a consuming method.

To provide some context, here's the primary use case I had that drove me to wish for such a feature:

struct File: ~Copyable {

    init(_ path: FilePath, _ mode: FileDescriptor.AccessMode) throws {
        fileDescriptor = try .open(path, mode)
    }

    deinit {
        do {
            try fileDescriptor.close()
        } catch {
            /*
             * Uh-oh!
             * A pending write operation had failed.
             * There's nothing we can do here, but accept the silent data loss.
             */
            assertionFailure("error while closing file: \(error)")
        }
    }

    consuming func close() throws {
        // Copy out the descriptor, then suppress `deinit` before closing.
        let fileDescriptor = self.fileDescriptor
        discard self
        try fileDescriptor.close()
    }

    private let fileDescriptor: FileDescriptor

    /* Some `borrowing` methods for reading and writing... */
}

func doSomething() throws {
    let file = try File("/dev/stdout", .writeOnly)
    try file.write(/*...*/)
    /* We forgot to manually close the file, so we're at risk of silent data loss. */
}

With the ability to mark deinit as @available(*, unavailable), this use case becomes a lot safer:

struct File: ~Copyable {

    init(_ path: FilePath, _ mode: FileDescriptor.AccessMode) throws {
        fileDescriptor = try .open(path, mode)
    }

    @available(*, unavailable, message: "use close() explicitly")
    deinit {
        assertionFailure("not reachable")
    }

    consuming func becomeNoLongerOpen() throws {
        try fileDescriptor.close()
        // ERROR: `deinit` is unavailable: use close() explicitly
        // FIX-IT: consider using `discard self`
    }

    consuming func close() throws {
        // okay: `discard self` prevents an implicit call to `deinit`
        let fileDescriptor = self.fileDescriptor
        discard self
        try fileDescriptor.close()
    }

    private let fileDescriptor: FileDescriptor

    /* Some `borrowing` methods for reading and writing... */
}

func doSomethingBad() throws {
    let file = try File("/dev/stdout", .writeOnly) // ERROR: `deinit` is unavailable: use close() explicitly
    try file.write(/*...*/)
    // NOTE: `deinit` implicitly called from here
}

func doSomethingGood() throws {
    let file = try File("/dev/stdout", .writeOnly)
    try file.write(/*...*/)
    try file.close()
    // okay: consuming method call prevents an implicit call to `deinit`
}

With this feature, a successfully initialized instance of File is guaranteed to always receive an explicit call to close(), because the compiler will not accept code that lets the instance be deinitialized implicitly.

Fun fact: discard self is now the only way to end the lifetime of the instance. If a consuming method does not discard self, the instance will have to be moved out to another scope, but that other scope will also not have any way of getting rid of the instance, so eventually the instance will have to be moved back into a method of the type itself for the purpose of triggering discard self.
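For example (a purely hypothetical extension of the File type above; closeLoudly is a made-up name), a consuming method that doesn't discard can only forward ownership to one that eventually does:

extension File {

    /* Doesn't `discard self` itself, so it has to hand the instance off to
       another consuming method, which ultimately performs the discard. */
    consuming func closeLoudly() throws {
        print("closing file...")
        try close() // ownership is forwarded; `close()` triggers `discard self`
    }
}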

Another fun fact: If a type has no consuming methods and also has an unavailable deinit, then all instances of that type will be immortal (e.g. globals).
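A minimal sketch of that situation, assuming the proposed annotation:

struct Immortal: ~Copyable {

    @available(*, unavailable, message: "instances of Immortal can never be destroyed")
    deinit {}
}

/* With no consuming methods, the only way to satisfy the compiler is to never
   let an instance's lifetime end, e.g. by storing it in a global. */
let forever = Immortal()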

Has anyone else wished this was a thing? Are there any major issues about this that I'm forgetting?

5 Likes

I've been wanting similar logic around UnsafeThrowingContinuation to guarantee that it cannot be leaked.

1 Like

Yes! Aside from the danger of eating an error during cleanup, there are indeed use cases where the cleanup depends on some parameters, like the continuation object.

What if the developer forgets to call the close method :thinking:

That's demonstrated by the doSomethingBad function:
The compiler will emit an error at the point of acquiring the instance, stating that the deinit is unavailable, along with a note at the end of the scope pointing out where the deinit would be implicitly called.

You won't be able to compile the scope containing the instance without either triggering discard self (if inside a non-static method on that type) or consuming it somehow (either using the consume keyword or by passing it into a consuming function).
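To illustrate (a hedged sketch reusing the File type from above; handOff is a made-up function):

func handOff(_ file: consuming File) throws {
    try file.close() // the callee is now responsible for triggering `discard self`
}

func doSomethingForwarding() throws {
    let file = try File("/dev/stdout", .writeOnly)
    try file.write(/*...*/)
    try handOff(consume file) // ownership leaves this scope, so no implicit `deinit` here
}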

And as the first "fun fact" explains, ultimately the only way to get rid of the instance would be to trigger discard self. The compiler wouldn't accept any code that doesn't demonstrably do that.

1 Like

It’s theoretically possible to do something like this, but it might be more frustrating than you’re imagining. Even if deinit always traps, having to statically prove that it’s never called would be incompatible with planned directions like being able to store a non-copyable value in an Optional and then dynamically move it out.

6 Likes

I guess in order for such a type to be safely abstracted away behind a generic context, the fact that it may not be automatically deinitialized would have to be represented in the type system in a generics-compatible way, rather than by simply marking the deinit as unavailable.

A highly bikesheddable syntax that comes to mind would be something like this:

/// A type whose instances can be automatically destroyed.
/// Conformance to this protocol is implicit and can be suppressed by `~Destructible`.
protocol Destructible { /* compiler magic */ }

/// A type whose instances can be automatically copied.
/// Conformance to this protocol is implicit and can be suppressed by `~Copyable`.
protocol Copyable: Destructible { /* compiler magic */ }

Just like every type conforms to the Copyable protocol by default, so will every type conform to the Destructible protocol by default.

A type may be:

  • Copyable & Destructible // A simple type. This was the only option before ownership.
  • ~Copyable & Destructible // A non-copyable type. This is currently the only option for a non-copyable type.
  • ~Copyable & ~Destructible // A non-copyable and non-destructible type. This is the option that is being proposed.

A type may NOT be:

  • Copyable & ~Destructible // Without instance uniqueness or customizable copying logic like in C++, the concept of non-destructibility is meaningless, which is conveyed by the protocol hierarchy.

If a type is ~Destructible, then the compiler will disallow even defining a deinit, so there is no need for an availability annotation or an always-trapping body.

This way, we could have:

enum Optional<Wrapped>: ~Destructible, ~Copyable { /* ... */ }
extension Optional: Destructible where Wrapped: Destructible { /* ... */ }
extension Optional: Copyable where Wrapped: Copyable { /* ... */ }

Here, Optional inherits the non-destructibility of its wrapped value.

@John_McCall, do you think this approach would solve the ergonomics and generics composability problem?

It would solve the composability with generics. It doesn’t solve the issue with Optional I was talking about, which is centered on dynamic vs static information (I was really assuming a solution to the problem you’re talking about, at least to the point of making Optional<NonDestructible> a destructible type). Consider e.g. putting a value of non-destructible type into a class or an actor; being able to dynamically move the value out covers a lot of situations where otherwise we would have to add a language feature for non-destructible types to be usable at all.

3 Likes

Couldn't we require in those cases that the class or actor define an explicit deinit that consumes the non-destructible value in case the optional has a value inside?

Example:

class A {
    var x: SomeNonDestructible?

    deinit {
        if let x = consume x {
            // do something with `x`, like
            // calling a consuming method or trapping
        }
    }
}
1 Like

Yes, that’s an example of further language work that would have to be done in order to make non-destructible types work.

2 Likes

Another wrinkle here is that some types might want to make their consuming methods async. I always find the exploration that Rust did around linear types interesting.

I personally would love if we could find some solution to the problem so we can express resources with required async teardown.
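Something along these lines, perhaps (a purely hypothetical sketch; Connection and shutdown() are made-up names, and it assumes the unavailable-deinit idea from this thread):

struct Connection: ~Copyable {

    @available(*, unavailable, message: "call shutdown() explicitly")
    deinit {}

    consuming func shutdown() async throws {
        discard self
        /* flush buffers, send a goodbye frame, and so on, all of it awaitable */
    }
}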

4 Likes

Good point!

This is now the third reason why one might want to reach for a non-destructible type. The fourth one that I just remembered would be private cleanup methods:

  1. The cleanup method is parametrized.
  2. The cleanup method can throw an error.
  3. The cleanup method is asynchronous.
  4. The cleanup method is private.

I'd like to hear some specific use cases for this in order to get a better idea of what exactly we'd be getting if this feature was available, so that the cost-to-value ratio can be estimated more accurately.

Off the top of my head, here are a few:

  • Continuation: the cleanup method may be parametrized (the value to return from the continuation or the error to throw out of the continuation).
  • Database transaction: the cleanup method may be asynchronous (applying the transaction).
  • System resource: the cleanup method may throw an error (error with pending operation).
  • Allocation pool: the cleanup method may be private (the instance is expected to be passed back to the allocator where it came from).
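For example, the last case might look roughly like this (a hedged sketch; Pool and PoolSlot are made-up names, and everything is assumed to live in the same file so the fileprivate members are reachable):

struct PoolSlot: ~Copyable {

    fileprivate let index: Int

    @available(*, unavailable, message: "return the slot to its Pool")
    deinit {}

    /* Only the pool can end a slot's lifetime. */
    fileprivate consuming func retire() -> Int {
        let index = self.index
        discard self
        return index
    }
}

final class Pool {

    private var freeIndices = Array(0..<16)

    func acquire() -> PoolSlot {
        PoolSlot(index: freeIndices.removeLast()) // a real pool would handle exhaustion
    }

    func release(_ slot: consuming PoolSlot) {
        freeIndices.append(slot.retire())
    }
}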

@John_McCall, if I understood you correctly, your point is that having non-destructible types would be useful and doable, but it's not clear if the use cases justify the implementation complexity. Do you think the potential for deterministic and flexible cleanup (like these use cases) would be worth the implementation complexity of elaborate lifetime checking?

Maybe I'm missing something, but I don't see how Optional<NonDestructible> could ever be destructible itself.
My understanding is that the only way to get rid of such an optional is to consume-unwrap it and then deal with the now-unwrapped non-destructible payload separately. With some compiler support and syntax sugar, it could be as simple as using optional chaining on the consuming method call.

1 Like

Sorry, I meant non-destructible.

Yes, my point is that this feature necessarily throws away all the dynamic solutions to the problems you run into with non-copyable values. It relies completely on the programmer being able to structure things in a way that works with local data flow analysis.

2 Likes

Yes! The whole premise of non-destructible types (as described in this thread) is solely to enhance local data flow analysis and provide compile-time guarantees about it. Aside from graceful integration with dynamic typing (generics, existentials) via the Destructible protocol (just like Copyable, it is assumed by default, can be explicitly opted out of, and its inverse doesn't fit into Any), the entire feature is completely static.

I see its effect as somewhat similar to implementing a non-Void-returning function as a single switch over a case-less enum. Technically the function promises to return something, practically it never actually returns any value, and conceptually the compiler is okay with that because it can statically prove that this seemingly blatant violation of the function's interface is impossible (and it injects an "unreachable" marker after the switch).
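In code, the analogy is roughly this (using the standard library's Never, which is a case-less enum):

func absurd(_ never: Never) -> Int {
    /* Zero cases to handle, so the switch is trivially exhaustive and no
       `return` is needed, even though the signature promises an Int. */
    switch never {}
}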

Similarly, a non-destructible type may look just like any other type with normal implicit deinitialization, but by statically proving (or, more accurately, statically enforcing) a consuming operation, the compiler can get away with leaving the program devoid of any actual deinitialization code, on top of the primary benefit of enforcing the use of the intended cleanup routines.

As long as the type system prohibits accidentally forgetting the non-destructibility of the value, it should be perfectly safe to move between static typing and dynamic typing.

With the introduction of non-destructible types, ~Destructible becomes the new root type, superseding the current root type (which is ~Copyable).

func doSomething<Value: ~Destructible>(with value: consuming Value) {
    // Impossible to implement without dynamic casting: there is no known way
    // to dispose of `value` (nothing here can make it `discard self`), even
    // though it may actually be a destructible type under the hood.
}
1 Like

I understand that the feature would be implemented with local flow analysis. I am concerned that it would not be as useful in practice as you think, because APIs that use it would become very difficult to use, and so it would be a source of complexity with relatively little benefit. I have to note that the Rust community, hardly slouches in embracing developer pain for a perceived safety benefit, pursued and ultimately abandoned a feature like this.

4 Likes

I'd love to read about Rust's exploration of this topic to see why they decided to abandon it.

For those use cases where an explicit, mandatory cleanup is unavoidable (for the reasons described earlier), we have to compare how difficult such a type is to use with this feature versus without it. In all other cases, traditional automatic cleanup remains a perfectly viable option.

In light of that, maybe I'm missing something (please correct me if I'm wrong), but the status quo means: carefully studying the API documentation (if it's even available) to ensure correct usage; spending time and effort experimenting with the API to determine its actual behavior (which may not be documented, even when documentation exists); paying the price of runtime checks; and constantly being at risk of inadvertently hitting a trap due to broken invariants. That seems to me like a much more difficult way to use an API than a straightforward compile-time diagnostic that addresses all of these problems, rejects malformed usage outright, and guides the programmer toward the solution.

Having said that, I do acknowledge your concern that the considerable work that would have to go into implementing this feature properly might not yield enough benefit to justify the complexity.

I can't help but remember this quote by Scott Meyers:

Interfaces should be easy to use correctly and hard to use incorrectly.

1 Like

FWIW, I think that rather than an @available(*, unavailable) deinit, you could spell it private deinit: that is, the type is destructible, it's just that "you" can't destroy it (unless you have the right privileges, which your consuming methods would).

This is how C++ handles this kind of use case, but it is a sharp sword, as you noticed about Optional<NonDestructibleThing>.
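For comparison, the File example from earlier might read something like this under that spelling (private deinit is not real Swift today; this is just a sketch of the suggested surface syntax):

struct File: ~Copyable {

    private let fileDescriptor: FileDescriptor

    /* Hypothetical: only File's own members may let a value fall into `deinit`. */
    private deinit {
        try? fileDescriptor.close()
    }

    consuming func close() throws {
        let fileDescriptor = self.fileDescriptor
        discard self
        try fileDescriptor.close()
    }
}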

2 Likes