Basic Swift ownership-y questions

I see. The answer is Swift needs non-copyable types, pitched as move-only contexts.

@dabrahams @gribozavr: are you working on move-only types in Swift?

I wonder what's so hard about deinit? (asking because I'm not familiar)

Swift doesn't have precise destruction semantics yet, right? That sounds like it'd make optimizer correctness quite hard - since there is no precise specification of semantics, "correct" isn't defined.

Edit: sounds like potential side effects in deinit are the key issue preventing optimization.

Trailing closure APIs (the statically typed analog of Python context managers) are pretty great.

// Statically typed languages: trailing closure APIs.
let lines = file.withLines { lines in lines }
# Dynamically typed languages: context managers.
with open(filename, 'r') as file:
  lines = file.readlines()

Dan, I think that you are confusing various levels of semantics.

| dan-zheng
November 29 |


I have some vague idea about the answers to these questions. Which actually means I have no clue about the answers.

  1. Why exactly can't structs and enums have deinit yet?

This would require something like a non-copyable type. That will come with time and isn’t an easy one-off effort.

    • Do we need precise destruction semantics (e.g. RAII) for move-only struct and enum types?

No, we don’t want this, since it hamstrings the optimizer.

  2. Why exactly can't structs and enums have copy constructors yet?
    • Copy constructors seem useful (why?).

We don’t want to have custom copying in Swift. This is a feature, not a bug.

    • Swift's semantic model (and implementation in SIL) makes liberal use of copying (copy_value in SIL) for ease-of-understanding and correctness. Copy constructors can fit into that model, right?

This is a detail of SIL and a copy_value at the SIL level doesn’t imply anything about Swift level copy constructors.


Dan's not wrong that copy_value at the SIL level reflects a semantic copy which, if not optimized, would turn into a use of a source-level "copy constructor". They're not unrelated.

But as you say, the problem with having source-level copy constructors is that (1) they could have arbitrary side effects but (2) to maintain performance we would need to come up with a language model that still allows them to be elided or reordered, which we've struggled to do for the closely related problem of destroy_value.


Yep; thought you knew. I'm working on C++ interop and filling out the ownership story is a big part of that.

Depends what level of interop you want to achieve. My goals/priorities are, roughly:

  • API Accessibility: all C++ APIs should be usable from Swift, and all Swift APIs usable from C++, without manual annotation or wrapping.
  • The following take precedence over safety and ergonomics:
    • API Accessibility.
    • Avoiding any performance penalties at the API boundary.
  • The ability to easily make an imported C++ API safe and ergonomic in Swift with manual intervention (annotation, wrapping, etc.) is a goal.
  • Using a C++ API from Swift should not introduce any new, Swift-specific “gotchas,” and vice-versa.

The goal of full API accessibility means you need to be able to subclass a C++ struct in Swift and vice-versa. At some level that means being able to define and use copy constructors. Of course I hope to cleverly hide the complexity somehow, but it has to be there, if hidden.


Well, it means being able to use copy constructors. Why do we need to be able to define them in Swift for this? If we look at an example:

// C++
struct X { X(const X &); };
// Swift
struct Y : X { var i : Int }
// IR
swift.Y = type { X, Int }

The subclass essentially turns into a member and we can just decompose and emit the respective copy logic for each class, in this case, we'll call out to X's copy constructor and simply copy i as any other POD type.

The user of X doesn't need to do anything "special"; we have the ability to handle this entirely in the compiler (I think). Here's a proof of concept if you're interested.

The compiler needs to be able to arbitrarily remove and insert copy_value as part of standard SSA optimization of SILValues. Yes, it's a semantic copy, but we've defined copyable types such that those copies don't affect program semantics (to the same extent that isUniquelyReferenced isn't allowed to affect program semantics).

The separate issue, already pointed out, is that allowing every copy_value to have arbitrary side effects would be disastrous for optimization in general.

Right, if we did support user-defined copy operations, it would have to be understood that the compiler would be permitted to optimize types with custom copy operations just like it would optimize any other value-copy. I think the basic problem is that it's hard to define precisely what that would mean — it must mean that certain kinds of side effects would not be allowed within them, but which effects, exactly?

We do need to think about this. C++ interoperation is a long-term goal for the language, and that includes importing non-trivial types. Normal C++ copy constructors and destructors generally have side effects that are "outside the model" that the Swift optimizer works within — they're not going to e.g. write to Swift-accessible memory. Unfortunately, that isn't guaranteed: you can certainly write a C++ type whose copy-constructor calls a function that you stored into the source object. Presumably we need to make that illegal in some way to preserve our optimization goals. So we're forced to think about this because of C++ even if we never have native custom copying.

I wonder if we can reasonably just enumerate a list of side-effects that are legal in custom copy-constructors and destructors. The most important thing to allow is allocation and deallocation, as well as accesses to memory that's only accessible through the object being constructed/destroyed. It's not unreasonable to say that any other memory that it accesses has to be marked somehow to make it "volatile". Would we also want copy operations to guarantee not to destroy any values?

Ideally we could even force class deinits to obey these same restrictions. That bird may have flown, though.


If I want to create a type in Swift to be used by C++ code, it seems to me I may need to be able to control how C++ copies it.

I think the idea floated at the bottom of this post may solve this problem for copy constructors. What are the specific issues with deinit?

Do you have a particular use case in mind where a Swift type would need a custom copy constructor so that C++ could copy the type properly?

My thinking was that we might not need this because no one has ever been able to have Swift types with custom copy constructors, and, as I think Michael said, this is a feature. So, unlike C++, we wouldn't be "taking away" anything from people.

The optimizer folks can speak to that better than I can, but I believe the issue is that, since class deinit can have arbitrary side-effects, it is very tricky to do certain kinds of optimizations that reorder things around releases without analyzing the possible side-effects of a release.

A Swift struct or class as a data member of a C++ class/struct is part of the goals of C++ interoperability (similar to Objective-C++ interoperability). Even the default, compiler-generated C++ copy constructor ends up calling the copy constructors of its members (even if those are themselves compiler-generated). The Swift class/struct would need to provide something for the containing class to call.

There's also the reverse case of a Swift class/struct containing a C++ class/struct as a member: you would need to account for any custom copying the C++ class/struct needs when copying the Swift class/struct.

Well, that doesn't sound so bad for C++ interop, at least:

  • It's a problem Swift classes already have
  • It's a problem C++ has to the same extent we have it

Why couldn't we just emit a call to the necessary copy constructors the same way C++ would do it for a C++ type with a nested C++ type with a user-provided copy constructor?

I don't see any problem with Swift providing "something" for C++ to call when copying a Swift type. We'll probably need to do this anyway. But I also don't see why the user needs to be worried about this.

I lay out how I suggest we do this above.

I've been counting on eventually getting a @default_deinit attribute for classes and protocols so programmers who care about performance of classes can opt-out of writing class deinits in exchange for solving the arbitrary destroy_value side effects problem. I never got around to writing an evolution proposal for it, but the thinking was that the only reason to have a custom deinit was to perform some operation with global side effects. So, the only two useful modes are custom vs. default deinits. Default deinits by definition have no observable side effects, so it works out great.

Now it sounds like we might also want to define a "local deinit" to allow custom deinits as long as they only read from reachable memory. But deallocation from an unsafe pointer is a global side effect, introducing dependencies on other arbitrary pointers. Can we even assume that other objects aren't allocated by the custom allocator? Knowing the deinit won't access class properties that aren't reachable from the destroyed object is better than nothing, but won't be easy to get right.

With structs, I hope that programmers will need to opt-in to custom deinits via something like a move-only constraint. Although that won't be possible if we allow generic substitution of C++ types without any constraints.

Yeah, I think that "you can only touch memory that's only reachable from this object / value" has to be a user promise, not an inferred property. That's precisely because, as you point out, the use cases for custom deinits are generally exactly the things that would totally stymie any reasonable static analysis.

I do think we need to be approaching this as "what restrictions can we impose to enable the optimizations we want regardless of the type being copied or destroyed" rather than "how can we figure out that more types have the special properties that enable the optimizations we want". The first really doesn't seem intractable to me; copy/destroy only touching memory associated with the specific value being constructed/destroyed is the dominant case. It's very rare that copy/destroy does something like maintain some sort of global registration, and it's not at all unreasonable to ask that the memory backing such registries be marked explicitly as "volatile" (or whatever). (Practically speaking, it already has to be concurrency-safe in most cases.) If only "volatile" memory outside the current object/value was touchable by deinits, then most of your optimization problems around e.g. moving releases in/out of accesses would only apply to volatile accesses, which would be statically recognizable.

Hey Dan, sorry for the delay, here's MHO:

We had a series of discussions in Yellowstone/DA6 that solidified in my mind that Swift 1.0 didn't need to have rule of 3/5 and that we needed to bake out the rest of the model before we worried about this, so we pushed it off years ago.

However, Swift's internal implementation model intentionally embraces the rule of 5, and imported C++ types should someday be able to fully take advantage of this. This is also important for core Swift features like definitive initialization which needs to reason about the difference between init vs assign, and optimizations like RVO that turn assignments into moves.

The question to me has always been "how do we realize this for Swift programmers" in a way that preserves the principles that make Swift truly great (including progressive disclosure of complexity, a preference for value-semantic types, and trusting the library developer to know better than the language designer). As we've moved forward over the years, Swift has developed an early but growing model for ownership, as well as a growing community that cares about low-level performance.

I think that now (or perhaps 6-12 months from now when the concurrency work is settling) would be a good time to reopen these questions. It is not a core premise of Swift that ARC overhead is the only thing that happens during a "copy constructor". Nor is it a core premise of Swift that people who care about the rule of 5 should be forced into terrible and inefficient workarounds involving classes to materialize those designs.

My expectation for Swift over time is that we can trust the library developer to know what is best for their clients, and to be able to intelligently weigh the tradeoffs to achieve their goals. As language designers, we should make sure that Swift programmers don't accidentally stumble into an "experts only" tarpit without understanding the issues, but we don't need to decide that "experts are bad" and all tarpits should be definitionally eliminated.



I agree with John here. The relaxed lifetime behavior of Swift is the right default, as is an opt-in attribute that forces specific weird types to have precise lifetimes. ObjC ARC is good precedent for this as well.



I haven't really thought this out, but what if we just provided users a way to define what should happen onWrite? Instead of user-defined behavior every time any copy is made, this would just allow users to define what should happen for a "real" copy. It would allow custom/efficient copying of things like n-dimensional arrays, but it would also give the optimizer freedom to make as many O(1) copies as it wanted, and wouldn't guarantee when a "copy" would take place.