Consider a Swift struct or class used as a data member of a C++ class/struct, which is part of the goals of C++ interoperability (similar to Objective-C++/C++ interoperability). Even the default C++ copy constructor generated by the compiler would end up calling the members' copy constructors (even if those are themselves compiler-generated defaults). The Swift class/struct would need to provide something for the containing class to call.
There is also the reverse case of a Swift class/struct containing a C++ class/struct as a member. You would need to account for any custom copying the C++ class/struct needs when copying the Swift class/struct.
Why couldn't we just emit a call to the necessary copy constructors, the same way C++ does for a type whose member has a user-provided copy constructor?
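For concreteness, here's a minimal C++ sketch (Inner/Outer are just illustrative names, not anything from the interop work) of what the compiler already does when a member type has a user-provided copy constructor; a Swift member type would presumably need to expose an equivalent entry point for this memberwise copy to call:

#include <cstdio>

// Hypothetical member type with a user-provided copy constructor.
struct Inner {
  Inner() = default;
  Inner(const Inner &) { std::puts("Inner copied"); }
};

// Outer declares no copy constructor of its own; the compiler-generated one
// performs a memberwise copy, which calls Inner's user-provided copy constructor.
struct Outer {
  Inner member;
};

int main() {
  Outer a;
  Outer b = a; // prints "Inner copied"
  (void)b;
}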
I don't see any problem with Swift providing "something" for C++ to call when copying a Swift type. We'll probably need to do this anyway. But I also don't see why the user needs to be worried about this.
I've been counting on eventually getting a @default_deinit attribute for classes and protocols so programmers who care about the performance of classes can opt out of writing class deinits in exchange for solving the arbitrary destroy_value side-effects problem. I never got around to writing an evolution proposal for it, but the thinking was that the only reason to have a custom deinit was to perform some operation with global side effects. So, the only two useful modes are custom vs. default deinits. Default deinits by definition have no observable side effects, so it works out great.
Now it sounds like we might also want to define a "local deinit" to allow custom deinits as long as they only read from reachable memory. But deallocation from an unsafe pointer is a global side effect, introducing dependencies on other arbitrary pointers. Can we even assume that other objects aren't allocated by the custom allocator? Knowing the deinit won't access class properties that aren't reachable from the destroyed object is better than nothing, but won't be easy to get right.
With structs, I hope that programmers will need to opt in to custom deinits via something like a move-only constraint, although that won't be possible if we allow generic substitution of C++ types without any constraints.
Yeah, I think that "you can only touch memory that's only reachable from this object / value" has to be a user promise, not an inferred property. That's precisely because, as you point out, the use cases for custom deinits are generally exactly the things that would totally stymie any reasonable static analysis.
I do think we need to be approaching this as "what restrictions can we impose to enable the optimizations we want regardless of the type being copied or destroyed" rather than "how can we figure out that more types have the special properties that enable the optimizations we want". The first really doesn't seem intractable to me; copy/destroy only touching memory associated with the specific value being constructed/destroyed is the dominant case. It's very rare that copy/destroy does something like maintain some sort of global registration, and it's not at all unreasonable to ask that the memory backing such registries be marked explicitly as "volatile" (or whatever). (Practically speaking, it already has to be concurrency-safe in most cases.) If only "volatile" memory outside the current object/value was touchable by deinits, then most of your optimization problems around e.g. moving releases in/out of accesses would only apply to volatile accesses, which would be statically recognizable.
We had a series of discussions in Yellowstone/DA6 that solidified in my mind that Swift 1.0 didn't need to have rule of 3/5 and that we needed to bake out the rest of the model before we worried about this, so we pushed it off years ago.
However, Swift's internal implementation model intentionally embraces the rule of 5, and imported C++ types should someday be able to fully take advantage of this. This is also important for core Swift features like definite initialization, which needs to reason about the difference between init vs assign, and optimizations like RVO that turn assignments into moves.
The question to me has always been "how do we realize this for Swift programmers" in a way that preserves the principles that make Swift truly great (including progressive disclosure of complexity, a preference for value-semantic types, and trusting the library developer to know better than the language designer). As we've moved forward over the years, Swift has an early but developing model for ownership, as well as a growing community that cares about low-level performance.
I think that now (or perhaps 6-12 months from now, when the concurrency work is settling) would be a good time to reopen these questions. It is not a core premise of Swift that ARC overhead is the only thing that happens during a "copy constructor". Nor is it a core premise of Swift that people who care about rule of 5 should be forced into terrible and inefficient workarounds involving classes to materialize those designs.
My expectation for Swift over time is that we can trust the library developer to know what is best for their clients, to intelligently weigh the tradeoffs, and to achieve their goals. As language designers, we should make sure that Swift programmers don't accidentally stumble into an "experts only" tarpit without understanding the issues, but we don't need to decide that "experts are bad" and that all tarpits should be eliminated by definition.
I agree with John here. The relaxed lifetime behavior of Swift is the right default, as is an opt-in attribute that forces specific weird types to have precise lifetimes. ObjC ARC is good precedent for this as well.
I haven't really thought this out, but what if we just provided users a way to define what should happen onWrite? Instead of user-defined behavior every time any copy is made, this would just allow users to define what should happen for a "real" copy. It would allow custom/efficient copying of things like n-dimensional arrays, but it would also give the optimizer freedom to make as many O(1) copies as it wanted and wouldn't guarantee when a "copy" would take place.
So far we have been mapping C++ types with non-trivial special members to Swift structs with value witness functions synthesized from those special members.
However, we have been importing C++ types with non-trivial copy and destructor semantics as address-only, because instances of such types may depend on their address; the optimizer should not be allowed to arbitrarily explode them into constituent parts and then re-materialize them at a different address without invoking the corresponding C++ special members. It would be interesting to investigate to what extent there is a benefit to manually annotating some types as "movable with memmove" or "copyable with memcpy", and allowing them to be imported as loadable.
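As a rough illustration of that distinction (illustrative types, not anything the importer does today): a type whose representation points into itself genuinely depends on its address, while a type that is "non-trivial" only because its copy constructor has extra behavior might still be safe to relocate with memcpy/memmove:

#include <cstring>

// Address-dependent: 'cursor' points into the object's own buffer, so the
// instance cannot simply be moved to a new address with memmove.
struct SelfReferential {
  char buffer[16] = {};
  char *cursor = buffer;
  SelfReferential() = default;
  SelfReferential(const SelfReferential &other)
      : cursor(buffer + (other.cursor - other.buffer)) {
    std::memcpy(buffer, other.buffer, sizeof(buffer));
  }
};

// Non-trivial only because copying has a side effect; the bytes themselves
// have no address dependence, so a type like this could plausibly be
// annotated as "copyable with memcpy" / "movable with memmove".
inline int copyCount = 0; // hypothetical global tally (C++17 inline variable)
struct Counted {
  int value = 0;
  Counted() = default;
  Counted(const Counted &other) : value(other.value) { ++copyCount; }
};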
Taking that into account, right now C++ interop does not have implications for the semantics of copy_value (or at least I believe so).
However, copy_addr does run arbitrary code (C++ special members, via the corresponding value witness). To what extent is that a problem?
The optimizer wants to be able to add/remove/move copy_addr instructions. I think there is no issue with that. We should expect C++ types in Swift to play by the Swift rules. Specifically, the value witnesses of mapped C++ types (and hence, underlying C++ special members) would not be guaranteed to be invoked at specific program points predictable from the source code. Trying to special-case value or lifetime behavior of C++ types in Swift is going to lead to non-composable effects (either when the C++ type is used in a Swift aggregate, or when it is passed to a Swift generic).
The optimizer wants to assume that copy_addr has weaker side effects than an arbitrary function call. That is an issue, because C++ special members are arbitrary functions. I think restricting C++ special members to only "locally mutating" ones is going to be difficult; I believe designs that access global memory from special members are not uncommon. For example (a sketch of the statistics case follows this list):
lazily initializing a global variable;
collecting some statistics about the objects of this type (say, a hash table would want to provide information like the total amount of memory consumed by the hash tables in the process, actual load factor, distribution of probing lengths etc.);
RAII objects (the Swift parser has lots of them, for example swift::Parser::ContextChange, and of course llvm::SaveAndRestore is the ultimate LLVM example).
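To make the statistics case concrete, here is a hedged sketch (hypothetical names) of a copy constructor and destructor that maintain process-wide statistics; from Swift's point of view these are global side effects of copy/destroy:

// Hypothetical process-wide statistics maintained by special members.
inline long liveTables = 0;  // number of live tables in the process
inline long totalCopies = 0; // how many copies have ever been made

struct HashTable {
  HashTable() { ++liveTables; }
  HashTable(const HashTable &) { ++liveTables; ++totalCopies; } // global side effect of copy
  ~HashTable() { --liveTables; }                                // global side effect of destroy
};

Because the Swift optimizer would be free to add, remove, or move copies and destroys, the values observed in such counters would not be predictable from the source code, which is exactly the tension discussed below.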
@Andrew_Trick Could you provide more details about what types of semantic limitations are actually useful for the optimizer?
There is no semantic difference between copy_value and copy_addr.
Address-only-by-abstraction types (generic parameters) can be represented as SILValues and copied with copy_value. That's what we mean when we say that Swift types are copyable by default. Any C++ types that are substitutable for generic parameters without any additional constraint must also be copyable. Copyable address-only types need to be lowered to an in-memory representation for LLVM, but otherwise they are regular old substitutable SSA values (a copy of the value is substitutable with the original).
A copy_value performs a semantic copy just like copy_addr, so your witness methods will be properly invoked. (In practice, we will always lower copy_value of an address-only type to copy_addr before IRGen).
Swift code simply won't provide the same semantic guarantee on value lifetimes as C++. The Swift compiler won't guarantee the order of copy constructors (or destructors), or the number of copies performed. Hopefully I'm just restating what you're saying above--Swift will not have special-case semantics for C++ types.
Yes, it does. The same applies to copy_value. I don't have a solution for you short of disabling C++ interop in order to allow optimization of partially generic Swift code. I'll be honest: the compiler isn't going to get this right initially; it's going to take time to teach the compiler about copy side effects, and there will be long-term performance tradeoffs.
Some earlier posts in this thread seemed to indicate that C++ compilers have the same problem, or that Swift already has the same problem as C++. That is not true. The Swift compiler needs to optimize in the presence of values with abstract types (not just pointers to abstract types). And it's fairly baked-in that values can be copied without side effects.
From my point of view, a C++ type with global copy side effects should not be generically substitutable without an additional type constraint. But I realize that may not be the programming model you're shooting for.
If I understand correctly (and that's a big if) I think the difference isn't so much with copy_addr and copy_value, but with what they imply. An address-only type will never be destructured, for example, and that is not the case with a loadable type (there are several passes that might destructure loadable values for various reasons). There are places in the optimizer where a copy_value might be turned into a destructure + several copy_values, and that would (hopefully) never happen to a copy_addr. This is important because the former would be OK no matter the C++ type, whereas the latter may be quite problematic if the copy constructor is non-trivial.
Any C++ types that are substitutable for generic parameters without any additional constraint must also be copyable.
I'm not sure I agree with this. I think the idea is that we're going to substitute all C++ types either before SILGen or in a raw SIL pass. See this post.
Swift code simply won't provide the same semantic guarantee on value lifetimes as C++. The Swift compiler won't guarantee the order of copy constructors (or destructors), or the number of copies performed. Hopefully I'm just restating what you're saying above--Swift will not have special-case semantics for C++ types.
For what it's worth, I agree that this is both OK and important.
From my point of view, a C++ type with global copy side effects should not be generically substitutable without an additional type constraint. But I realize that may not be the programming model you're shooting for.
But it would be OK if it were a concrete type (i.e., not a generic substitution)? Does the above proposal for generics solve this?
I think it's worth breaking down what these global copy side effects are. In most cases, their impact is probably not directly exposed to Swift: e.g. Swift code will probably not directly access the variables that hold statistics on the outstanding values of some type. If Swift calls a function that reads these variables, the results might change according to optimization, but I hope everyone agrees that's expected.
My intuition (which could totally be wrong) is that all the optimization problems arise from direct accesses in Swift code to memory that's modified or accessed by copy/destroy operations. We don't want to consider "code observes a different value in memory because of optimization" to be a problem; our concern is just about the optimizer creating a miscompile by e.g. moving a copy/destroy into the middle of an access or to a point where we previously did some sort of store-forwarding or similar optimization. That's why I keep talking about "volatile" storage: I think if we can recognize storage as "volatile" to copy/deinit operations, we can just optimize less aggressively around explicit accesses to that storage, and that should be sufficient. Notably, we do not need to worry about reordering things over encapsulated accesses to that storage because that's just "code might observe a different value", which is acceptable.
So as long as whatever global state is modified by C++ copy/destroy is encapsulated within C++ code, we should be fine. If it's not, and Swift can access that storage directly, we just need to mark it as "volatile" and then we'll know to be less aggressive about it.
This would require volatile accesses to be more-or-less "atomic", because copy/destroys could be moved into the middle of them. That is, you wouldn't want to have two separate volatile variables holding an array and an index into it; you'd want to have a single volatile variable holding a struct that held them both. But this kind of thing is very corner-case, and I think it's completely reasonable for us to tell programmers that if they want to have copy/destroy operations with global side effects, they will have to jump through a few extra hoops with those side effects.
You understand correctly. There's no relevant difference between copy_value and copy_addr. An address-only type should never be destructured. That may happen naturally/accidentally today when SIL opaque values are enabled and we have concrete address-only types, but that's a straightforward bug that needs to be fixed before enabling opaque values.
If the Swift code is fully specialized, then I don't have any concerns.
I'm only concerned with calling generic Swift code with imported C++ types or, for example, creating Swift arrays of imported C++ types.
That's an important observation. I think this is an appropriate place to have an undefined-behavior rule: storage accessed by copy/destroy outside the object cannot be visible to Swift. I don't think we support importing 'volatile', but it might work for now to wrap storage in std::atomic.
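A minimal sketch of that workaround, reusing the hypothetical liveTables counter from the earlier statistics example (whether std::atomic storage imports usefully into Swift is an open question here):

#include <atomic>

// The copy/destroy side effects now target std::atomic storage. The updates
// are concurrency-safe, and the intent is that Swift either cannot see this
// storage directly or treats accesses to it conservatively ("volatile" in the
// loose sense used above).
inline std::atomic<long> liveTables{0};

struct HashTable {
  HashTable() { liveTables.fetch_add(1, std::memory_order_relaxed); }
  HashTable(const HashTable &) { liveTables.fetch_add(1, std::memory_order_relaxed); }
  ~HashTable() { liveTables.fetch_sub(1, std::memory_order_relaxed); }
};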
To be clear, I didn't mean to suggest that this annotation would actually be the C volatile qualifier, just that it needs to be a sort of "volatile" storage as seen by Swift (if we have to import it at all).
Substituting C++ types into Swift generics certainly needs to be possible, Array<cxx_std.string> etc. That other thread is about using C++ templates from a Swift generic.
It's easy to imagine C++ APIs that, to use them fully, require a copy constructor to be defined. This may not be all that realistic, but:
/// A base class for smart pointer types.
///
/// Subclasses are expected to provide "smartness" by defining rule-of-5 members.
template <class T> struct SmartPtr {
  virtual ~SmartPtr() = 0; // pure virtual destructor
  T *raw;                  // the raw pointer
};
// A pure virtual destructor still needs a definition.
template <class T> SmartPtr<T>::~SmartPtr() {}
If you want to define useful SmartPtr subclasses in Swift, you'll need to give them copy c'tors.
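In C++, a useful subclass would look something like the sketch below (retainObject/releaseObject are hypothetical refcounting placeholders, not an API from this thread); the point is that a Swift-defined subclass would need some way to express the same rule-of-5 members:

// Hypothetical refcounting entry points (empty placeholders for illustration).
inline void retainObject(void *) {}
inline void releaseObject(void *) {}

template <class T> struct RefCountingPtr : SmartPtr<T> {
  RefCountingPtr(T *p) { this->raw = p; }
  RefCountingPtr(const RefCountingPtr &other) {
    retainObject(other.raw);
    this->raw = other.raw;
  }
  RefCountingPtr &operator=(const RefCountingPtr &other) {
    retainObject(other.raw);  // retain first so self-assignment stays safe
    releaseObject(this->raw);
    this->raw = other.raw;
    return *this;
  }
  ~RefCountingPtr() override { releaseObject(this->raw); }
};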
I suppose there are some APIs that would require this. I'm just not sure supporting these super-niche APIs (which could easily be fixed with a different C++ interface) is worth the complexity/performance hit of custom copy constructors in Swift.
That being said, I'm not at all opposed to custom copy constructors in Swift (in fact the more I think about it, the more I like my idea of an onWrite special member), I just want to make sure we're adding them for the right reason. That is, I think we could support C++ interop without them, so I don't want C++ to drive the design too much.