The consequence of this recommendation, from what I've seen in many projects, is that people tend to declare gigantic structs (especially for their JSON objects) and pass them around in functions or assign them to variables without considering that this can waste memory and CPU cycles. In some edge cases the overhead can be significant enough to be felt by users.
Also, I vaguely remember Apple saying in some earlier versions of the Swift spec (or accompanying docs) that structs should be used only for small, short-lived things, which made sense!
So why is Apple pushing this? Is it because it's easier to understand, or is there something else?
Passing classes around has ARC overhead, and from someone's perspective that could also be considered a waste of memory and CPU cycles. Additionally, what you think of as a "gigantic struct" may not be one under the hood. If you're referring to JSON objects, I'd assume such structs have strings, arrays, or dictionaries as their fields. Most of the time their storage is not copied when they're passed as arguments, since those are CoW types: they either live on the stack if they're small enough, or live on the heap and aren't copied until there's an actual need.
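To make the CoW point concrete, here's a minimal sketch (the `Message` type is made up for illustration): copying a struct that wraps an `Array` only retains the array's shared buffer; the elements themselves are copied lazily, on the first mutation.

```swift
// Hypothetical struct wrapping a CoW container (Array).
struct Message {
    var tags: [String]
}

var a = Message(tags: ["draft"])
var b = a                // cheap: the tags buffer is shared (retained), not copied
b.tags.append("sent")    // copy-on-write triggers here; only now are the elements copied

assert(a.tags == ["draft"])            // the original is unaffected
assert(b.tags == ["draft", "sent"])
```

So you get value semantics at the source level, while the expensive deep copy is deferred until a write actually happens on shared storage.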
The benefits of value types are enormous, though. It's easier to reason about them, easier to avoid reference cycles, trivial to make them Sendable and/or immutable, and easier to localize mutability to a certain scope when needed. If you're still concerned about copying that you think should be avoidable, I suggest some form of participation in the evolution of the ownership model and the corresponding proposals and pitches. I'm only listing a few that I'm following:
Another point to add is that composition is easier to manage than inheritance. It's too easy to stumble into inheritance pitfalls with classes: you have to figure out whether you want final or open when making them public, deal with override, and remember the initializer rules special to classes (convenience/required). Code complexity grows quickly with class hierarchies.
If you need inheritance-like behavior with value types, you can achieve it with protocol extensions. And nothing forbids conformances to multiple protocols and inheriting multiple protocol extensions, while multiple inheritance with classes is not allowed in Swift for a good reason.
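A minimal sketch of that inheritance-like pattern (the protocol and type names here are made up): one struct picks up shared behavior from two separate protocol extensions, with no class hierarchy involved.

```swift
protocol Named {
    var name: String { get }
}
extension Named {
    // Shared behavior, "inherited" by any conforming type.
    func describe() -> String { "This is \(name)" }
}

protocol Greeting {
    var name: String { get }
}
extension Greeting {
    func greet() -> String { "Hello, \(name)!" }
}

// One struct, behavior from two protocol extensions:
// something multiple class inheritance would be needed for otherwise.
struct User: Named, Greeting {
    let name: String
}

let user = User(name: "Ada")
assert(user.describe() == "This is Ada")
assert(user.greet() == "Hello, Ada!")
```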
To illustrate this: of the dozens of public types in the Swift standard library, only the KeyPath hierarchy uses classes; the rest are structs, enums, and protocol hierarchies with corresponding extensions.
This shouldn't be taken as a recommendation to never use classes; reference types are useful in their own way in certain situations. But in the Swift concurrency world, value types seem to be a sensible default. And when you start writing a new class, also consider how you'd implement Sendable on it, whether to mark it non-Sendable, or whether you actually need an actor after all.
The reason KeyPath uses a class hierarchy is primarily that we didn't have expressive enough existential types to allow for the use of protocols. If we were to introduce the feature today, it could probably be better expressed using a KeyPath protocol, with composable protocols like Writable, Codable, Equatable, etc. to opt in to various additional key path capabilities.
As far as the "giant structs" problem goes, that's a legitimate issue with using value types in Swift today. Our copy controls may help a lot there, but I also think we still need to introduce indirect struct types, which could provide value semantics while behaving more like classes, automatically using copy-on-write to share a buffer and avoid deep copying until actual writes occur.
Setting indirect struct types aside: would what you describe above, if implemented, result in ARC traffic when working with POD structs (e.g. a struct of 1000 integer fields)?
We would use ARC to manage copy-on-write for indirect structs, yeah. There might be some interesting optimizations we could do to allow an indirect struct to be stored in a @noncopyable type without the boxing, since it could be uniquely-owned in that situation. If you must avoid ARC at all costs, then the ownership facilities are probably the way to go.
indirect in structs would be a declaration modifier, like indirect on enums is now, so the marked properties would be placed in a dynamically-allocated, reference-counted box, yeah.
Interesting, but I don't understand how that would be different from just using a class. Structs do have limitations such as no subclassing, but those limitations were due to their value nature, right?
Ownership seems like a more sensible way of optimizing struct copying, no?
Since structs would remain value types, they would still have a number of benefits over classes: you would get copy-on-write behavior to preserve value semantics, so they'd remain concurrency-safe and have static ownership checking. And although they may be dynamically allocated behind the scenes, we wouldn't promise stable identities for indirect structs, which makes a number of optimizations more powerful than they can be for classes. I think there is a good chunk of code out there for which the overhead of large value types is a problem, but which doesn't need absolute manual performance management; switching to ARC and copy-on-write would be sufficient to make such code perform acceptably without requiring a deep rewrite to make ownership-aware.
swift-protobuf is a good concrete example of this. We want generated messages to have value semantics, so we always generate a struct, and then we have some heuristics (probably not ideal ones, but something) that check whether we should put the stored properties for a generated message into a nested storage class instead of directly in the struct. We also have to do this if the message is recursive (directly or indirectly).
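For illustration, the nested-storage pattern looks roughly like this (a simplified, hypothetical sketch, not swift-protobuf's actual generated code): stored properties move into a private class, and every mutating accessor checks reference uniqueness before writing.

```swift
struct Message {
    // Stored properties live in a heap-allocated reference type.
    private final class Storage {
        var text: String = ""
        func copy() -> Storage {
            let new = Storage()
            new.text = text
            return new
        }
    }
    private var storage = Storage()

    var text: String {
        get { storage.text }
        set {
            // Copy the shared storage only if someone else still references it.
            if !isKnownUniquelyReferenced(&storage) {
                storage = storage.copy()
            }
            storage.text = newValue
        }
    }
}

var a = Message()
a.text = "hi"
var b = a        // shares storage; just a retain
b.text = "bye"   // uniqueness check fails, so storage is copied before the write

assert(a.text == "hi")
assert(b.text == "bye")
```

This is exactly the boilerplate that a language-level `indirect` on struct properties could generate for you.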
Declaring those properties as indirect var would eliminate that complexity, in an ideal world*.
* There are some performance subtleties that would need to be explored. For example, does each indirect var get its own dynamic allocation, or do all the indirect vars in a struct share a single dynamically allocated box? I don't immediately know what the perf characteristics end up looking like for different mutating operations without measuring them.
It would definitely take some experimentation to figure out what the ideal behavior is, but my hunch is that we'd generally want one box for all the indirect fields.
Why would it be beneficial to ARC manage a non-indirect struct? MemoryLayout<T> exposes the fact that a value of non-indirect struct type is the struct itself.
There isn't any implicit ARC overhead for structs today. If they contain ARC-managed fields, then copying the struct will retain those fields, but the struct itself does not add any implicit memory management overhead. indirect would more than likely remain something you have to ask for explicitly.
To name a few:
- Most likely, large nested JSON structs are already using copy-on-write under the hood; that's automatic if they use Arrays or Dictionaries, for instance.
- Structs are generally easier to use with structured concurrency.
- Structs can be stored on the stack or used in arena data structures, which can make them more efficient.
- Structs are generally easier to make lock-free.
There is always the right tool for the job. Classes have their place.
So my assumption is correct: Apple is pushing the safe side (structs by default), but of course you can choose the right tool if you know what you're doing, especially in a multithreaded environment. If you don't, go with structs.
What would be interesting is whether anyone has ever measured the effect of this on performance. Copying structs has CPU cache implications: if you have 4 nested calls with a large struct as an argument, you already have 4 copies of it loaded into your CPU cache. In real-life code it can be even worse. Passing structs around feels cheap at the code level, but it isn't.
So cache implications, plus the fact that GUI apps are mostly single-threaded with rare and usually well isolated concurrency. My sense is that of course you would recommend "structs by default" by default, but (ahem) the rest of us should know it's not that simple, right?
This assumes that the compiler hasn't done any optimizations at all, which is not true in practice most of the time.
I wouldn't say it's rare, at least in my personal experience of previously working as an iOS developer. Most GUI apps rely on some I/O (at least for disk access, networking), which is either blocking or asynchronous. It's in the best interest of an app developer to move I/O off the main thread to make sure that users don't see beachballs or frozen animations due to JSON parsing, image loading etc.
I'm not seeing any optimizations with structs passed as arguments unless the function is fully inlined, which AFAIK can happen only for smaller private functions. I'm not aware of any other techniques that would eliminate struct copying.
Those things happen on the main thread even though they are asynchronous. There are a few exceptions in the SDK when working with hardware such as the camera: things returned by the camera asynchronously can arrive in a different thread. The documentation is usually explicit about these things.
For URLRequest it's opt-in, if I remember correctly: by default the result is delivered on the main thread, and that's what people do most of the time, as it doesn't interfere with the UI flow.
Large struct parameters ought to be passed by pointer, and if the argument the caller passes cannot be mutated while the callee executes, the caller ought to pass a pointer to the value it already has rather than make a full copy of the struct. This should be reliable for local variables and for let globals and properties, but it's less reliable for mutable globals and class properties, since we generally can't tell whether a function might have another reference to the same object to write through.
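Relatedly, the parameter ownership modifiers from SE-0377 let callers state this convention explicitly today. A quick sketch (the `LargeState` type is made up; assumes Swift 5.9+): marking the parameter `borrowing` asks for the argument to be passed without copying or consuming it, and the callee only reads through the borrow.

```swift
// A deliberately large value type.
struct LargeState {
    var samples: (Double, Double, Double, Double,
                  Double, Double, Double, Double) = (0, 0, 0, 0, 0, 0, 0, 0)
    var count: Int = 0
}

// `borrowing`: the callee borrows the caller's value instead of
// receiving its own copy; it may read it but not consume or mutate it.
func total(_ state: borrowing LargeState) -> Int {
    state.count
}

let state = LargeState(count: 42)
assert(total(state) == 42)
```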