Why does Apple recommend to use structs by default?

Yes, I was replying to Max's claim that the compiler should somehow optimize struct passing automatically. But the compiler has no chance in most cases, unless (again, this is AFAIK) the function is private and small enough to be fully inlined.

Are you suggesting using

func foo(p: UnsafePointer<LargeStruct>)

instead of

func foo(p: LargeStruct)

or did I misread you?

He's saying that `func foo(p: LargeStruct)` should pass p by pointer instead of by value. Automatically, as a compiler optimization.
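To make the two conventions concrete, here is a minimal sketch (the struct and function names are made up for illustration). Semantically the first version copies the struct; the suggestion above is that the compiler could lower it to something like the second, pointer-taking version when the type is large:

```swift
// Hypothetical large struct, purely for illustration.
struct LargeStruct {
    var a = 0, b = 0, c = 0, d = 0
    var e = 0, f = 0, g = 0, h = 0
}

// Pass by value: semantically a copy of the whole struct.
func sumByValue(_ p: LargeStruct) -> Int {
    p.a + p.h
}

// The same operation with the by-pointer convention spelled out by hand.
func sumByPointer(_ p: UnsafePointer<LargeStruct>) -> Int {
    p.pointee.a + p.pointee.h
}

var s = LargeStruct()
s.a = 1
s.h = 2
print(sumByValue(s))                                   // 3
print(withUnsafePointer(to: s) { sumByPointer($0) })   // 3
```

Both calls compute the same result; the question in this thread is whether the compiler should pick the second lowering for you when the struct is big enough.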


The more subtle aspect of whether it "has no chance in most cases" is how heavily the codebase makes use of shared mutable state. Referencing back to the topic, one of the reasons we recommend using value types is that pervasive use of value types and local state allows parameter passing and other overheads to be optimized in most cases when they're dominant.


I’m curious if resilient structs are also always passed by pointer, or if the runtime provides enough information that the caller and callee can dynamically figure out which registers contain which elements of the struct.

Resilient structs, and generic structs or anything else we don't know a definite size for at compile time, are passed by pointer. For a resilient struct, the implementation may use a pass-in-register convention internally if it knows the values are small enough to do so.


FWIW our app state is represented by one deeply nested struct, which itself contains an even bigger nested struct representing model data from the DB.

We use the reducer pattern to update the state object, which happens many (> 200) times per second. Despite this, the updates to the state object (and their presumed copying) mostly don’t even register on a performance trace at all, even on older devices and in our wasm builds. In the worst cases, large batches of state updates show up as taking a few microseconds on wasm.
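For readers unfamiliar with it, the reducer pattern described above looks roughly like this. All of the type and case names here are invented stand-ins, not the poster's actual code; the key point is that the state is a plain struct mutated through `inout`, and copy-on-write storage (Array, String, Dictionary) keeps the copies cheap:

```swift
// Stand-in for the big DB-backed model struct.
struct ModelData {
    var items: [String] = []
}

// Stand-in for the deeply nested app state struct.
struct AppState {
    var counter = 0
    var model = ModelData()
}

enum Action {
    case increment
    case addItem(String)
}

// The reducer mutates the state in place via `inout`; Swift's
// copy-on-write collections avoid deep copies on each update.
func reduce(_ state: inout AppState, _ action: Action) {
    switch action {
    case .increment:
        state.counter += 1
    case .addItem(let item):
        state.model.items.append(item)
    }
}

var state = AppState()
reduce(&state, .increment)
reduce(&state, .addItem("hello"))
print(state.counter, state.model.items.count)   // 1 1
```

Running hundreds of such updates per second mostly touches a few fields and bumps some reference counts, which is consistent with the negligible cost the poster reports.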

While I’m sure we can make this even more efficient with ownership and so on, for our case I am extremely happy with both the performance and convenience of using structs here.

I don’t doubt that there are cases where even larger structs in certain configurations (e.g. ones that contain all custom structs rather than implicit COW types like Array and String) might lead to less acceptable overheads. Maybe using such a huge struct in a tight loop for example would not be a great idea. But I wanted to chime in to reassure devs – especially those new to Swift – that you’ll most likely be fine performance-wise unless you really try not to be.


Bring back NSZone! (Not really.) (But, wait, maybe?)

On OmniWeb we actually used NSZones to keep different web pages in different sets of memory pages, with the idea that you normally only actively browse one page at once (even if other tabs are open).

I have NO idea if this actually made anything faster, because back then we didn’t really have the bandwidth to do performance testing. So maybe there’s something to allowing the programmer to pick their own zones? But, also, what the heck do I know, I’m a front-end dude.


The modern take on NSZone would be an arena allocator. There has been some discussion on this forum about what it might take to give Swift the ability to allocate types out of a custom arena.
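For context, an arena (or bump) allocator hands out memory from one preallocated block by advancing an offset, and frees everything at once when the arena goes away. This is a rough hand-rolled sketch of the idea, not the language feature being discussed:

```swift
// Rough sketch of a bump/arena allocator; not a proposed Swift API.
final class Arena {
    private let base: UnsafeMutableRawPointer
    private let capacity: Int
    private var offset = 0

    init(capacity: Int) {
        self.capacity = capacity
        self.base = .allocate(byteCount: capacity, alignment: 16)
    }

    // The whole arena is released in one shot; individual
    // allocations are never freed separately.
    deinit { base.deallocate() }

    // Bump-allocate space for one T; returns nil when the arena is full.
    func allocate<T>(_ type: T.Type) -> UnsafeMutablePointer<T>? {
        let align = MemoryLayout<T>.alignment
        let start = (offset + align - 1) / align * align
        guard start + MemoryLayout<T>.stride <= capacity else { return nil }
        offset = start + MemoryLayout<T>.stride
        return (base + start).bindMemory(to: T.self, capacity: 1)
    }
}

let arena = Arena(capacity: 1024)
if let p = arena.allocate(Int.self) {
    p.initialize(to: 42)
    print(p.pointee)   // 42
}
```

The appeal, much like NSZone, is locality and cheap bulk deallocation; the hard part for Swift is making class instances and copy-on-write buffers allocate out of such an arena safely, which is what the forum discussions are about.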


WELCOME TO THE ARENA OF DESTINY ok sorry I’m a bit loopy today. I’ll go research what arena means in this context.