Unowned references have more overhead than strong references?

According to the "Analyze heap memory" WWDC session, unowned references are actually many times more expensive than strong references. I wasn't previously aware of this - unowned in my mind was basically just a way to get a non-ARC pointer. Apparently that's not accurate.

How is unowned implemented, then? Is it essentially the same as weak (i.e. an allocated-on-demand side table on the target object)?

What makes them so expensive, compared to strong references which are already relatively expensive compared to basic pointers (due to the spurious ARC traffic that's not always optimised out)?

2 Likes

I think you want unowned(unsafe). Regular unowned is like a weak reference that is guaranteed to crash if you attempt to access the referenced object after it's gone away.

2 Likes

Apparently. I was vaguely aware of that variation; I just never realised it was so dramatically different from plain unowned.

Right, but how does it do that (and why is it so expensive)?

My guess is that it's actually doing what weak does - i.e. tracking the references in a side table and zeroing them when the object deinits - but that instead of requiring you to explicitly handle the optionality it's basically acting like force-unwrapping?

But if that's the case, shouldn't it be basically the same cost as weak? According to the aforementioned presentation it's less than half the cost.

I've not stayed totally on top of the details here but I should have been more specific: the "like weak but guaranteed to crash" is only true at the conceptual level. IIRC at the implementation level, unowned doesn't do the same side table shenanigans as weak, and instead uses the zombie-object approach that weak references used pre-Swift-4 (since unowned references aren't supposed to outlive the referenced object). That is, we track both the strong reference count and the unowned reference count, and we don't deallocate the memory until both the strong and unowned counts go to zero. So unowned(safe) and strong references both do RC and point directly at the object.

2 Likes

For "pure Swift" classes without ObjC heritage, I would expect them to be about the same cost. The object has two refcount fields for "strong" and "unowned" refcounts. When the strong refcount hits zero, the object is deinitialized, and the unowned refcount is decremented; when the unowned refcount hits zero, the object memory is freed. Keeping the memory around until the unowned refcount hits zero allows remaining unowned references to check whether the object is still valid before making a strong retain.

For ObjC-heritage classes, there is no second refcount, so they do get implemented like weak references.

7 Likes

Unowned references are pretty much inherently slower than strong references: we have to do work to turn the unowned reference into a strong reference in order to actually use the object, and then of course we have to release that strong reference when we're done with it.

We probably also don't optimize them effectively in situations where we know the reference is valid, but I think that's unlikely to have a significant impact compared to the above.
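
To make that concrete, here's a rough sketch of what a call through an unowned property implies compared to a strong one; the commented steps are a conceptual expansion, not literal compiler output, and the names are illustrative.

final class Model {
    func update() { /* ... */ }
}

final class Controller {
    unowned let model: Model     // unowned(safe) reference
    let strongModel: Model       // ordinary strong reference

    init(model: Model) {
        self.model = model
        self.strongModel = model
    }

    func tick() {
        // Conceptually, each use of the unowned `model` does roughly:
        //   1. check the object hasn't already been deinitialized (else trap)
        //   2. bump its strong refcount (promote to a temporary strong reference)
        //   3. call the method
        //   4. drop that temporary strong reference again
        model.update()

        // A strong reference is used directly; at most ARC inserts a
        // retain/release pair here, which it can often optimize away.
        strongModel.update()
    }
}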

10 Likes

Ah, so they implicitly convert their 'unowned' retain to a 'strong' retain each time they're used? And that presumably accounts for the extra [CPU time] cost that the WWDC presentation asserts?

If so, though, then a factor of four seems surprisingly large. :thinking:

Since one cannot control the scope of that temporary strong retain, unlike for weak references, it seems like repeated use of the same item [within a code block] might end up being more expensive than a weak reference that just 'unwraps' once for the whole block?

(perhaps the optimiser can eliminate duplicate 'unwraps', but I'm assuming it's not to be relied upon…?)

There's no difference between unowned and weak references in terms of your ability to control the scope of a strong reference you extract from it. If you repeatedly use a weak reference in a block of code, you'll naturally end up separately promoting that to a strong reference for each use. If you want to guarantee to avoid that, you can assign it to a local variable, and the weak reference will only be read once. The same thing works for unowned references.

5 Likes

Weak references being Optional might more strongly encourage well-scoped use, but you can still "control the scope of the strong retain" by assigning to a variable, just like you would typically if let a weak reference:

let foo = someObject.foo
// we now have a strong reference to `foo`
foo.bar()
foo.bas()
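
For comparison, if foo were instead a weak property (an assumption for this sketch), the equivalent well-scoped pattern just adds the unwrap:

if let foo = someObject.foo {
    // `foo` is a strong reference for the rest of this block
    foo.bar()
    foo.bas()
}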

Unfortunately, it's hard for me to recommend depending on the optimizer to save you if you fail to do this, since it needs to generally be conservative with accesses into objects; being shared mutable state, it's hard to know that other code in the program didn't change the state of the object graph between operations.

4 Likes

If the compiler had an optimization to combine nearby weak/unowned promotions, it might be easier to apply it to unowned references than weak ones. Consider code like this:

  ref?.foo()
  bar()
  ref?.foo()

where ref is either a weak or an unowned reference. The optimization in either case is to use the strong reference acquired from the first load of ref as the value of the second, extending it across the call to bar. (Assume that ref is immutable or otherwise inaccessible so that this isn't semantically invalid for other reasons.)

Now suppose that bar causes the last (other) strong reference to the object referenced by ref to go away, so that the optimization is actually extending the lifetime of the object, causing the second call to happen when it should potentially be skipped.

If ref is a weak reference, extending the object lifetime like that seems obviously semantically wrong. Loading from a weak reference is a semantic test for whether the object still exists, and we should not do optimizations that would change that result, at least in patterns like this. (I think we generally do want to reserve some flexibility to destroy things early, but destroying things late seems very bad.)

If ref is an unowned reference, I feel there's at least an argument that it's okay to extend the object lifetime. The programmer is almost certainly not trying to get a trap here — most likely, they have a good reason to think that the object does still exist for this entire duration, and they won't be upset by an extension in the event that they're wrong. (The strongest argument not to do this optimization is that it would probably be sensitive to build configuration: we wouldn't be guaranteeing to do the optimization, so we could actually be hiding a bug in release builds that would show up in debug builds.)
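
Written out by hand, that lifetime extension amounts to something like the following sketch (weak case shown; for an unowned ref, a plain let binding performs the one-time promotion):

  if let strongRef = ref {   // single load/promotion of `ref`
    strongRef.foo()
    bar()                    // strongRef keeps the object alive across this call
    strongRef.foo()          // so this second call now always happens
  }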

6 Likes

FWIW to me this seems like a reasonable optimisation. Things that fail in debug builds but not release builds are usually acceptable.

It'd be cool if the compiler could try to warn, though, if it has reason to believe the coder's making a bad assumption (with an explanation of how the second dereference could [in principle] be a nil dereference, and a fix-it to use an explicit strong temporary reference).

Of course, one pertinent question is how often this optimisation would apply, anyway.