Improving reference counting performance

Take a look at this paper:

Its authors tested an approach (biased reference counting) to improve Swift runtime performance. The paper is short and is worth reading.

Reference counting is one of the few areas of the language that may have dramatic performance impact in our code. While not advocating for following this approach, I would like to nudge Core Team into examining strategies for improving reference counting performance.

If there are already plans or work being done in the field of Swift reference counting, I would love to learn more.

Thank you.


An interesting paper.

One remark:

As shown in the figure, performing RC operations takes on average 42% of the execution time in client programs

Calling microbenchmarks "client programs" is a little dubious. In real-world client software, RC accounts for far less than 42% of the execution time.

And real applications probably also use more shared references than these small benchmarks do.

The server software numbers are far more interesting.

A nice read.

FYI, I believe the Core Team already has plans that could tackle this problem at the language level, and those plans are not restricted to an enhanced reference counting mechanism.

For example, at the end of section 3.2 of the paper, it says

However, as argued in Section 1, the Swift compiler does not know this because Swift compiles components separately.

If I understand it correctly, a new ownership model could provide the compiler with enough information to eliminate much of the reference counting overhead.

Yes, those are my thoughts as well. I think essentially it all comes down to 3 different ways of reducing the overhead:

  1. Explicit language constructs, be they ownership-based move semantics or something else, that clearly indicate to the compiler that a reference's count can be optimised (possibly removing the counter altogether).

  2. New rules for the SIL optimiser that achieve the same result as #1, but with no changes to the Swift source code.

  3. Runtime optimisations, which is what the paper is about. There may be more ways to reduce reference counting overhead here as well.

Those optimisation targets can be pursued independently, if I understand correctly.

For anyone wondering what the linked paper (which is 12 pages long) is about, here’s a brief summary:

The main idea is to use what the paper calls “biased reference counting”, meaning that the thread which owns an object can update its reference count non-atomically, whereas other threads must update it atomically.
