A roadmap for improving Swift performance predictability: ARC improvements and ownership control

Swift's high-level semantics try to relieve programmers from thinking about memory management in typical application code. In situations where predictable performance and runtime behavior are needed, though, the variability of ARC and Swift's optimizer have proven difficult for performance-oriented programmers to work with. The Swift Performance team at Apple is working on a series of language changes and features that will make the ARC model easier to understand, while also expanding the breadth of manual control available to the programmer. Many of these features are based on concepts John McCall had previously sketched out in the Ownership Manifesto ([Manifesto] Ownership), and indeed, the implementation of these features will also provide a technical foundation for move-only types and the other keystone ideas from that manifesto. We will be posting pitches for the features described in this document over the next few months.

We want these features to fit within the "progressive disclosure" ethos of Swift. These features should not be something you need to use if you're writing everyday Swift code without performance constraints, and similarly, if you're reading Swift code, you should be able to understand the non-ARC-centric meaning of code that uses these features by ignoring the features for the most part. Conversely, for programmers who are tuning the performance of their code, we want to provide a predictable model that is straightforward to understand.

Lexical lifetimes

Our first step does not really change the surface language at all, but makes the baseline behavior of memory management in Swift more predictable and stable, by anchoring lifetimes of local variables to their lexical scope. Going back to Objective-C ARC, we tried to avoid promising anything about the exact lifetimes of bindings, saying that code should not generally rely on releases and deallocations happening at any specific point in time between the final use of the variable and the variable's end of scope. We wanted to have our cake and eat it too, allowing debug builds to compile straightforwardly, with good debugger behavior, while also retaining the flexibility for optimized builds to reduce memory usage and minimize ARC traffic by shortening lifetimes to the time of use.

However, in practice, this was a difficult model for developers to work with: different behavior in debug vs. release builds can lead to subtle, easy-to-miss bugs sneaking through testing. Many common patterns were also technically invalid by the old ARC language rules. For example, if you use a weak reference to avoid reference cycles between a controller and delegate, like this:

class Controller {
  weak var delegate: MyDelegate?

  init(delegate: MyDelegate) {
    self.delegate = delegate
  }

  func callDelegate() {
    _ = delegate!
  }
}

let delegate = MyDelegate()
let controller = Controller(delegate: delegate)
controller.callDelegate()

then the delegate variable’s lifetime would be in jeopardy as soon as it’s done being passed to Controller’s initializer, since that is the last use of the variable. Because the delegate variable was the only strong reference to the MyDelegate object, that object would be deallocated immediately, causing the weak reference in the controller to be niled out even before the initializer expression has finished being evaluated. Every time the optimizer has improved and discovered new opportunities to shorten object lifetimes, we've inevitably broken working code that looks similar to this.

The wishy-washy "we can release at any time" rule also made certain constructs dangerous, such as anything that interacts with C's errno construct. If a variable can be released at any point, releasing it may trigger a deinitializer; that deinitializer may call free(3); and that free can clobber errno. Even if it is unlikely that the optimizer would choose to shorten the lifetime of an object exactly to the window between a C library call and the subsequent check of errno for the error state, the existence of the possibility makes the programming model more hazardous than it needs to be.
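A minimal, runnable sketch of the hazard described above; `Logger`, `readByte`, and the path are made-up names for illustration, not from the roadmap itself:

```swift
#if canImport(Glibc)
import Glibc
#else
import Darwin
#endif

final class Logger {
    private let buffer: UnsafeMutableRawPointer
    init() { buffer = malloc(64)! }
    deinit { free(buffer) } // free(3) is allowed to touch errno
}

func readByte(from path: String) -> Int32 {
    let logger = Logger()
    _ = logger                      // last use of `logger`...
    let fd = open(path, O_RDONLY)
    // ...so under the old "release anywhere" rule, `logger`'s deinit (and
    // the free(3) inside it) could run here, clobbering the errno that
    // `open` just set. Lexical lifetimes anchor the release to end of scope.
    guard fd >= 0 else { return errno }
    close(fd)
    return 0
}

// Opening a nonexistent path reports ENOENT through errno.
assert(readByte(from: "/definitely/not/a/real/path") == ENOENT)
```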

For these reasons, we think it makes sense to change the language rules to follow most users' intuition, while still giving us the flexibility to optimize in important cases. Rather than say that releases on variables can happen literally anywhere, we will say that releases are anchored to the end of the variable's scope, and that operations such as accessing a weak reference, using pointers, or calling into external functions act as deinitialization barriers that limit the optimizer's ability to shorten variable lifetimes. The upcoming proposal will go into more detail about what exactly anchoring means, and what constitutes a barrier, but in our experiments, this model provides much more predictable behavior and greatly reduces the need for things like withExtendedLifetime in common usage patterns, without sacrificing much performance in optimized builds. The model remains less strict than C++'s strict scoping model, since it still allows for reordering of releases that go out of scope at the same time, but we haven't seen order dependencies among deinits as a major problem in practice. The optimizer in this model can still shorten variable lifetimes when there aren't deinitialization barriers, since such code is unlikely to observe the effects of deinitialization.
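For reference, this is the `withExtendedLifetime` workaround that the lexical-lifetime rules make largely unnecessary; the surrounding class names here are assumed for the sketch:

```swift
final class MyDelegate {}
final class Controller {
    weak var delegate: MyDelegate?
}

func makeController() -> Controller {
    let controller = Controller()
    let delegate = MyDelegate()
    controller.delegate = delegate
    // Under the old rules, `delegate` could be released right after its
    // last use above; withExtendedLifetime pins the lifetime explicitly.
    withExtendedLifetime(delegate) {
        assert(controller.delegate != nil) // guaranteed alive in here
    }
    return controller
}

// Once the last strong reference goes away, the weak reference is nil.
assert(makeController().delegate == nil)
```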

move function for explicit ownership transfer

Pitch: [Pitch] Move Function + "Use After Move" Diagnostic

Having set a predictable baseline for memory management behavior, we can provide tools to give users additional control where they need it. Shortening variable lifetimes is still an important optimization for reducing ARC traffic, and for maintaining uniqueness of copy-on-write data structures when building up values. For instance, we may want to build up an array, use that array as part of a larger struct, and then be able to update the struct efficiently by maintaining the array's uniqueness. If we write this:

struct SortedArray {
    var values: [String]
    init(values: [String]) {
        self.values = values
        // Ensure the values are actually sorted
        self.values.sort()
    }
}

then under the lexical lifetimes rule, the call to self.values.sort() can potentially trigger a copy-on-write, since the underlying array is still referenced by the values argument passed into the initializer. The optimizer could copy-forward values into the newly-initialized struct, since it is the last use of values in its scope, but this isn’t guaranteed. We want to explicitly guarantee that copy forwarding happens here, since it affects the time complexity of making updates to the aggregate. For this purpose, we will add the move function, which explicitly transfers ownership of a value in a local variable at its last use:

struct SortedArray {
    var values: [String]
    init(values: [String]) {
        // Ensure that, if `values` is uniquely referenced, it remains so,
        // by moving it into `self`
        self.values = move(values)
        // Ensure the values are actually sorted
        self.values.sort()
    }
}

By making the transfer of ownership explicit with move, we can guarantee that the lifetime of the values argument is ended at the point we expect. If its lifetime can't be ended at that point, because there are more uses of the variable later on in its scope, or because it's not a local variable, then the compiler can raise errors explaining why. Since values is no longer active, self.values is the only reference remaining in this scope, and the sort method won't trigger an unnecessary copy-on-write.
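The uniqueness property that `move` is designed to preserve can be observed in today's Swift with the standard library's `isKnownUniquelyReferenced`; `Storage` here is a made-up stand-in for a copy-on-write buffer class:

```swift
final class Storage {
    var values: [String] = []
}

func uniquenessDemo() -> (Bool, Bool) {
    var a = Storage()
    let uniqueBefore = isKnownUniquelyReferenced(&a) // true: mutate in place
    let b = a                                        // a second strong reference
    let uniqueAfter = isKnownUniquelyReferenced(&a)  // false: CoW must copy now
    _ = b
    return (uniqueBefore, uniqueAfter)
}

assert(uniquenessDemo() == (true, false))
```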

Managing ownership transfer across calls with argument modifiers

Another currently-underspecified part of Swift's ownership model is how the language transfers ownership of values across calls. When passing an argument to a function, the caller can either let the function borrow the argument value, letting the callee assume temporary ownership of the existing value for the duration of the call and taking ownership back when the callee returns, or it can let the callee consume the argument, relinquishing the caller's own ownership of the argument, and giving the callee the responsibility to either release the value when it's done with it, or transfer ownership somewhere else. inout arguments must borrow their argument, because they perform in-place mutation, but Swift today does not otherwise specify which convention it uses for regular arguments. In practice, it follows some heuristic rules:

  • Most regular function arguments are borrowed.
  • Arguments to init are consumed, as is the newValue passed to a set operation.

The motivation for these rules is that initializers and setters are more likely to use their arguments to construct a value, or modify an existing value, so we want to allow initializers and setters to move their arguments into the result value without additional copies, retains, or releases. These rules are a good starting point, but we may want to override the default argument conventions to minimize ARC and copies. For instance, the append method on Array would also benefit from consuming its argument so that the new values can be forwarded into the data structure, and so would any other similar method that inserts a value into an existing data structure. We can add a new argument modifier to put the consuming convention in developer control:

extension Array {
    mutating func append(_ value: consuming Element) { ... }
}

On the other hand, an initializer may take arguments that serve only as options or other incidental input to the initialization process, without actually being used as part of the newly-initialized value, so the default consuming convention for initializers imposes an unnecessary copy in the call sequence, since the caller must perform an extra retain or copy to balance the consumption in the callee. So we can also put the nonconsuming convention in developer control:

struct Foo {
    var bars: [Bar]
    // `name` is only used for logging, so making it `nonconsuming`
    // saves a retain on the caller side
    init(bars: [Bar], name: nonconsuming String) {
        print("creating Foo with name \(name)")
        self.bars = move(bars)
    }
}

(These modifiers already exist in the compiler, spelled __owned and __shared, though we think those names are somewhat misleading in their current form.)
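The underscored spellings mentioned above do compile in today's Swift (unofficial and subject to change); a small sketch with assumed function names:

```swift
func consumeValues(_ values: __owned [String]) -> [String] {
    return values // ownership was transferred in; forward it out without a copy
}

func borrowValues(_ values: __shared [String]) -> Int {
    return values.count // only borrows; the caller keeps ownership
}

assert(consumeValues(["a"]) == ["a"])
assert(borrowValues(["a", "b"]) == 2)
```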

read and modify accessor coroutines for in-place borrowing and mutation of data structures

Swift provides computed properties and subscripts to allow types to abstract over their physical representation, defining properties and data structures in terms of user-defined "get" and "set" functions. However, there is a fundamental cost to the get/set abstraction; under the sugar, getters and setters are plain old functions. The getter has to provide the accessed value as a return value, and returning a value requires copying it. The setter then has to take the new value as an argument, and as previously discussed, that argument is callee-consumed by default. If we're performing what looks like an in-place modification on a computed property, that involves calling the getter, applying the modification to the result of the getter, and then calling the setter. Even if we define a computed property that attempts to transparently reveal an underlying private stored property:

struct Foo {
    private var _x: [Int]
    var x: [Int] {
        get { return _x }
        set { _x = newValue }
    }
}

we're adding overhead by accessing through that computed property, since:

foo.x.append(1)

evaluates to:

var foo_x = foo.get_x()
foo_x.append(1)
foo.set_x(foo_x)

For copy-on-write types like Array, this is particularly undesirable, since the temporary copy returned by the getter forces the array contents to always be copied when the value is modified.

We would really like computed properties and subscripts to be able to yield access to part of the value, allowing the code accessing the property to work on that value in-place. Our internal solution for this in the standard library is to use single-yield coroutines as alternatives to get/set functions, called read and modify:

struct Foo {
    private var _x: [Int]
    var x: [Int] {
        read { yield _x }
        modify { yield &_x }
    }
}

A normal function stops executing once it's returned, so normal function return values must have independent ownership from their arguments; a coroutine, on the other hand, keeps executing, and keeps its arguments alive, after yielding its result until the coroutine is resumed to completion. This allows for coroutines to provide access to their yielded values in-place without additional copies, so types can use them to implement custom logic for properties and subscripts without giving up the in-place mutation abilities of stored properties. These accessors are already implemented in the compiler under the internal names _read and _modify, and the standard library has experimented extensively with these features and found them very useful, allowing the standard collection types like Array, Dictionary, and Set to implement subscript operations that allow for efficient in-place mutation of their underlying data structures, without triggering unnecessary copy-on-write overhead when data structures are nested within one another.
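Since these accessors are already implemented under the internal names `_read` and `_modify`, the pattern can be tried in today's Swift (unofficial syntax, subject to change):

```swift
struct Wrapper {
    private var _x: [Int] = []
    var x: [Int] {
        _read { yield _x }
        _modify { yield &_x }
    }
}

func coroutineDemo() -> [Int] {
    var w = Wrapper()
    w.x.append(1) // mutates in place through the `_modify` coroutine
    w.x.append(2) // no temporary whole-array copy is made for the access
    return w.x
}

assert(coroutineDemo() == [1, 2])
```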

Requiring explicit copies on variables

The features described so far greatly increase a Swift programmer's ability to control the flow of ownership at the end of value lifetimes, across functions, and through property and subscript accesses. In the middle, though, Swift is still normally free to introduce copies on values as necessary in the execution of a function. The compiler should be able to help developers optimize code, and keep their code optimized in the face of future change, by allowing implicit copying behavior to be selectively disabled, and offering an explicit copy operation to mark copies where they're needed:

class C {}

func borrowTwice(first: C, second: C) {}
func consumeTwice(first: consuming C, second: consuming C) {}
func borrowAndModify(first: C, second: inout C) {}

func foo(x: @noImplicitCopy C) {
    // This is fine. We can borrow the same value to use it as a
    // nonconsuming argument multiple times.
    borrowTwice(first: x, second: x)
    // This would normally require copying x twice, because
    // `consumeTwice` wants to consume both of its arguments, and
    // we want x to remain alive for use here too.
    // @noImplicitCopy would flag both of these call sites as needing
    // `copy`.
    consumeTwice(first: x, second: x) // error: copies x, which is marked noImplicitCopy
    consumeTwice(first: copy(x), second: copy(x)) // OK

    // This would also normally require copying x once, because
    // modifying x in-place requires exclusive access to x, so
    // the `first` immutable argument would receive a copy instead
    // of a borrow to avoid breaking exclusivity.
    borrowAndModify(first: copy(x), second: &x)

    // Here, we can `move` the second argument, since it is the final
    // use of `x`
    consumeTwice(first: copy(x), second: move(x))
}

For a programmer looking to minimize the excess copies and ARC traffic in their code, making copies explicit like this is essential feedback to help them adjust their code, changing argument conventions, adopting accessor coroutines, and making other copy-avoiding changes.

Generalized nonescaping arguments

We can selectively prevent implicit copies on a borrowed function argument, as laid out above, but we can also selectively prevent explicit copies as well. If we do that, then the argument is effectively non-escaping, meaning the callee cannot copy and store the value anywhere it can be kept alive beyond the duration of the call. We already have this concept for closures—closure arguments are nonescaping by default, and must be marked @escaping to be used beyond the duration of their call. Making closures nonescaping has both performance and correctness benefits; a nonescaping closure can be allocated on the stack and never needs to be retained or released, instead of being allocated on the heap and reference-counted like an escaping closure. Nonescaping closures can also safely capture and modify inout arguments from their enclosing scope, because they are guaranteed to be executed only for the duration of their call, if at all.
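The nonescaping-closure behavior described above is runnable today: a closure that captures an `inout` parameter must be nonescaping, precisely because it cannot outlive the call.

```swift
func incrementRepeatedly(_ value: inout Int, times: Int) {
    let bump = { value += 1 } // nonescaping local closure: allowed to capture inout
    for _ in 0..<times { bump() }
    // Storing `bump` anywhere that outlives this call would be a compile error.
}

func demo() -> Int {
    var x = 0
    incrementRepeatedly(&x, times: 3)
    return x
}

assert(demo() == 3)
```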

Non-closure types could benefit from these performance and safety properties as well. Swift’s optimizer can already stack-promote classes, arrays, and dictionaries in limited circumstances, but its power is limited by the fact that function calls must generally be assumed to escape their arguments. Being able to mark arbitrary arguments as @nonescaping could make the optimizer more powerful:

func foo(x: Int, y: Int, z: Int) {
    // We would like to stack-allocate this array:
    let xyz = [x, y, z]
    // but this call makes it look like it might escape.
    print(xyz)
}

// However, if we mark print's argument as nonescaping, then we can still stack
// allocate xyz.
func print(_ args: @nonescaping Any...) { }

Furthermore, there are many APIs, particularly low-level ones such as withUnsafePointer, that invoke body closures with arguments that must not be escaped, and the language currently relies on the programmer to use them correctly. Being able to make the arguments to these functions nonescaping would allow the compiler to enforce that they are used safely. For instance, if withUnsafePointer declares its body closure as taking a nonescaping argument, the compiler can then enforce that the pointer is not misused:

func withUnsafePointer<T, R>(to: T, _ body: (@nonescaping UnsafePointer<T>) -> R) -> R

let x = 42
var xp: UnsafePointer<Int>? = nil
withUnsafePointer(to: x) { p in
    xp = p // error! can't escape p
}

Borrow variables

When working with deep object graphs, it’s natural to want to assign a local variable to a property deeply nested within the graph:

let greatAunt = mother.father.sister

However, with shared mutable objects, such as global variables and class instances, these local variable bindings necessitate a copy of the value out of the object; either mother or mother.father above could be mutated by code anywhere else in the program referencing the same objects, so the value of mother.father.sister must be copied to the local variable greatAunt to preserve it independently of changes to the object graph from outside the local function. Even with value types, other mutations in the same scope would force the variable to copy to preserve the value at the time of binding. We may want to prevent such mutations while the binding is active, in order to be able to share the value in-place inside the object graph for the lifetime of the variable referencing it. We can do this by introducing a new kind of local variable binding that binds to the value in place without copying, while asserting a borrow over the objects necessary to access the value in place:

// `ref` comes from C#, as a strawman starting point for syntax.
// (It admittedly isn't a perfect name, since unlike C#'s ref, this would
// actively prevent mutation of stuff borrowed to form the reference, and
// if the right hand side involves computed properties and such, it may not
// technically be a reference)
ref greatAunt = mother.father.sister
mother.father.sister = otherGreatAunt // error, can't mutate `mother.father.sister` while `greatAunt` borrows it
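The copy that a plain `let` binding performs today is observable; this sketch fills in minimal class definitions (`Father`, `Mother` bodies are assumed) around the example names:

```swift
final class Father {
    var sister = "Ada"
}
final class Mother {
    let father = Father()
}

func bindingCopies() -> (local: String, graph: String) {
    let mother = Mother()
    let greatAunt = mother.father.sister // `let` copies the value out
    mother.father.sister = "Grace"       // mutating the shared graph...
    return (greatAunt, mother.father.sister) // ...doesn't touch the local copy
}

let result = bindingCopies()
assert(result.local == "Ada" && result.graph == "Grace")
```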

We have a similar problem with unavoidable copies when passing properties of a class instance as arguments to a function. Because the callee might modify the shared state of the object graph, we normally must copy the argument value. Using a borrow variable, to explicitly borrow the value in place, gives us a way to eliminate this copy:

print(mother.father.sister) // copies mother.father.sister

ref greatAunt = mother.father.sister
print(greatAunt) // doesn't copy, since it's already borrowed

It’s also highly desirable to be able to perform multiple mutations on part of an object graph in a single access. If we write something like:

mother.father.sister.name = "Grace"
mother.father.sister.age = 115

then not only is that repetitive, but it’s also inefficient, since the get/set, or read/modify, sequence to access mother, then mother.father, then mother.father.sister must be performed once for each mutation, in case there were any intervening mutations of the shared state between operations. As above, we really want to make a local variable that asserts exclusive access to the value being modified for the scope of the variable, allowing us to mutate it in place without repeating the access sequence to get to it:

inout greatAunt = &mother.father.sister
greatAunt.name = "Grace"
mother.father.sister = otherGreatAunt // error, can't access `mother.father.sister` while exclusively borrowed by `greatAunt`
greatAunt.age = 115
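The repeated accessor traffic described above can be counted in today's Swift with an instrumented computed property (all names here are assumed for the sketch):

```swift
struct GreatAunt {
    var name = ""
    var age = 0
}
struct Family {
    private var _aunt = GreatAunt()
    private(set) var setterRuns = 0
    var aunt: GreatAunt {
        get { return _aunt }
        set { setterRuns += 1; _aunt = newValue }
    }
}

func mutateTwice() -> Int {
    var family = Family()
    family.aunt.name = "Grace" // one full get/set round trip
    family.aunt.age = 115      // and a second full round trip
    return family.setterRuns
}

// With an `inout` binding, a single access could cover both mutations.
assert(mutateTwice() == 2)
```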

There are other places where inout bindings for in-place mutation are desirable but aren’t currently available, and we can extend inout bindings to those places as well. For instance, when switching over an enum, we would like to be able to bind to its payload and update it in place:

enum ZeroOneOrMany<T> {
    case zero
    case one(T)
    case many([T])

    mutating func append(_ value: consuming T) {
        switch &self {
        case .zero:
            self = .one(move(value))
        case .one(let oldValue):
            self = .many([move(oldValue), move(value)])
        case .many(inout oldValues):
            oldValues.append(move(value))
        }
    }
}
Looking forward to move-only types

No-implicit-copy and nonescaping variables are effectively “move-only variables”, since they ask the compiler to force a variable to be used only in ways that don’t require ARC to insert copies. The consuming and nonconsuming modifiers on arguments, read and modify accessor coroutines, and ref and inout variables allow local variables to make non-mutating and mutating references into data structures and object graphs without copying. All together, these features clear the way to support move-only types: types for which every value is non-copyable. As discussed in the Ownership Manifesto, move-only types open the way to representing uniquely-owned resources that cannot safely have multiple copies of themselves in existence, particularly low-level concurrency primitives like atomic variables and locks, without the overhead of ARC. Fully designing and implementing move-only types involves broad changes to the generics model, and retrofits to the standard library to support them, so we won’t include them in this roadmap. However, many of these features, and the implementation work behind them, set the stage for implementing them in the future.

Building safer performance-oriented APIs with these features

Even without the full expressivity of move-only types, there are new APIs we can add that allow for working with memory safely, with lower overhead than our existing safe APIs. Types that have no public initializers, and which are only made available to client code via @nonescaping arguments, have a useful subset of the functionality of a move-only type—they can’t be copied, so they can be used to represent scoped references to resources. Full move-only types would also allow for ownership transfer between scopes and different values, and for generic abstraction over move-only types, but even without those abilities, we can create useful APIs. For instance, we can create a safe type for referring to contiguous memory regions: as efficient and flexible as UnsafeBufferPointer in being able to refer to any contiguous memory region, but no less safe than ArraySlice. We could call this type BufferView, and give it collection-like APIs to index elements, or slice out subviews:

struct BufferView<Element> {
    // no public initializers
    subscript(i: Int) -> Element { read modify }
    subscript<Range: RangeExpression>(range: Range) -> BufferView<Element> {
        @nonescaping read
        @nonescaping modify
    }
    var count: Int { get }
}

Contiguous collections like Array can provide new subscript operators, allowing access to part of their in-place contents via a BufferView:

extension Array {
    subscript<Range: RangeExpression>(bufferView range: Range) -> BufferView<Element> {
        @nonescaping read
        @nonescaping modify
    }
}

Note that these APIs use the @nonescaping modifier on read and modify coroutines, indicating that when client code accesses the BufferView, it cannot copy or otherwise prolong the lifetime of the view outside of the duration of the accessor coroutine.

var lastSummedBuffer: BufferView<Int>?

func sum(buffer: @nonescaping BufferView<Int>) -> Int {
    var value = 0
    // error! can't escape `buffer` out of its scope
    lastSummedBuffer = buffer
    // Move-only types would let us make BufferView conform to Sequence.
    // Until then, we can loop over its indices…
    for i in 0..<buffer.count {
        value += buffer[i]
    }
    return value
}

let primes = [2, 3, 5, 7, 11, 13, 17]
// We can pass the BufferView of the array, without any reference counting,
// and without the array looking like it escapes, making it more likely the
// constant array above gets stack-allocated, or optimized into a global
// static array
let total = sum(buffer: primes[bufferView: ...])

The nonescaping constraint allows BufferView to be safe, while having overhead on par with UnsafeBufferPointer, and @nonescaping coroutines that produce BufferViews provide a more expressive alternative to the withUnsafe { } closure-based pattern used in Swift’s standard library today. Multiple BufferViews from multiple data structures can be worked with in the same scope without a “pyramid of doom” of closure literals. When these features become official parts of the language, then user code can adopt this pattern as well, replacing something like:

extension ResourceHolder {
    func withScopedResource<R>(_ body: (ScopedResource) throws -> R) rethrows -> R {
        let scopedResource = setUpScopedResource()
        defer { tearDownScopedResource(scopedResource) }
        return try body(scopedResource)
    }
}

with something like:

extension ResourceHolder {
    var scopedResource: ScopedResource {
        @nonescaping read {
            let scopedResource = setUpScopedResource()
            defer { tearDownScopedResource(scopedResource) }
            yield scopedResource
        }
    }
}

Taken together, these features will greatly improve the ability for Swift programmers to write low-level code safely and efficiently and control the ARC behavior of higher-level code, while providing the technical basis for full move-only types in the future.


This roadmap is a pleasure to read. Fantastic work here!

This ARC guarantee would have solved a particularly vexing bug I encountered back in the days of Obj-C, when changing from debug to release build caused a release to move earlier in scope, which caused OpenGL to reuse a previously used ID, which in turn exposed a bug in an OpenGL wrapper library that assumed equal GL IDs meant identical objects, which caused a very, very intermittent crash.

That was a particularly obscure situation, but the lesson here is that starting with ARC predictability and then layering optimization on top of it seems like an excellent guiding principle, whose bug-preventing benefits will be myriad and not immediately obvious.

Just to check basic understanding: I take it that move(…) is a purely compile-time construct, and shows up in the emitted code only as an earlier release? (Presumably it can even show up as a noop if the receiver consumes the moved value?)

Is there possibility of / value in allowing the caller to specify consuming / nonconsuming? (Is it perhaps possible for the caller to promote one to the other, e.g. change consuming to borrowing with an additional retain? I haven’t actually thought this through at all; I may be entirely off base.)

It seems like append is one where both could be optimal in different situations. One could of course provide an overload with a different name, but that seems less than ideal.

Love this.

Name question aside, this seems fantastic. The repetition of long property chains is a vexing fact of working with value types.

Ergonomics and naming seem like they need special care here. The rules are going to be hard for developers to follow without a good casual mental model (my perennial drumbeat), especially when people start using these for simple refactoring and not ARC-aware performance tuning.


What a delightful Christmas present!

I am a little worried about the progressive disclosure aspect of some of this, though. The move example is particularly worrying. I think this should remain the best way to write this code in Swift:

self.values = values

I get that the move would be needed basically for diagnostics against introducing accidental 'deinitialization barriers', but I think it will basically become a staple for many projects. Basically every function argument will get moved out at its first use. I wish there was a way we could make this less intrusive, and less scary to newcomers.

Additionally, since this is a performance manifesto: has any thought been given to a Rust-style lifetime system? For example, will I be able to create an Array of non-escaping closures (which itself becomes bound by the same lifetime), and pass that in to a function (which must obey that lifetime restriction)?

Or is the idea that move-only types will allow this somehow?


This was fascinating to read! No feedback on the proposal directly, but it did make me start to think about ways we could make things like unintentional copies more visible to the developer as they are writing their code. Consider the following:

var item = array[i]
item.value = 2

To a seasoned Swift developer, it is immediately obvious that if item is a value type, the element in the array itself won't be modified, while if it is a reference type, it will. However, this is not so obvious to beginners, as I've seen students make this mistake time and time again. Although API developers could potentially use the tools introduced here so beginners could catch issues like this, they remain out of scope of what a beginner would typically discover on their own.
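The distinction being described here can be made concrete (type names are made up for the demonstration):

```swift
struct SValue { var value = 0 }
final class CRef { var value = 0 }

func valueVsReference() -> (Int, Int) {
    let valueArray = [SValue()]
    var item = valueArray[0] // copies the struct element
    item.value = 2           // mutates only the local copy
    let refArray = [CRef()]
    let ref = refArray[0]    // copies only the reference
    ref.value = 2            // mutates the shared instance
    return (valueArray[0].value, refArray[0].value)
}

let r = valueVsReference()
assert(r.0 == 0 && r.1 == 2)
```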

More so, it is easy for a seasoned developer to also miss this as they are working through a more complex example, or even while they comb through existing code attempting to add the optimizations discussed here. One way around this that could help everyone is if copies could be marked in some non-intrusive way in the source editor itself, not as a warning or anything alarming like that, but by other syntax coloring/display means.

This could be yet another tool developers could use to either catch potential issues or validate what they think will happen is happening, especially if this could be provided before a build even takes place.


One possible solution to this would be a warning for writes to otherwise-dead values, but I guess that would have false positives for side-effectful setters.


One of the very few things I miss in C++ is its . vs -> distinction for traversing value types vs references. I’ve long thought languages ought to consider a similar distinction for assignment. That seems, however, a bit too wild for a language like Swift that is both already well-established and aims not to shock the eyes too much with its syntax.


Certainly worthwhile always to think about how to make Swift a better learning and teaching language. But since as you say this is separate from feedback on the roadmap here, which is already a large document encompassing many proposals, can we keep this particular thread focused on that feedback and split this off elsewhere?

There's generally no need to move an argument on first use. Only when it is assigned to another variable or passed to a consuming argument before the end of its scope.

Nonetheless, this is a difficult tradeoff. I'll start a separate thread on the rules for lexical lifetimes as soon as I can. Quick response for now...

When a programmer cares about the uniqueness of a variable, the easiest thing to remember, and the most direct way to express that is by using a 'move' when assigning that variable into another variable:

self.values = move(values)
// other stuff in the same scope, represented by 'foo'

Whether the compiler removes the copy automatically is also determined by rules, but those rules can be difficult to reason about. They depend on a series of extremely complex conditions:

Does values have a default deinitializer? In this case, yes, because it's an Array of String. But that's only because both Array and String are special cases that don't have custom deinitialization and don't support associated objects.

Does foo have any side effects that may implicitly depend on the lifetime of values? That includes weak reference loads and unsafe pointer accesses.

Does foo have any side effects observable by the deinitializer?

It's much simpler to explicitly express the intent that the lifetime of values end at the assignment. Then, if the compiler can't honor that intent, it will provide a diagnostic.
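To make this concrete, here is a minimal sketch of the pattern. The Container type and its body are made up for illustration, and move is the pitched spelling, not a shipping language feature:

```swift
struct Container {
  var values: [String]

  init(_ values: [String]) {
    // Explicitly end the lifetime of 'values' at the assignment.
    // If the compiler can't honor that (say, because 'values' is
    // used again later in this scope), it emits a diagnostic
    // rather than silently inserting a copy.
    self.values = move(values)
    // ... other work in the same scope (the 'foo' above) runs here,
    // with no copy needed to keep 'values' alive.
  }
}
```

The key point is that the diagnostic turns a silent performance regression into a compile-time error.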


I can’t thank the team enough for valuing this principle. We’re all trying to improve as developers, and nobody comes born fully formed with an inherent understanding of the concepts and implementation of any language.

It’s crucial that I get utility first, and know that I’ll usually have more to gain when I’m ready to dig deeper.

Great work! I appreciate the effort.


What a fabulous proposal ~


  • ref/inout
  • copy()/move()
  • read{}/modify{}
  • consuming/nonconsuming T
  • @escaping/@nonescaping F

Finally, we can write high-performance, super-efficient code in Swift without ARC/CoW worries, making Swift much more like Rust minus the lifetime annotations.

Can't wait for Apple's internal frameworks to start using Swift to write or rewrite critical system components in the near future.

Big 6.0 roadmap.


Yeah, that's kind of what I'm worried about: that nobody is going to remember these rules (which might even change in the future as new cases are discovered and features added), and will just think:

(I left out the part about caring about uniqueness; it's not always clear if you care about uniqueness - or at least, it's not always clear that you don't need to care).

So then users will think "Hey, I have this great idea for a new collection!", then they'll look at swift-collections (or package X), see a bunch of moves in almost every function across the entire library, and think "gosh, there's something really intricate going on here and I don't understand it; I'm out of my depth and shouldn't contribute to this package".

It may be an unavoidable consequence of achieving predictable performance, but it's still unfortunate.

Anyway, I'll wait for the lexical lifetimes proposal. From the brief description here, I really like it.

I wonder -- and this is a totally oddball idea that I absolutely have not thought through -- if the goal is just to communicate some performance assertions to the compiler that it could diagnose, why not just write those as some kind of assertion, in as close to real English as possible?

I guess this ultimately becomes a question of syntax and how well concepts are communicated in source code. The idea floated in the move proposal of adding a drop(x) or assert(no longer using x) to mark the end of a lifetime is a lot more attractive in light of this manifesto.

// Instead of writing:
init(_ values: [String]) {
  // OMG what does 'move' do? This is some low-level performance intrinsic - eek!
  self.values = move(values)
}

// You'd write something more like:
init(_ values: [String]) {
  self.values = values
  // Oh, ok, they want to check that 'values' is no longer being used.
  // It isn't required for correctness. It's just a check.
  #assert(no longer using 'values')
}

This would be a lot more familiar to Swift developers who aren't ready to get into the weeds about move semantics and ownership. It's just regular Swift, with some additional assertions to the compiler which are very approachable -- the reason for them being there might be a bit opaque, but no more opaque than having 'move' dotted about everywhere.


Overall, I'm thrilled to see such a detailed roadmap for performance-critical Swift! It's going to be a significant positive leap for the language.

On the topic of memory model and performance predictability, one thing that I wish could be added to this discussion is immortal data, such as data stored in static data segments. Currently, the only way to achieve this in Swift—aside from some specific types like StaticString that explicitly produce it—is to do an optimized build, but the ObjectOutliner SILOptimizer pass is entirely unpredictable to the average Swift developer.
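As a point of reference for how rare this guarantee is today, StaticString is one of the few standard library types that reliably produces immortal data. A small sketch of what it gives you:

```swift
// A StaticString is backed by the binary's constant data segment:
// no allocation and no ARC traffic at runtime.
let banner: StaticString = "service-name v1.0"

// The underlying bytes are valid for the lifetime of the process.
banner.withUTF8Buffer { buffer in
    print(buffer.count)  // byte count of the literal
}
```

For anything beyond string literals, you're back to hoping the ObjectOutliner pass fires in an optimized build.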

I'm also very interested in hearing more about lexical lifetimes, because I agree with @Karl here; in the values sorting example, it seems like the compiler could detect that the local argument values is no longer used after the assignment and perform the move automatically, but that might run counter to the desired change of having lifetimes controlled by their lexical scope. While the more flexible lifetime does leave open the possibility of a performance hit if someone later adds code that uses the local values without thinking, it's also consistent with a philosophy of progressive disclosure: "get the best or near-best results writing the most straightforward code, then be explicit if you need to be later".

Essentially, I hope we can avoid some of the pitfalls that C++ has, where trying to do what you think is "the right thing" ends up making things worse. For example, a C++ user new to move-semantics might think "I have a function that returns a large collection, but I'll return std::move(...) it so it doesn't make a copy!", but then they've gone and undone the return-value optimization. So while this philosophy is something I agree with:

...it would be even better if Swift can do the performant thing by default without any explicit expression of intent; that's what would set its memory model apart from other languages like C++ and Rust.

Whenever the compiler is able to infer something about memory lifetime automatically that could be expressed explicitly, maybe there could be a way for users to enable "educational notes" as optional diagnostics, and the compiler would flag all those inferences to let the user know "by the way, if you want to ensure that you don't make a change that could cause problems later, you can add move here" or something to that effect. Memory models can be tricky to master, and giving authors guided diagnostics about how they could be more explicit, even if they don't need to be with the code written as it is at the moment, is just as important as giving them error diagnostics when they do something wrong.


One thing to keep in mind: it isn't so much that we are saying that the lifetime is controlled by the scope. What we are instead saying is that the value's lifetime will last at least until the end of the scope, but, modulo specific deinitializer side effects, the optimizer is allowed to shrink it. Deinit side effects are the only side effects that can cause this problem and act as a barrier against lifetime shrinking.

That is what AssemblyVision is: PSA: Compiler optimisation remarks - #6 by Michael_Gottesman. I am basically adding to it over time and hope to make it handle more and more cases.


My mental model thus far has been that a value’s lifetime may end at any point from its last reference onward. Is it correct to say that that is still true, but is now joined by a guarantee that the lifetime will end when the scope is exited?


To me, immortal data seems intimately tied to compile-time evaluation. That probably means it is going to need more work yet, unless you just want to add more special cases like StaticString.

Yeah, move shouldn't have any runtime effect at all on its own; it should only have the compile-time effect of taking the moved variable out of scope, and as a side effect of that, allowing the bound value's lifetime to shrink to the new end of scope. That compile-time-only transparency should avoid pitfalls like the one @allevato mentioned of std::move in C++ occasionally defeating last-use optimizations if used improperly.

That's a fair concern. Our hope is that the lexical lifetime rules strike a balance where obvious code like this still does the right thing without explicit moves, so move (or however we end up spelling the operation) will still be something most Swift developers don't normally have to think about. But at the same time, performance-constrained developers who must get hard guarantees of their code's behavior really need something that will make the compiler raise diagnostics if those guarantees can't be met, for the reasons Andy laid out.

I think we will need move-only types to be able to fully compose types with move-only properties on the level of Rust. These proposals will get us closer to that point, at least, and many of them would be required to work with values and parts of structs and classes without requiring copies anyway, but the full composability in the type system won't be there yet.

I think this capability will be a natural outgrowth of compile-time constant values. Once we've established what a compile-time-constant expression is, then it would be a natural next step to be able to annotate an initializer expression as being required to be a constant expression.

It is my hope that Swift will still be able to continue to "do the right thing" by default in most situations. As I noted above, our design for move should hopefully avoid the pitfalls of move in C++ being able to accidentally defeat the desired optimization. It could well be that, in practice, people end up spraying move everywhere out of an abundance of caution, but I hope that they won't need to.

In practice, the guarantee that it will end before or at the point the scope exits has always been there. We are planning to introduce new barriers that limit how far back a value's lifetime may end, so that it not only includes the final use in source code, but also things like weak reference loads and memory accesses through UnsafePointer that tend to rely on the lifetime of other variables to get correct behavior.
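A sketch of the kind of code those barriers protect (Delegate and the surrounding setup are made-up names for illustration):

```swift
class Delegate {
    func ping() { print("pinged") }
}

func demo() {
    let owner = Delegate()
    weak var observer: Delegate? = owner
    // Without a barrier, an aggressive optimizer could treat the
    // 'weak' assignment as the last use of 'owner' and release it
    // immediately, so the weak load below would observe nil.
    // Lexical lifetimes -- or an explicit withExtendedLifetime,
    // shown here -- keep the load reliable:
    withExtendedLifetime(owner) {
        observer?.ping()
    }
}
```

Under the lexical-lifetime rules described in the roadmap, the explicit withExtendedLifetime should no longer be necessary for code shaped like this, because the weak load itself acts as a barrier.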


I actually have been working on a new collection for that package, and I’ve run into that sort of issue with all the unstable features I’m expected to use. I’ve mostly copied existing uses and winged it, which is not a good feeling given the circumstances.

It is going to be especially important that these new tools are well-documented, such that anyone who stumbles upon them will come away with a concrete understanding of when to use them and why.

For the sake of people encountering them for the first time, I still think they need to have more descriptive names. I don’t think MemoryLayout.size is particularly intimidating despite being fairly low-level, simply because the meaning is obvious at the call site. You read the documentation of MemoryLayout and you’re pretty much good to go. That’s what we need for move, copy, and other low-level operations.

I’d contrast that with existing top-level methods like assert, precondition, and fatalError, none of which quite explain when you should use them and why. They’re also fairly hard to discover, as they have no semantic relation to each other. Newcomers to Swift tend to litter fatalError everywhere, which is not ideal.


This is a stunningly elegant idea, and I look forward to applying it whenever applicable. It seems like a potent tool in the compiler optimization toolbox, especially for non-inlinable function calls.

For the sake of orthogonality, will we be able to mark closures as @nonescaping and non-closures as @escaping? I’m thinking that would act basically the same way internal does: making a default explicit for the sake of readability at the programmer’s discretion.


Would @nonescaping be usable for Iterators? It doesn't sound like it, but it'd be nice if it could be. One optimization we've wanted to make in some places is to reuse a single object which is mutated each time rather than allocating new objects that are then immediately disposed. For obj-c types (where we can't use isKnownUniquelyReferenced) it'd be very error-prone to do this, but @nonescaping could maybe address that.
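For comparison, here is roughly what the reuse pattern looks like today when isKnownUniquelyReferenced is available; Buffer and Recycler are made-up names sketching the idea:

```swift
final class Buffer {
    var storage: [Int] = []
}

struct Recycler {
    private var scratch = Buffer()

    // Hand out the cached Buffer when nothing else still holds it
    // (i.e. the previously returned reference has been dropped);
    // otherwise fall back to a fresh allocation.
    mutating func next() -> Buffer {
        if isKnownUniquelyReferenced(&scratch) {
            scratch.storage.removeAll(keepingCapacity: true)
            return scratch
        }
        return Buffer()
    }
}
```

A @nonescaping guarantee could make this shape safe even for Objective-C types, where the uniqueness check above isn't available, because the compiler would know the handed-out object cannot outlive the call.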

Optimization often won't be fully effective in situations like the one above without the programmer taking some step to opt-out of lexical variable lifetimes. There are just too many ways that side effects can be hidden from the compiler.

However, the compiler can tell you when a copy is a good optimization candidate via performance diagnostics (see discussion of AssemblyVision). Those diagnostics can point you to one of a few solutions. High-performance libraries will use any one of these opt-out mechanisms:

  • @nonescaping
  • @noImplicitCopy
  • move function/operator

I expect @nonescaping to be the most common over time.

@noImplicitCopy would be the next option, for variables that do explicitly escape. This has the effect of a lifetime assertion.

move is only useful when neither of those attributes is already in use. It is an important primitive, but I don't expect to see widespread use of move given that the other attributes should be available soon after.

With move-only types, @nonescaping will be the only relevant annotation. And move will obviously be a no-op.
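To put the three opt-outs side by side (all spellings here are from the pitches and may change before any of this ships):

```swift
// 1. @nonescaping: the argument is guaranteed not to outlive the
//    call, so the callee never needs a defensive copy.
func process(_ values: @nonescaping [String]) { /* ... */ }

// 2. @noImplicitCopy: every copy of the variable must be written
//    explicitly; an implicit copy becomes a compiler diagnostic,
//    which is what makes it act like a lifetime assertion.
func fill(@noImplicitCopy _ buffer: [UInt8]) { /* ... */ }

// 3. move: end a specific variable's lifetime at a specific point.
func store(_ values: [String]) -> [String] {
    let kept = move(values)  // 'values' is unusable past this line
    return kept
}
```

The ordering in the list above reflects expected frequency of use, not power: each mechanism is narrower and more explicit than the one before it.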

[EDIT] In rereading some responses I see where confusion is creeping in. This proposal does not make our current ARC optimization less effective. It simply says that the current approach to optimization is sometimes insufficient for predictable performance.