Appending to an Array stored in an enum case payload: O(1) or O(n)?

if i have an enum that contains Array payloads, like

enum Entry 
{
    case strings(String, [String])
    case ints(Int, [Int])
}

what’s the preferred way of appending to the stored Arrays? it seems like the naïve way of switching, appending, and writing back to self would be O(n), not O(1), due to copy-on-write, which would make populating the arrays O(n²) overall.
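for concreteness, the naïve pattern i mean is something like this (appendNaive is just an illustrative name):

```swift
enum Entry {
    case strings(String, [String])
    case ints(Int, [Int])
}

extension Entry {
    // Naïve switch / append / write-back.
    mutating func appendNaive(_ string: String) {
        switch self {
        case .strings(let first, var rest):
            // While self still holds a second reference to the payload's
            // buffer, this append may have to copy the whole array (CoW).
            rest.append(string)
            self = .strings(first, rest)
        case .ints:
            preconditionFailure("expected .strings")
        }
    }
}
```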

the best idea i’ve been able to come up with is overwriting self beforehand with an “empty” case to deinitialize the original storage and bring the Array reference count back down to 1. but this would cause more ARC traffic, and only works if the data model has a logical “empty” value.
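a sketch of that workaround (illustrative only; it assumes an empty Array payload is a logically valid value for the case):

```swift
enum Entry {
    case strings(String, [String])
    case ints(Int, [Int])
}

extension Entry {
    // Reset self first so `rest` uniquely owns its buffer.
    mutating func appendViaReset(_ string: String) {
        switch self {
        case .strings(let first, var rest):
            self = .strings(first, [])   // drop the payload's reference
            rest.append(string)          // now uniquely referenced: mutates in place
            self = .strings(first, rest)
        case .ints:
            preconditionFailure("expected .strings")
        }
    }
}
```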

2 Likes

In-place mutation of an enum associated value might help, but yeah we need proper support for this kind of mutation.

1 Like

This should optimize fine. Using Godbolt.org as an example, you can see that while there is an ARC operation, that retain is on the String argument, not on the array of entries. As this doesn't +1 the array of entries, it doesn't inherently cause a CoW.

In older versions of Swift the compiler was not so smart, and so we (SwiftNIO) did use exactly this pattern by having a deliberately uninhabited case called _modifying or similar. This is largely unnecessary now.
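For reference, a sketch of that pattern (the case name _modifying and the rest of the names are illustrative, not NIO's actual code):

```swift
enum State {
    case idle([Int])
    case _modifying // placeholder case; callers should never observe it

    mutating func append(_ value: Int) {
        switch self {
        case .idle(var values):
            // Swap in the placeholder so `values` uniquely owns its
            // buffer before the append; no CoW copy is needed.
            self = ._modifying
            values.append(value)
            self = .idle(values)
        case ._modifying:
            preconditionFailure("re-entrant mutation")
        }
    }
}
```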

3 Likes

that’s good to know.

i don’t know if i agree with this. compiler optimizations are highly fragile and can easily get thrown off by minor changes in the code in ways that are not immediately obvious to the programmer. normally this isn’t a big problem, but here we are talking about going from a linear to a quadratic algorithmic complexity. and auditing through Godbolt isn’t really practical for projects larger than a small code snippet.

1 Like

I think this is a little better, as you don't have to list all the cases in a switch:

extension Entry {
    mutating func appendXXX(_ string: String) {
        guard case .strings(let first, var rest) = self else {
            preconditionFailure()
        }
        rest.append(string)
        self = .strings(first, rest)
    }
}

Or use a default case in a switch to achieve the same.

I would like to know: is there any way to make it a compile-time check, so that append(_: String) can only be called on a var known to be case .strings? It would be nice if this func only existed on .strings.

Or maybe this enum can be replaced with a generic protocol to avoid any switch/case.

i wonder if mutating funcs could allow _move(_:) to be used on self, as long as self is written back to before returning, like in an initializer.

then we could have

extension Entry 
{
    mutating func appendXXX(_ string:String)
    {
        switch _move(self)
        {
        case .strings(let first, var rest):
            rest.append(string)
            self = .strings(first, rest)
        case let other:
            self = other
        }
    }
}
1 Like

This is one of the unfortunate cases where I'd use a different, less correct data structure, e.g. this:

struct Entry  {
    struct S {
        var string: String
        var strings: [String]
    }
    struct I {
        var int: Int
        var ints: [Int]
    }
    var strings: S?
    var ints: I?
    // manually ensure only one is non-nil
}
1 Like

In general I agree. To this end, I'd recommend you add a regression test that counts allocations. Optimization failures that lead to copy-on-write are straightforward to detect: when they fail they call one of the allocator functions, and when they succeed they don't, which makes a nice binary signal. SwiftNIO has a hand-rolled collection of bash scripts, driven from another bash script, that runs a suite of allocation counter tests. We even re-use this framework in other NIO packages.

These tests absolutely have the granularity to measure a copy-on-write if you write your code such that it would occur in every pass of a loop and then run the loop a few thousand times.

I'm generally with you on being cynical, but the nature of writing high performance Swift is that we have to trust the optimizer, otherwise nothing is fast. My preferred attitude is "trust but verify": find the places where you need the optimization to work, and then add measurements to confirm that it does.
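As a crude illustration of the "verify" half (a real harness should count allocator calls, but a CoW copy also moves the array's buffer, which you can observe directly):

```swift
// Sketch: a CoW copy relocates the array's storage, so with capacity
// reserved up front, a stable buffer address across appends means the
// mutation happened in place.
func baseAddress(_ array: [Int]) -> UnsafeRawPointer? {
    array.withUnsafeBufferPointer { UnsafeRawPointer($0.baseAddress) }
}

var a = [Int]()
a.reserveCapacity(100)
a.append(1)
let before = baseAddress(a)

a.append(2)                       // uniquely owned + spare capacity:
assert(baseAddress(a) == before)  // mutated in place, no copy

var shared = a                    // second reference to the same buffer
shared.append(3)                  // forces a CoW copy
assert(baseAddress(shared) != before)
```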

Nope: this would be a function in value space, not in type space.

I don't think you're allowed to assign back to a variable that has been moved out of, are you? This could be allowed, but I think it would be better for the compiler to be able to observe that the lifetime of self is dead after the switch (which, experimentally, it can).

Why? The desired pattern already works, and has done for several years now.

5 Likes

Why not? It seems obvious to me that it should work. In fact, I think one of the motivating use cases of move was to be able to avoid unnecessary ARC traffic by moving a member variable in a _modify closure into a local, yielding that local, then moving it back into the member variable after the caller had modified it.

Furthermore, it works in the only other language I've used with destructive moves, Rust. And obviously it works in C++ which, sort of, has moves. The only time it doesn't work is if the variable is itself immutable, i.e. a let, but self isn't a let in a mutating method so it should be fine.

Edit: Also, I don't see how allowing reassignment would change the compiler's ability to check whether the lifetime has ended. It already has to have this logic for ordinary uninitialized variables.

This:

let x: String // x is uninitialized
if Bool.random() {
    x = "Hello!"
}
print(x) // error x is not initialized on all paths

Requires the same logic in the compiler as this:

let x = "Hello!"
if Bool.random() {
    _ = _move(x) // x is uninitialized
}
print(x) // error x is not initialized on all paths

I'm not saying it can't be done, just that I don't think the pitch covers that today.

I find the current Swift restriction that associated values are immutable somewhat arbitrary. I'd prefer something like this:

struct X {
    var x: Int = 0
    mutating func change() {
        x += 1
    }
}

enum E {
    case one
    case two(X)
    case three(x: X)
    case four(x: X, X)
    case five(X?)
}

var v: E = ...
                     // Type:      Value:
v.one                // E!         E.one or nil
v.two                // X!         value of type X or nil
v.three              // X!         value of type X or nil
v.four               // (x: X, X)! value of type (x: X, X) or nil
v.five               // X?!        value of type X? or nil

v.four.x
v.four.x = X()     // can modify
v.four.x.change()  // can modify
v.four.1
v.four.1 = X()     // can modify
v.four.1.change()  // can modify

v = .one
v.four             // trap
v.four?.x = X()    // noop
v.four?.x.change() // noop

v = .four(x: X(), X())
let (x, y) = v.four

Edited: made the types implicitly unwrapped optionals; I believe that makes perfect sense in this case.

Oh it's definitely arbitrary. I think the Swift Core Team has been interested in expressing some notion of a modifying access to enum associated values for a while now, but no-one has had the bandwidth to make it happen.

This post is a good example of why it will take a while to actually do it: we have to litigate how things are spelled. :wink: However, if we can get modifying access to associated data in a switch then it becomes fairly straightforward to implement the accessors you want as well, so they can be reasonably considered an extension of that proposal.

I don't know why you think that, since the pitch explicitly mentioned it as being allowed.

Definite reading comprehension failure on my part.

this seems like a lot of effort compared to the ._modifying assignment workaround…

Interestingly, the following version gave me O(n) behaviour, while at a quick glance it is doing exactly the same as the commented-out "append" (which gave the desired O(1) behaviour):


extension Entry {
    mutating func append(_ string: String) {
        switch self {
        case .strings(let first, var rest):
            rest.append(string)
            self = .strings(first, rest)
        case .ints:
            preconditionFailure()
        }
    }
}

class Test {
    
    var a = Entry.strings("Hello", [])

    func populateA(k: Int) {
        for _ in 0 ..< k {
            let string = UUID().uuidString

            // O(1):
            // a.append(string)

            // O(n):
            switch a {
            case .strings(let first, var rest):
                rest.append(string)
                a = .strings(first, rest)
            case .ints:
                preconditionFailure()
            }
        }
    }
}

@lukasa does that mean that the optimiser is not smart enough to handle that case correctly, or is there a more serious fundamental limitation that prohibits O(1) behaviour in that case?

I guess this is what @taylorswift calls "fragile", and incidents like this consciously or subconsciously cause me to select a dumber structure, so as not to chance it.

PS. Something like this would be possible with mutable associated values:

enum Entry {
    case strings(name: String, values: [String])
    case ints(Int, [Int])
}

if a.strings != nil {
    a.strings.values.append(string)
}

a bit more awkward with unlabelled associated values:

if a.ints != nil {
    a.ints.1.append(value)
}
1 Like

The reality remains that almost all CoW-elimination optimizations are potentially fragile. The only way to know if they're in your code is to attempt to measure them, and the only way to be confident that your refactors don't regress them is to measure your code.

On top of that, regression testing allocations has been the single most powerful performance improvement practice that the NIO team has used. In the past I've described it as "pulling the magic go-faster lever": add a test, find the allocations, justify them all to yourself, and for anything that looks unnecessary, make it go away; repeat. Even with the _modify accessor present you still have to validate that you're hitting that code path.

There's a more serious fundamental limitation: the law of exclusivity.

In the first example, you're in a mutating func, which is a single long mutating access. The law of exclusivity forbids anyone else from reading Entry during that access, so the original value of self can be considered dead once we've entered the case statement.

In the second example, a is an internal var on a class. The switch is therefore not a long write access to a but a get followed by a set, with some code in-between. During that time it is possible that something else in your program might observe the value of a, because the law of exclusivity does not prevent it from happening. As a result, the original value of a is still alive up until the set, and so the modification of rest must CoW.

A "sufficiently smart compiler" could perform whole-program analysis and attempt to prove that no other access to a can possibly exist. This works better at lower and lower visibility: for example, making a private can often trigger the optimization to reappear. But it's much harder.

In this instance I think the lesson to learn is not that one is dumb and the other is clever, but that array.append and a switch are not the same unless you tell Swift that the law of exclusivity applies. You'll tend to find that NIO embeds its state machine enums inside structs to help make this perform better.
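A minimal sketch of that embedding (EntryBox and its members are illustrative names; whether the copy is actually elided still depends on the optimizer, so measure):

```swift
enum Entry {
    case strings(String, [String])
    case ints(Int, [Int])
}

struct EntryBox {
    private var entry = Entry.strings("Hello", [])

    // The whole mutating func is a single exclusive access to self,
    // so the old value of `entry` can be treated as dead inside the
    // switch, and the append need not trigger a CoW copy.
    mutating func append(_ string: String) {
        switch entry {
        case .strings(let first, var rest):
            rest.append(string)
            entry = .strings(first, rest)
        case .ints:
            preconditionFailure("expected .strings")
        }
    }

    var count: Int {
        if case .strings(_, let rest) = entry { return rest.count }
        return 0
    }
}
```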

Another way to get this optimization to trigger more often is to use fewer classes.

3 Likes

I see, makes sense, thank you.

BTW, "private" doesn't help in this case in the current compiler version.
I found that this helps to keep O(1) performance of "switch" variant:

        switch a {
        case .strings(let first, var rest):
            a = .strings(first, []) // reset
            rest.append(string)
            a = .strings(first, rest)
        case .ints:
            preconditionFailure()
        }

which is what OP had in mind:

resetting to some a = .strings(first, ["hello"]) works as well

1 Like