@Atomic property wrapper for standard library?

I was told by @Philippe_Hausler at WWDC that this approach may have a 0.1%-1% crash rate due to Swift's automatic management of the os_unfair_lock. Manually allocating was the suggested fix.

Can somebody explain what the failure mode would be? What is the automatic management of os_unfair_lock and what about it would cause the failure described here? @Philippe_Hausler? Is this an alignment issue? A caching issue?

This is a twofold issue: 1) the compiler is completely at liberty to write something back when taking the address of a struct (say, to write tombstone values), and 2) the allocation region may not be the heap, in that types can be allocated on the stack if the compiler has knowledge of that type and the lifespan of its usage. So as it stands, the only safe way (and by safe I mean a way without a crash rate) is to allocate a region via something like os_unfair_lock_t.allocate(capacity: 1) and pass the result of that into os_unfair_lock_lock.
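A minimal sketch of the manually heap-allocated pattern described above (Darwin-only, since it uses os_unfair_lock; the `UnfairLock` name matches the one referenced later in this thread, but the details here are an illustration, not Apple's exact recommendation):

```swift
import Darwin

// The lock storage is explicitly heap-allocated, so it has a stable
// address for the lifetime of this object -- which is exactly the
// guarantee os_unfair_lock needs and Swift structs do not provide.
final class UnfairLock {
    private let _lock: UnsafeMutablePointer<os_unfair_lock>

    init() {
        _lock = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        _lock.initialize(to: os_unfair_lock())
    }

    deinit {
        _lock.deinitialize(count: 1)
        _lock.deallocate()
    }

    func synchronized<T>(_ body: () throws -> T) rethrows -> T {
        os_unfair_lock_lock(_lock)
        defer { os_unfair_lock_unlock(_lock) }
        return try body()
    }
}
```

Because the pointer never moves and is deallocated only in `deinit`, the lock word the kernel sees is always the same memory location.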

5 Likes

os_unfair_lock needs to be manually heap allocated because Swift does not guarantee the address stability that it depends on.

2 Likes

Thanks everyone. This information was incredibly useful. Suddenly everything is working reliably! I still think you should be able to take an UnsafePointer without having to make your method mutating, but that’s a battle for another day… It’s a shame you can’t use property wrappers at the top level (the error says "yet"!)

Correct me if I'm wrong, but property wrapper storage is part of the instance, and class instances are always allocated on the heap. So as long as you use it in a class and not in a struct, it should be safe to use a simple ivar and not bother with a wrapper and manual allocation, isn't it?

My understanding is that this is not the case because Swift reserves the right to use a local temporary rather than referencing the ivar directly.

Is that actually guaranteed anywhere?

Classes are reference types, and they are either allocated on the heap, or, if they are allocated on the stack (as an optimisation), they must guarantee not to change their address, because as reference types they must keep identity (===) consistent.

So it should be enough for the case discussed here.

That’s true, but as @anandabits rightly points out, Swift reserves the right to copy stored properties to temporaries as part of some operations, so struct properties are not guaranteed to have a stable address. This is why SwiftNIO’s builtin atomics use manually heap allocated data.

1 Like

Not quite -- class instances may be stack-allocated. However, they are guaranteed to have a stable memory location until deinitialization. This location is already exposed through C interoperability APIs such as Unmanaged.toOpaque(); however, there is currently no reliable way to retrieve the memory location for individual ivars within this instance storage.
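The address-stability guarantee mentioned here can be observed through `Unmanaged.toOpaque()` (a small illustration, not from the original post; the `Box` type is made up for the example):

```swift
// A class instance has a stable memory location until deinitialization,
// so repeated conversions to an opaque pointer yield the same address.
final class Box {
    var value = 0
}

let box = Box()
let p1 = Unmanaged.passUnretained(box).toOpaque()
let p2 = Unmanaged.passUnretained(box).toOpaque()
assert(p1 == p2)  // same instance, same address
```

Note that this gives you the address of the *instance*, not of any particular ivar inside it, which is the gap the upcoming pitch addresses.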

As it happens, I'm actively working on changing this -- stay tuned for a pitch very very soon. (A sneak preview is available in my Atomic Experiments PR.)

6 Likes

Yes, I found that PR by accident, and the comments you left made for really interesting reading - thanks a lot for taking it on and leaving such detailed notes :blush:

1 Like

The pitch for enabling direct access to ivar storage locations is now up in a separate topic:

The examples discussed there are somewhat relevant to this discussion, too.

5 Likes

As a followup, after what can only be described as a "crash course" in concurrent programming, I settled on a design similar to the synchronized(object) block construct in Java, with a Swift twist, for my app. I was frustrated that every time I wanted to mutate a dictionary across multiple threads I would have to declare an associated lock (assuming I didn’t use serial queues.) In the end I used the following protocol + extension:

public typealias SynchronizableLock = UnfairLock
private var allLocks = [OpaquePointer: SynchronizableLock]()
private var lockLock = SynchronizableLock()

protocol Synchronizable {
    mutating func synchronized<T>(_ closure: (inout Self) -> T) -> T
}

extension Dictionary: Synchronizable {}
extension Array: Synchronizable {}
extension Int: Synchronizable {}

extension Synchronizable {

    public mutating func synchronized<T>(_ closure: (inout Self) -> T) -> T {
        let lockee = UnsafeMutablePointer(&self)
        let lock = lockLock.synchronized { () -> SynchronizableLock in
            let key = OpaquePointer(lockee)
            var lock = allLocks[key]
            if lock == nil {
                lock = SynchronizableLock()
                allLocks[key] = lock
            }
            return lock!
        }
        return lock.synchronized { closure(&lockee.pointee) }
    }

    public mutating func desynchronize() {
        let lockee = UnsafeMutablePointer(&self)
        lockLock.synchronized {
            allLocks[OpaquePointer(lockee)] = nil
        }
    }
}

To use the protocol you call the synchronized method on the object you want to protect, with a closure to execute synchronously with respect to that object.

myDictionary.synchronized { myDictionary in
    // something mutating myDictionary
}

The myDictionary closure argument is inout and the same as the synchronized object, but can be assumed to have only one thread accessing it at a time, using an UnfairLock allocated for that memory location. The desynchronize method makes it possible to reclaim locks rather than leak them (I don’t know whether they are a limited resource.)

Trying to make work concurrent is not for the faint of heart, and I wish there were higher-level Swift constructs available, notwithstanding the longer-term goals in this area, so I filed a PR on Dispatch.

This is quite a complex solution to the problem: your lockLock lock (say that five times fast) is a very highly contended global lock on all state. You have a requirement to manage the lifetimes of your locks because they consume memory, so failing to call desynchronize will leak memory.

You also have a mild correctness issue:

let lockee = UnsafeMutablePointer(&self)

This line of code immediately creates a dangling pointer to self (see my description in SR-11315 for more on why). That pointer is not valid, and so the subsequent cast of that pointer to OpaquePointer is also invalid. In practice this is probably not an issue as you don't dereference this pointer, but it means that leaking locks is inevitable no matter how carefully you call desynchronize.
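For contrast, the only pointer to a mutable value that Swift does guarantee is valid is a scoped one, obtained via `withUnsafeMutablePointer(to:)` and never escaped from its closure (a small illustration added here, not part of the original post):

```swift
var counter = 0

// The pointer is guaranteed valid only for the duration of the closure.
// Escaping it -- which is effectively what `UnsafeMutablePointer(&self)`
// does -- produces a dangling pointer.
withUnsafeMutablePointer(to: &counter) { ptr in
    ptr.pointee += 1
}
assert(counter == 1)
```

The trouble for the lock-table design above is that a scoped pointer cannot serve as a stable dictionary key across calls, which is why the leaks are unavoidable.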

In this context, assuming a synchronize block is truly what you want, I genuinely think you'd be better served by creating tiny wrapper structures like this:

class Synchronized<Wrapped> {
    private var data: Wrapped
    private var lock: UnfairLock

    init(_ data: Wrapped, lock: UnfairLock = UnfairLock()) {
        self.data = data
        self.lock = lock
    }

    func with<T>(_ body: (inout Wrapped) throws -> T) rethrows -> T {
        return try self.lock.synchronized { try body(&self.data) }
    }
}

This can then be used by declaring your objects as a Synchronized<Dictionary<Key, Value>>. Leaking the reference outside the block will be invalid, but the same is already true for your Synchronizable protocol, so this is no worse; more importantly, it will be obvious in your code that you must acquire the lock before computing on the object. This also resolves all your leaks and correctness issues.
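A self-contained sketch of this wrapper in action, substituting Foundation's NSLock for UnfairLock so the example compiles on its own (the names and the hit-counting scenario are illustrative, not from the thread):

```swift
import Foundation

final class Synchronized<Wrapped> {
    private var data: Wrapped
    private let lock = NSLock()  // stand-in for UnfairLock in this sketch

    init(_ data: Wrapped) {
        self.data = data
    }

    func with<T>(_ body: (inout Wrapped) throws -> T) rethrows -> T {
        lock.lock()
        defer { lock.unlock() }
        return try body(&data)
    }
}

// Many threads mutate the dictionary, but every access goes through
// the lock, so no update is lost.
let counts = Synchronized([String: Int]())
DispatchQueue.concurrentPerform(iterations: 100) { _ in
    counts.with { $0["hits", default: 0] += 1 }
}
let total = counts.with { $0["hits"] ?? 0 }
assert(total == 100)
```

Because the lock lives inside a class instance on the heap, its address is stable, sidestepping the struct-address problem that started this thread.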

Again, I should stress that this "ambient" locking is rarely the right choice because it confuses your ability to come up with a lock hierarchy, but if you want to do things this way the above wrapper is a better choice.

4 Likes

Thanks @lukasa, that is indeed far better. Things like this should be available ready rolled in Dispatch!

Sometimes you want an UnfairLock, sometimes an NSRecursiveLock, sometimes a readers-writer lock (why not one based on DispatchQueue barriers?), sometimes a primitive atomic operation, sometimes... There are many reasons why concurrency is difficult, and one of them is that there is no "one size fits all" solution to the myriad of concurrency puzzles. So no, you don't want things like this to be available ready-rolled in Dispatch :wink: What you want is to strengthen your concurrency skills :muscle: !

5 Likes

I’ll readily agree I need to strengthen my concurrency skills without hesitation :relaxed: but I really think at least an UnfairLock class should be available somewhere, given the subtleties of the implementation pointed out above, and DispatchQueue.concurrentPerform seems to offer a very limited vision. Leaving people to fend for themselves seems regrettable when they could be offered better, more Swifty abstractions. They can still use other kinds of lock when they understand they need to.

IIUC, the reason why Dispatch has such primitive Swift bindings is that it's really waiting on a more comprehensive concurrency story for Swift in general. For example, the concurrentPerform method you mentioned could easily be used to build things like a concurrent map operation. Lots of things could be much better.

Hello, I have a service that holds some configuration. It seems like I should use an actor for this, since the config is shared across the functions that use it. But as far as I can tell, I can't simply change it from a final class to an actor, even though many of this service's functions are already async. Those functions are not nonisolated, so working with the service would become slow, and being an actor would cause problems when the service is used from several places in the project simultaneously.
I watched the WWDC video that proposes using NSLock in place of a DispatchQueue for synchronizing properties of an @unchecked Sendable type. My question is: how do I properly implement a mechanism for synchronizing a property in my service?
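For what it's worth, a minimal sketch of the NSLock-based pattern that WWDC talk describes might look like this (the `Config` and `MyService` names are hypothetical, and @unchecked Sendable is only sound because we uphold the locking invariant ourselves):

```swift
import Foundation

// Hypothetical configuration value type for the example.
struct Config {
    var endpoint: String
}

// The class stays a final class rather than an actor. All access to the
// mutable state goes through the lock, so we can vouch for thread safety
// ourselves -- hence @unchecked Sendable.
final class MyService: @unchecked Sendable {
    private let lock = NSLock()
    private var _config: Config

    init(config: Config) {
        self._config = config
    }

    var config: Config {
        get {
            lock.lock()
            defer { lock.unlock() }
            return _config
        }
        set {
            lock.lock()
            defer { lock.unlock() }
            _config = newValue
        }
    }
}
```

Because `Config` is a value type, the getter hands out a copy, so callers never touch the protected storage without holding the lock.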