Simple question: why is the value that the actor method performAtomicOperation()
returns to the first task modified by the second task?
Shouldn't it return 1 instead of 2?
performAtomicOperation() should not allow concurrent access, should it?
actor YourActor {
    static let instance = YourActor()
    private init() {}

    private var counter: Int = 0

    func performAtomicOperation() async -> Int {
        counter += 1
        print("\(Thread.current) : \(counter)")
        try? await Task.sleep(nanoseconds: UInt64(3 * 1_000_000_000))
        return counter
    }
}
func runOperations() {
    Task(priority: .background) {
        let val = await YourActor.instance.performAtomicOperation()
        print("Task \(Thread.current) \(val)")
    }
    Task(priority: .background) {
        let val = await YourActor.instance.performAtomicOperation()
        print("Task \(Thread.current) \(val)")
    }
}
Output:
<NSThread: 0x10d6be400>{number = 7, name = (null)} : 1
<NSThread: 0x10d68be20>{number = 6, name = (null)} : 2
Task <NSThread: 0x123025280>{number = 10, name = (null)} 2
Task <NSThread: 0x10d6c7e80>{number = 8, name = (null)} 2
You have a suspension point inside performAtomicOperation, and around such points the actor's state may change if an actor method (this one, or any other that mutates state) is invoked while the first call is suspended. This is actor reentrancy.
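Concretely, here is a sketch of one possible interleaving of the two tasks in the code above, consistent with the printed output:

```
Task 1: counter += 1             // counter == 1
Task 1: await Task.sleep(...)    // suspends; the actor accepts new work
Task 2: counter += 1             // counter == 2 (reentrancy)
Task 2: await Task.sleep(...)    // suspends
Task 1: return counter           // reads 2, not 1
Task 2: return counter           // reads 2
```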
First of all, do not under any circumstances use semaphores with actors. There are a few threads about deadlocks caused by semaphores in actors, and in Swift Concurrency in general.
The simplest solution is to avoid suspension points in such methods. But that might not always be possible.
As an alternative, you can serialize access with your own queuing; that has been addressed in several topics, like this one:
You can search for more — there were a few with long discussions, nice details, and examples.
I recently came across an old post by @ktoso from the early days of the concurrency discussion, saying that in theory reentrancy can also be addressed by introducing additional actors that encapsulate the reentrant operation. I haven’t explored it yet, but it seems like an interesting approach, which might also be more robust even in a basic implementation.
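One possible reading of that idea, sketched here as an assumption (this is not @ktoso's code, and the names are mine): keep the state-owning actor's methods synchronous, so they contain no suspension points, and move the long-running await into a second actor that wraps it.

```swift
// Sketch: the state-owning actor has only synchronous methods.
actor Counter {
    private var counter = 0

    // No suspension point inside: state cannot change out from
    // under us mid-method.
    func increment() -> Int {
        counter += 1
        return counter
    }
}

// A second actor encapsulates the reentrant (suspending) operation.
actor SlowOperation {
    private let counter = Counter()

    func perform() async -> Int {
        let val = await counter.increment()              // captured before sleeping
        try? await Task.sleep(nanoseconds: 100_000_000)  // shortened from 3s for the sketch
        return val  // reentrant calls can no longer change this result
    }
}
```

With this split, two concurrent calls to perform() each get their own distinct value, because the mutation and the read happen in one synchronous slice inside Counter.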
For anyone referring here: to handle the specific issue above, I modified the code as below to support synchronized access to the mutable state before the actor reentry point.
func performAtomicOperation() async -> Int {
    counter += 1
    let val = counter  // capture the value before the suspension point
    print("\(Thread.current) : \(val)")
    try? await Task.sleep(nanoseconds: UInt64(3 * 1_000_000_000))
    return val  // unaffected by reentrant calls during the sleep
}
While there are lots of ways to use structured concurrency to deal with actor reentrancy, I personally find it pretty challenging to pull off. Here are two packages you might want to check out that offer an alternative.
(Note that the Lock one is currently not in a great place, but my hope is that the underlying compiler issues will be resolved shortly.)
Mainly for efficiency: NSRecursiveLock is ancient and derives from NSObject.
But on second thought, yeah, it may be quite challenging. Semaphores are built on atomics underneath, so I suppose a combination of an atomic value and, say, some lock-free list of suspensions would do it, but the implementation won't be trivial at all. Sorry, but I don't have an easy solution for you right now.
I think the missing tool is a serial queue for async tasks, that defines an entire task from start to finish to be a "unit" of work that occupies the queue.
You submit async blocks to it, and it guarantees those blocks are executed in the order they are submitted, and does not start one until the previous one finishes.
I have implemented this a couple of times, and I usually end up using "old fashioned" synchronization like a Darwin lock under the hood to protect concurrent access to the state (the array of queued work and status of the running one). I have wanted to try replacing that with actors but I think I encountered some difficulty with it and just stuck with locks. IIRC this was because I wanted the ability to submit work from synchronous contexts with the same guarantee that execution order is the same as submission order. If the state is protected with actors then you have to start a Task to access the state and that loses the order guarantee.
These sorts of queues are a level of abstraction above actors, which serialize single synchronous slices of async work.
I have built a few flavors of them, including ones with controllable max concurrency (similar to NSOperationQueue) and they've proven very useful. I'm working on publishing an open source library of them, I just need to finish writing tests.
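The serial-queue idea described above can be sketched roughly as follows. This is my own minimal illustration under the assumptions stated in the thread (a Darwin lock guarding the queue state so that work can be enqueued from synchronous contexts in submission order); the name `AsyncSerialQueue` is made up, not from any published library.

```swift
import Foundation

// Sketch of a serial queue for async tasks: each submitted closure is one
// "unit" of work that runs start-to-finish, across its own suspension
// points, before the next unit begins. An NSLock protects the task chain
// so enqueueing from synchronous contexts preserves submission order.
final class AsyncSerialQueue: @unchecked Sendable {
    private let lock = NSLock()
    private var lastTask: Task<Void, Never>?

    func enqueue(_ work: @escaping @Sendable () async -> Void) {
        lock.lock()
        defer { lock.unlock() }
        let previous = lastTask
        lastTask = Task {
            await previous?.value  // wait for the prior unit to fully finish
            await work()           // then run this unit to completion
        }
    }
}
```

Unlike an actor, which only serializes the synchronous slices between suspension points, a queue like this serializes whole asynchronous units: a unit that suspends still blocks the next one from starting.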
When it comes to lock-free data structures, there's no such thing as an easy solution.
My gut says there's nothing to worry about here. But if you can find a situation where the thread contention is such that the cost of the lock matters, I think it would be a really interesting situation to study!
I was looking to build a concurrent queue that supported barrier tasks.
But, let me be more specific: I wasn't able to get it working at the time. I've since learned a few more tricks and I don't think it's fair to say it isn't possible.
As far as I know it's correct to say that you can't make a concurrent queue on Darwin that supports barrier tasks and supports priority inversion avoidance. Dropping the latter requirement should make it feasible.
I've only dug into priority inversion avoidance with the concurrency system a little, because its behavior surprised me. But, you got me curious!
Is this test sufficient to say that the runtime is boosting priorities here? I'm not 100% sure I'm exercising stuff correctly, but this test does pass.
The full behavior of priority boosting can’t be observed via user space API; you’d need to use external tooling to see it (I think spindump should work).
As an example of how complex it gets: imagine low priority process B sends low priority process C a synchronous XPC message, and then process C uses your queue to do the work to generate a reply. When high priority process A sends B a synchronous message that blocks on a lock in B held by the thread waiting for the result of the message to C, will the worker threads in your queue be boosted to A’s priority?
Apologies for digressing from the original topic, but on the subject of observing priority-inversion-avoidance behaviors, Quinn (and David) provided some useful insights that can be found here.