It's thread-safe because you're not reading/writing simultaneously from different threads. Both `queue.sync` calls use the same queue (which, by default, is a serial queue, not a concurrent one), so only one of them will be executing at any one time. Serial queues are like their own little streams of causality: they aren't synchronised with other queues, and they can hop between different OS threads. If the reader and writer were on different queues, it wouldn't be thread-safe, and the value in `counter` would be undefined.
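A minimal sketch of that serial-queue pattern (the `Counter` wrapper and queue label are illustrative, not from the original code):

```swift
import Dispatch

final class Counter {
    private let queue = DispatchQueue(label: "counter.queue") // Serial by default.
    private var value = 0

    // Both reads and writes funnel through the same serial queue,
    // so they can never overlap, even if callers are on different threads.
    func increment() {
        queue.sync { value += 1 }
    }

    var current: Int {
        queue.sync { value }
    }
}
```

Because the queue is serial, `increment()` and `current` are totally ordered with respect to each other, which is exactly what makes the read safe.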
Atomics give you the ability to truly read/write memory concurrently (across different queues or OS threads). Suppose you had two threads (or two serial dispatch queues, or one concurrent dispatch queue):
```swift
let queueOne = DispatchQueue(label: "...")
let queueTwo = DispatchQueue(label: "...")
private var counter: Int = 1

queueOne.async {
    sleep(1) // Wait a little bit
    counter *= 2
}

queueTwo.async {
    sleep(1) // Wait a little bit
    counter += 5
}
```
If `counter` were atomic, the result would definitely be either (1 + 5) * 2 = 12 or (1 * 2) + 5 = 7. Atomics don't define which operation happens first, but they do define that both operations will occur, and that every other atomic operation on `counter` from that point on, even on a different core, will agree that it occurred.
If `counter` is not atomic, the result might be 12 or 7, or it might be 2 or 6 (both threads see counter = 1), or it might be something else entirely: maybe one CPU core was partially writing the result from one thread into cache when the other one read it, and your result is "torn" somewhere. It's undefined behaviour; it could also crash, or send a flock of flamingos to your house every Monday at 4pm. So you definitely want to avoid that; the car-washing costs aren't worth it.
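As a sketch of what the atomic version could look like (assuming the swift-atomics package; `ManagedAtomic` has no fetch-multiply operation, so the `*= 2` side uses a compare-exchange retry loop):

```swift
import Atomics
import Dispatch

let counter = ManagedAtomic<Int>(1)

DispatchQueue(label: "one").async {
    // No atomic multiply exists, so retry a compare-exchange until it sticks.
    var current = counter.load(ordering: .relaxed)
    while true {
        let (exchanged, original) = counter.compareExchange(
            expected: current,
            desired: current * 2,
            ordering: .relaxed
        )
        if exchanged { break }
        current = original // Someone else got in first; retry with the new value.
    }
}

DispatchQueue(label: "two").async {
    counter.wrappingIncrement(by: 5, ordering: .relaxed) // A single atomic add.
}

// After both blocks finish, counter holds 7 or 12 – never torn, never 2 or 6.
```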
Memory order basically tells the compiler (and the CPU) what restrictions it has when reordering and optimising around atomics.
```swift
let canAccess = AtomicInt(0)
var storage = [Item]()

func append(newItem: Item) {
    while canAccess.atomicCompareExchange(expected: 0, desired: 1, .acquire).exchanged == false {
        // Wait (spin) until we win the exchange and hold the "lock".
    }
    storage.append(newItem)
    canAccess.atomicStore(0, .release)
}
```
In this case, it would really suck if the compiler re-ordered the call to `storage.append(newItem)` to before we acquired the `canAccess` lock (or to after we released it). Within those bounds, it can reorder and optimise things as it likes.
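For reference, here's a sketch of the same spinlock written against the real swift-atomics package (the `Item` stub is illustrative; `.acquiring` and `.releasing` are that package's names for acquire/release orderings):

```swift
import Atomics

struct Item {}

let canAccess = ManagedAtomic<Int>(0)
var storage = [Item]()

func append(newItem: Item) {
    // Acquire: nothing after this point may be hoisted above the successful exchange.
    while !canAccess.compareExchange(expected: 0, desired: 1, ordering: .acquiring).exchanged {
        // Spin.
    }
    storage.append(newItem)
    // Release: nothing before this point may sink below the store.
    canAccess.store(0, ordering: .releasing)
}
```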