I was under the impression that dispatch_once relies on a static once token (memory that has remained 0 since process start).
That's why static initializers are thread safe in Swift and lazy vars are not. It's just much more expensive to make instance member access thread safe.
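A quick sketch of that difference (the counter and all names here are mine, not from the thread): a static let initializer runs exactly once even when the first accesses race, because the compiler guards it with a one-time token, while a lazy var has no such guard.

```swift
import Foundation

final class Expensive {
    // Not itself thread-safe, but only mutated inside the static-let
    // initializer below, which the runtime runs exactly once.
    nonisolated(unsafe) static var initCount = 0

    // Safe: the compiler emits a one-time token (swift_once) for this.
    static let shared: Expensive = {
        initCount += 1
        return Expensive()
    }()

    // Unsafe under concurrency: two threads can both observe
    // "not yet initialized" and both run the initializer expression.
    lazy var cache: [Int] = (0..<1_000).map { $0 * $0 }
}

// Eight threads race to be the first accessor; the static-let
// initializer still runs only once.
DispatchQueue.concurrentPerform(iterations: 8) { _ in
    _ = Expensive.shared
}
assert(Expensive.initCount == 1)
```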
There is no requirement for the once token to be static. It can be part of the object's memory. But it must be possible to write to it bypassing exclusivity checks, and that can be tricky to achieve in a macro or a property wrapper, though probably not impossible. E.g. Mutex does this using _Cell under the hood.
The predicate must point to a variable stored in global or static scope. The result of using a predicate with automatic or dynamic storage (including Objective-C instance variables) is undefined.
I'm not sure if I remember correctly, but I think the synchronization mechanism used in dispatch_once relies on the fact that the memory at the token's address changes from zero to nonzero only once in the lifetime of the process. That can only be guaranteed with static storage duration (heap memory may be reused, so a heap-allocated token offers no such guarantee).
I should mention that Swift already has a heap-based one-time-initialization type, AtomicLazyReference, but it only supports the latest OS versions (and only class objects).
You can see from its example that it can completely replace lazy let, but it is very inelegant and has restrictions (for reasons I don't understand, only classes are supported).
import Synchronization

class Image {
    let _histogram = AtomicLazyReference<Histogram>()

    // This is safe to call concurrently from multiple threads.
    var atomicLazyHistogram: Histogram {
        if let histogram = _histogram.load() { return histogram }

        // Note that code here may run concurrently on multiple
        // threads, but only one of them will get to succeed
        // setting the reference.
        let histogram = ...
        return _histogram.storeIfNil(histogram)
    }
}
This should be implemented using a macro, but AtomicLazyReference has too many restrictions, making it impossible for a single @lazy let/var to cover all cases.
One difference of the AtomicLazyReference approach is that it won't stop concurrent calls to the initializer: it's a race where the first initializer to reach the finish line sets the value. That's fine for some cases, but it could be problematic if the initializer has side effects or consumes resources.
You'll have to make other threads wait for the initializer to finish if you want the same behavior as static let. Here's the same example using Mutex to get rid of concurrent initialization:
import Synchronization

class Image {
    let _histogram = AtomicLazyReference<Histogram>()
    let _histogramInitMutex = Mutex<Void>(())

    // This is safe to call concurrently from multiple threads.
    var atomicLazyHistogram: Histogram {
        if let histogram = _histogram.load() { return histogram }

        // Code here may run concurrently on multiple threads.
        return _histogramInitMutex.withLock { _ in
            // No more concurrency here while under the lock.
            // 1. Check again whether the value was set while
            //    waiting for the lock.
            if let histogram = _histogram.load() { return histogram }
            // 2. Run the initializer.
            let histogram = ...
            return _histogram.storeIfNil(histogram)
        }
    }
}
Edit: note that this implementation will deadlock if it calls itself recursively through the initializer.
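For completeness, here is a sketch (my own, not from the thread) of how the blocking variant can generalize beyond classes: instead of pairing AtomicLazyReference with a Mutex<Void>, the Mutex itself can store the optional value, which works for any Sendable type at the cost of taking the lock on every read (no lock-free fast path). It deadlocks on recursive initialization just like the code above.

```swift
import Synchronization  // Swift 6 standard library module; latest OSes only

// One-time blocking initialization for arbitrary Sendable values.
final class LazyBox<Value: Sendable>: Sendable {
    private let state: Mutex<Value?>
    private let makeValue: @Sendable () -> Value

    init(_ makeValue: @escaping @Sendable () -> Value) {
        self.state = Mutex(nil)
        self.makeValue = makeValue
    }

    // The first caller runs the initializer; concurrent callers block
    // on the lock and then see the cached value.
    var value: Value {
        state.withLock { slot in
            if let cached = slot { return cached }
            let fresh = makeValue()
            slot = fresh
            return fresh
        }
    }
}

let box = LazyBox { [1, 2, 3].reduce(0, +) }
assert(box.value == 6)  // initializer ran
assert(box.value == 6)  // cached thereafter
```

Unlike AtomicLazyReference, a loser's half-built value never exists here, so initializers with side effects are safe; the trade-off is that every read pays for the lock acquisition.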