I want to run the two threads in a strict interleave order: 1st, 2nd, 1st, 2nd, ...
Below works:
let mutex = NSLock()
var isFirst: Bool = true

Thread {
    while true {
        while !(mutex.withLock { isFirst }) {
            // busy waiting
        }
        print("first")
        mutex.withLock {
            isFirst = false
        }
    }
}.start()

Thread {
    while true {
        while (mutex.withLock { isFirst }) {
            // busy waiting
        }
        print("second")
        mutex.withLock {
            isFirst = true
        }
    }
}.start()

RunLoop.current.run(until: .distantFuture)
but is obviously bad because of the busy waiting.
I tried using a condition variable and a semaphore instead, but the result was neither correct nor concise nor Swifty.
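For reference, here is the textbook semaphore version of this ping-pong, with one semaphore per thread (a sketch of my own; the `firstTurn`/`secondTurn` names are made up, not from the code above):

```swift
import Foundation

// Two semaphores, one per thread. firstTurn starts at 1 so the first
// thread runs immediately; secondTurn starts at 0 so the second blocks.
let firstTurn = DispatchSemaphore(value: 1)
let secondTurn = DispatchSemaphore(value: 0)

Thread {
    while true {
        firstTurn.wait()     // block (no spinning) until it's our turn
        print("first")
        secondTurn.signal()  // hand control to the other thread
    }
}.start()

Thread {
    while true {
        secondTurn.wait()
        print("second")
        firstTurn.signal()
    }
}.start()

RunLoop.current.run(until: .distantFuture)
```

Each thread blocks in `wait()` instead of spinning, and the strict 1-2-1-2 order falls out of the initial counts.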
It pushes to the main queue, but queues are not threads, so what is the intention of pushing to the main queue?
Edit: oh, it looks like the main queue always runs on the main thread.
Anyway, a helper function I've seen in the past for something like this looked roughly like:
func block(complete: () -> Void) {
    // only the main thread may run the completion handler
    if Thread.current.isMainThread {
        complete()
    } else {
        block(complete: complete)
    }
}
Maybe combined with something like:
func start() {
    let sema = DispatchSemaphore(value: 0)
    DispatchQueue.global().async {
        // only the main thread can signal
        block { sema.signal() }
    }
    while true {
        if sema.wait(timeout: .now() + 0.0001) == .timedOut { continue }
        break // only a non-main thread can do this
    }
    // figure out which thread broke out and repeat
}
Yes, either two separate threads or two different dispatch queues, executing in the interlocked manner.
Please ignore the DispatchQueue.main.sync calls above (they are there for something else); I've commented them out in the code above to make it less confusing.
I ask because the simplest answer I can think of is to have one thread that pulls work items from two (threadsafe, non-Dispatch) queues alternately, rather than doing a cross-thread handshake. How applicable this is depends on what you’re actually trying to do.
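To illustrate the single-thread alternative (a hypothetical sketch of my own; `WorkItem` and the queue names are made up): one worker thread draining two plain arrays alternately, guarded by a single lock.

```swift
import Foundation

typealias WorkItem = () -> Void

let lock = NSLock()
var firstQueue: [WorkItem] = []
var secondQueue: [WorkItem] = []

Thread {
    var takeFirst = true
    while true {
        // Pop from whichever queue's turn it is, under the lock.
        let item: WorkItem? = lock.withLock {
            if takeFirst {
                return firstQueue.isEmpty ? nil : firstQueue.removeFirst()
            } else {
                return secondQueue.isEmpty ? nil : secondQueue.removeFirst()
            }
        }
        if let item {
            item()
            takeFirst.toggle()  // strict alternation between the two sources
        } else {
            // A production version would block on an NSCondition instead of polling.
            Thread.sleep(forTimeInterval: 0.001)
        }
    }
}.start()

// Producers (from any thread) just append:
lock.withLock {
    firstQueue.append { print("first") }
    secondQueue.append { print("second") }
}
```

No cross-thread handshake is needed because only one thread ever executes the work items; the alternation is just local control flow.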
No, as I said it must be either two separate threads or two different dispatch queues. They will somehow "ping-pong the beachball of control" from one to another (mutex? semaphore?)
Having said that... If there's a magic solution that could make it work in a single thread mode I'd like to know that! Pseudocode:
func first() {
    // does something long
    // calls yieldToSecond() every now and then
}

func second() {
    // does something long
    // calls yieldToFirst() every now and then
}
Your setup describes two threads performing cooperative multitasking. When multiple threads have to communicate with each other, a condition lock might be the best synchronization idiom:
let lock = NSConditionLock(condition: 1)

Thread {
    while true {
        lock.lock(whenCondition: 1)
        print("first")
        lock.unlock(withCondition: 2)
    }
}.start()

Thread {
    while true {
        lock.lock(whenCondition: 2)
        print("second")
        lock.unlock(withCondition: 1)
    }
}.start()
I implemented the code in terms of "yield", although the API of this yield is a bit weird:
yield takes two parameters: the current thread's identifier and the other thread's identifier.
The other-thread parameter is optional; the first time a thread calls yield it has to pass nil there.
I don't see how to make this API less awkward.
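One way it could be made less awkward (a sketch of my own, not from the thread; the `PingPong`/`Handle` names are made up): hide the condition bookkeeping, including the first-call-passes-nil rule, inside a small handle object, so each thread just calls `yield()` with no parameters.

```swift
import Foundation

// Each side gets a Handle that remembers its own condition and whether
// it has yielded before, replacing the optional "to" parameter.
final class PingPong {
    final class Handle {
        private let lock: NSConditionLock
        private let mine: Int
        private let other: Int
        private var started = false

        fileprivate init(lock: NSConditionLock, mine: Int, other: Int) {
            self.lock = lock
            self.mine = mine
            self.other = other
        }

        func yield() {
            // Skip the unlock on the very first call, exactly like
            // passing nil for "to" in the original API.
            if started { lock.unlock(withCondition: other) }
            started = true
            lock.lock(whenCondition: mine)
        }
    }

    let first: Handle
    let second: Handle

    init() {
        let lock = NSConditionLock(condition: 0)
        first = Handle(lock: lock, mine: 0, other: 1)
        second = Handle(lock: lock, mine: 1, other: 0)
    }
}

// Usage: each thread grabs its handle and calls yield() in its loop.
let pp = PingPong()
Thread {
    let me = pp.first
    while true {
        me.yield()
        print("first")
    }
}.start()
Thread {
    let me = pp.second
    while true {
        me.yield()
        print("second")
    }
}.start()
```

The semantics are the same as the yield(from:to:) version below; only the bookkeeping moved out of the call sites.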
Full test code:
let lock = NSConditionLock(condition: 0)
var progress = 0

func yield(from: Int, to: Int?) {
    if let to {
        lock.unlock(withCondition: to)
    }
    lock.lock(whenCondition: from)
}

Thread {
    var other: Int?
    while true {
        yield(from: 0, to: other)
        other = 1
        printProgress()
    }
}.start()

Thread {
    var other: Int?
    while true {
        yield(from: 1, to: other)
        other = 0
        printProgress()
    }
}.start()

func printProgress() {
    progress += 1
    if (progress % 1_000_000) == 0 {
        print(progress)
    }
}

RunLoop.current.run(until: .distantFuture)
Note that printProgress uses the global progress variable without any synchronisation (lock, atomic, or otherwise). This is safe because the two threads run in a serialised manner.
This is not generally sufficient! Different threads may run on different cores, which have different caches that can get out of sync. You will generally only see this on ARM architectures, which provide weaker memory-ordering guarantees than x86. In this case, however, you have effectively protected the progress variable with the same lock you're using as a condition, so it works out.
I’m not sure you can. Pretty much all synchronization primitives will issue a full memory barrier.
Edit: actually, I take that back. Apparently it's idiomatic to issue a barrier only in the inner synchronization domain ("same CPU cluster", roughly speaking), not the full system.